khafra 3 days ago

> Generalizing your experience to everyone else's betrays a lack of imagination.

One guy is generalizing from "they don't work for me" to "they don't work for anyone."

The other one is saying "they do work for me, therefore they do work for some people."

Note that the second of these is a logically valid generalization. Note also that it agrees with folks such as Tim Gowers, who work on novel and hard problems.
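To put the asymmetry in predicate-logic terms (a minimal sketch, with $W(x)$ standing for "LLMs work for person $x$", a label introduced here just for illustration):

\[
\frac{W(a)}{\exists x\, W(x)} \;\text{(existential introduction, valid)}
\qquad
\frac{\lnot W(a)}{\forall x\, \lnot W(x)} \;\text{(not valid from a single case)}
\]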

dns_snek 2 days ago | parent [-]

No, that's decidedly not what is happening here.

One is saying "I've seen an LLM spectacularly fail at basic reasoning enough times to know that LLMs don't have a general ability to think" (but they can sometimes reproduce the appearance of doing so).

The other is trying to generalize "I've seen LLMs produce convincing thought processes therefore LLMs have the general ability to think" (and not just occasionally reproduce the appearance of doing so).

And indeed, only one of these is a valid generalization.

MrScruff 2 days ago | parent | next [-]

When we say "think" in this context, do we just mean generalize? LLMs clearly generalize (you can give one a problem that is not exactly in it's training data and it can solve it), but perhaps not to the extent a human can. But then we're talking about degrees. If it was able to generalize at a higher level of abstraction maybe more people would regard it as "thinking".

dns_snek 2 days ago | parent [-]

I meant it in the same way the previous commenter did:

> Having seen LLMs so many times produce incoherent, nonsensical and invalid chains of reasoning... LLMs are little more than RNGs. They are the tea leaves and you read whatever you want into them.

Of course LLMs are capable of generating solutions that aren't in their training data sets, but they don't arrive at those solutions through any sort of rigorous reasoning. This means that while their solutions can be impressive at times, they're not reliable: they go down wrong paths they can never get out of, and they become less reliable the more autonomy they're given.

dagss 2 days ago | parent | next [-]

It's rather seldom that humans arrive at solutions through rigorous reasoning. The word "think" doesn't mean "rigorous reasoning" in everyday language. I'm sure 99% of human decisions are pattern matching on past experience.

Even when mathematicians do engage in rigorous reasoning, they spend years "training" first, building up experience to pattern-match from.

Workaccount2 2 days ago | parent | prev | next [-]

I have been on a crusade for about a year now to get people to share chats where SOTA LLMs have failed spectacularly to produce coherent, good information. Anything with heavy hallucinations and outright bad information.

So far, all I have gotten are data-outside-the-knowledge-cutoff fails (by far the most common) and technically-wrong-information fails (Hawsmer House instead of Hosmer House).

I thought maybe I had hit on something with the recent BBC study about not trusting LLM output, but they used second-shelf/older mid-tier models for their tests. Top LLMs correctly answered their test prompts.

I'm still holding out for one of those totally off-the-rails Google AI Overviews hallucinations showing up in a top-shelf model.

MrScruff 2 days ago | parent | prev [-]

Sure, and I’ve seen the same. But I’ve also seen the degree to which they do that decrease rapidly over time, so if that trend continues, would your opinion change?

I don’t think there’s any point in comparing to human intelligence when assessing machine intelligence; there’s zero reason to think it would have similar qualities. It’s quite clear that for the foreseeable future it will be far below human intelligence in many areas, while already exceeding humans in some areas that we regard as signs of intelligence.

sdenton4 2 days ago | parent | prev [-]

s/LLM/human/

dns_snek 2 days ago | parent [-]

Clever. Yes, humans can be terrible at reasoning too, but in any half-decent technical workplace it's rare for people to fail to apply logic as often, and in ways as frustrating to deal with, as LLMs do. And anyone who did would be fired.

I can't say I remember a single coworker who would fit this description, though many were frustrating to deal with for other reasons, of course.
