The Future of Everything Is Lies, I Guess (aphyr.com)
157 points by pabs3 4 hours ago | 123 comments
drob518 22 minutes ago | parent | next [-]

> It remains unclear whether continuing to throw vast quantities of silicon and ever-bigger corpuses at the current generation of models will lead to human-equivalent capabilities. Massive increases in training costs and parameter count seem to be yielding diminishing returns. Or maybe this effect is illusory. Mysteries!

I'm not even sure whether this is possible. The current corpus used for training includes virtually all known material. If we make it illegal for these companies to use copyrighted content without remuneration, either the task gets very expensive indeed, or the corpus shrinks. We can certainly make the models larger, with more and more parameters, subject only to silicon's ability to give us more transistors for RAM density and GPU parallelism. But it honestly feels like, without another "Attention Is All You Need"-level breakthrough, we're starting to see the end of the runway.
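
To put rough numbers on the diminishing returns: the Chinchilla-style scaling fit (Hoffmann et al., 2022) has loss falling as a power law in parameters and tokens, so each order of magnitude buys less. A sketch, with the published constants quoted from memory and best treated as approximate:

    # Chinchilla-style scaling fit: loss = E + A/N^alpha + B/D^beta,
    # where N is parameter count and D is training tokens. Constants
    # are the published fit, quoted from memory -- approximate.
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

    def loss(n_params: float, n_tokens: float) -> float:
        return E + A / n_params**alpha + B / n_tokens**beta

    # Each 10x of parameters-plus-data buys a shrinking improvement:
    for scale in (1e9, 1e10, 1e11, 1e12):
        print(f"N=D={scale:.0e}: loss ~ {loss(scale, scale):.3f}")
    # improvement per decade of scale: ~0.78, ~0.40, ~0.20 -- shrinking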

xmprt 7 minutes ago | parent | next [-]

I see a lot of researchers working on newer ideas so I wouldn't be surprised if we get a breakthrough in 5-10 years. After all, the gap between AlexNet and Attention is All You Need was only 6 years. And then Scaling Laws was about 3-4 years after that. It might seem like not much progress is being made but I think that's in part because AI labs are extremely secretive now when ideas are worth billions (and in the right hands, potentially more).

Of course 5-10 years is a long time to bang our heads against the wall with untenable costs but I don't know if we can solve our way out of that problem.

embedding-shape 17 minutes ago | parent | prev | next [-]

> I’m not even sure whether this is possible.

Based on what's happened so far, maybe. That's exactly how we got to the current iteration back in 2022/2023: quite literally, "let's see what happens when we throw an enormous amount of data at them during training" worked up to a point, and then post-training took over, which is where the labs currently differ.

htrp 10 minutes ago | parent | prev [-]

We pay people to create more high-quality tokens (Mercor, Turing), which are then fed into data-generating processes (synthetic data) to create even more tokens to train on.

danieltanfh95 36 minutes ago | parent | prev | next [-]

I think the discussion has to be more nuanced than this. "LLMs still can't do X, so they're idiots" is a bad line of thought. LLMs with harnesses are clearly capable of engaging with logical problems that only need text. LLMs are not there yet with images, but we are improving with UI work and access to tools like Figma. LLMs are clearly unable to propose new, creative solutions to problems they have never seen before.

throwaway27448 34 minutes ago | parent | next [-]

> LLMs with harnesses are clearly capable of engaging with logical problems that only need text.

To some extent. It's not clear where specifically the boundaries are, but it seems to fail to approach problems in ways that aren't embedded in the training set. I certainly would not put money on it solving an arbitrary logical problem.

__alexs 29 minutes ago | parent [-]

Solving arbitrary logical problems seems to be equivalent to solving the halting problem so you are probably wise not to make that bet.

senko 29 minutes ago | parent | prev [-]

> LLMs are not there yet with images

https://genai-showdown.specr.net/image-editing

There's been a lot of progress there; it's just that the LLM that's best for, say, coding isn't also going to be the best for image editing.

dsign 2 minutes ago | parent | prev | next [-]

> At the same time, ML models are idiots. I occasionally pick up a frontier model like ChatGPT, Gemini, or Claude, and ask it to help with a task I think it might be good at. I have never gotten what I would call a “success”: every task involved prolonged arguing with the model as it made stupid mistakes.

I have a ton of skepticism built in when interacting with LLMs, and very good muscles for rolling my eyes, so I barely notice when I shrug off a bad answer and make a derogatory inner remark about the "idiots". But the truth is that, for such a "stochastic parrot", LLMs are incredibly useful. And when was the last time we stopped perfecting something we thought useful and valuable? When was the last time our attempts were so perfectly futile that we stopped them, invented stories about why it was impossible, and made it a social taboo met with derision, scorn, and even ostracism? To my knowledge, in all of known human history, we have done that exactly once, and it was millennia ago.

stickfigure an hour ago | parent | prev | next [-]

I think it's too early to declare the Turing test passed. You just need to have a conversation long enough to exhaust the context window. Less than that, since response quality degrades long before you hit hard window limits. Even with compaction.

Neuroplasticity is hard to simulate in a few hundred thousand tokens.

criley2 39 minutes ago | parent [-]

For a Turing test as rigorous as the one you present, I believe many (or even most) humans would also fail.

How many humans seriously have the attention span to have a million "token" conversation with someone else and get every detail perfect without misremembering a single thing?

dairem 2 minutes ago | parent | next [-]

Doesn't the Turing test require a human too, to be compared to the AI?

stickfigure 15 minutes ago | parent | prev | next [-]

Response quality degrades long before you hit a million tokens.

But sure, let's say it doesn't. If you interact with someone day after day, you'll eventually hit a million tokens. Add some audio or images and you will exhaust the context much much faster.

However, I'll grant you that Turing's original imitation game (text only, human typist, five minutes) is probably pretty close, and that's impressive enough to call intelligence (of a sort). Though modern LLMs tend to manifest obvious dead giveaways like "you're absolutely right!"

nine_k 31 minutes ago | parent | prev [-]

Context window exhaustion does not look like mere forgetfulness, though; it looks more like a loss of general coherence, like getting drunk.

beders 32 minutes ago | parent | prev | next [-]

Thank you for putting it so succinctly.

I keep explaining to my peers, friends, and family that what is actually happening inside an LLM has nothing to do with consciousness or agency, and that the term AI is just completely overloaded right now.

rudhdb773b 4 minutes ago | parent | next [-]

> what is actually happening inside an LLM has nothing to do with consciousness or agency

What makes you think natural brains are doing something so different from LLMs?

erichocean 4 minutes ago | parent | prev [-]

AI is exactly the right term: the machines can do "intelligence", and they do so artificially.

Just like we have machines that can do "math", and they do so artificially.

Or "logic", and they do so artificially.

I assume we'll drop the "artificial" part in my lifetime, since there's nothing truly artificial about it (just like math and logic): that's just how intelligence works.

dwallin an hour ago | parent | prev | next [-]

Some people point at LLMs confabulating, as if this wasn’t something humans are already widely known for doing.

I consider it highly plausible that confabulation is inherent to scaling intelligence. To run computation on data whose dimensionality makes that computation infeasible, you will most likely need to create a lower-dimensional representation and compute on that. Collapsing the dimensionality is going to be lossy, which means there will be gaps between what the model thinks reality is and what it actually is.
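
A toy sketch of that lossiness, with PCA standing in for whatever compressed representation a model actually learns (data and dimensions invented for illustration):

    # Project high-dimensional data down, reconstruct it, and measure
    # what the round trip destroyed. The gap is irreducible for any
    # lossy projection -- the "confabulation" budget, in this analogy.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 512))       # 1000 samples, 512 dims

    pca = PCA(n_components=32)             # collapse 512 dims to 32
    X_back = pca.inverse_transform(pca.fit_transform(X))

    print(f"mean squared reconstruction error: {np.mean((X - X_back) ** 2):.3f}")
    print(f"variance retained: {pca.explained_variance_ratio_.sum():.1%}")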

n4r9 an hour ago | parent | next [-]

The concern for me about LLMs confabulating is not that humans don't do it. It's that the massive scale at which LLMs will inevitably be deployed makes even the smallest confabulation extremely risky.

NiloCK 26 minutes ago | parent [-]

I don't understand this. Many small errors distributed across a large deployment sounds a lot like the normal failure mode of error-prone humans / cogs / whatevers distributed over a wide deployment.

GolfPopper a minute ago | parent | next [-]

I have yet to see a comparison of human vs. LLM confabulation errors at scale.

"Many small errors" makes a presumption about LLM confabulation/hallucination that seems unwarranted. Pre-LLM humans (and our computers) have managed vast nuclear arsenals, bioweapons research, and ubiquitous global transport - as a few examples - without any catastrophic mistakes, so far. What can we reasonably expect as a likely worst case scenario if LLMs replacing all the relevant expertise and execution?

xmprt 3 minutes ago | parent | prev [-]

There's a difference between 1,000 diverse humans with varied traits making errors that should cancel out because of the law of large numbers, and 10 AIs with the same training data making errors that would likely correlate and compound upon each other.

root_axis 2 minutes ago | parent | prev | next [-]

> Some people point at LLMs confabulating, as if this wasn’t something humans are already widely known for doing.

I think we need to start rejecting anthropomorphic statements like this out of hand. They are lazy, typically wrong, and are always delivered as a dismissive defense of LLM failure modes. Anything can be anthropomorphized, and it's always problematic to do so - that's why the word exists.

This rhetorical technique always follows the form of "this LLM behavior can be analogized in terms of some human behavior, thus it follows that LLMs are human-like" which then opens the door to unbounded speculation that draws on arbitrary aspects of human nature and biology to justify technical reasoning.

In this case, you've deliberately conflated a technical term of art (LLM confabulation) with the concept of human memory confabulation and used that as a foundation to argue that confabulation is thus inherent to intelligence. There is a lot that's wrong with this reasoning, but the most obvious problem is that it's a massive category error. "Confabulation" in LLMs and "confabulation" in humans have basically nothing in common; they are comparable only in an extremely superficial sense. To then go on to suggest that confabulation might be inherent to intelligence isn't even really a coherent argument, because you've created ambiguity in the meaning of the word confabulate.

bee_rider 17 minutes ago | parent | prev | next [-]

We shouldn’t try to build a worse version of a human. We should try to build a better compiler and encyclopedia.

Frieren an hour ago | parent | prev | next [-]

> Some people point at LLMs confabulating

No. LLMs do not confabulate; they bullshit. There is a big difference. AIs do not care, cannot care, have no capacity to care about the output. Token strings in, token strings out. Even if they have all the data perfectly recorded, they will still fail to use it for a coherent output.

> Collapsing the dimensionality is going to be lossy, which means it will have gaps between what it thinks is the reality and what is.

Confabulation has to do with the degradation of biological processes and information storage.

There is no equivalent in an LLM. Once the data is recorded, it will be recalled exactly the same, down to the bit. An LLM representation is immutable. You can download a model a thousand times, run it for 10 years, etc., and the data is the same. The closest you get is if you store the data on a faulty disk, but that is not why LLM output is so awful; that would be a trivial problem to solve with current technology (like having a RAID and a few checksums).
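
To illustrate the point about bit-exact recall, a minimal sketch (the file name is hypothetical):

    # Model weights are immutable bytes: hashing a weights file yields
    # the same digest on every download and every run. Only a storage
    # fault could change it, and a checksum like this catches that.
    import hashlib

    def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    print(sha256_of("model.safetensors"))  # hypothetical weights file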

stronglikedan 41 minutes ago | parent | next [-]

I don't even think they bullshit, since that requires conscious effort, which they do not and cannot possess. They just interpret things incorrectly sometimes, like any of us meatbags.

thayne 24 minutes ago | parent [-]

They make incorrect predictions of text to respond to prompts.

The neat thing about LLMs is that they are very general models that can be used for lots of different things. The downside is that they often make incorrect predictions, and what's worse, it isn't even easy to predict when they will be wrong.

knowaveragejoe an hour ago | parent | prev | next [-]

> No. LLMs do not confabulate they bullshit. There is a big difference. AIs do not care, cannot care, have not capacity to care about the output. String tokens in, string tokes out. Even if they have all the data perfectly recorded they will still fail to use it for a coherent output.

Isn't "caring" a necessary pre-requisite for bullshitting? One either bullshits because they care, or don't care, about the context.

marssaxman 39 minutes ago | parent [-]

They're presumably referring to the Harry Frankfurt definition of bullshit: "speech intended to persuade without regard for truth. The liar cares about the truth and attempts to hide it; the bullshitter doesn't care whether what they say is true or false."

simianwords an hour ago | parent | prev [-]

You seem confident. Can you get GPT-5.4 thinking to bullshit? Use a text prompt spanning 3-4 pages and let's see if it gets it wrong.

I haven't seen any counter examples, so you may give some examples to start with.

zeroonetwothree an hour ago | parent | prev | next [-]

And is that considered a feature of humans or a bug?

Is it something we want to emulate?

margalabargala an hour ago | parent [-]

The suggestion is that it is an intrinsic quality and therefore neither a feature nor a bug.

It's like saying, computation requires nonzero energy. Is that a feature or a bug? Neither, it's irrelevant, because it's a physical constant of the universe that computation will always require nonzero energy.

If confabulation is a physical constant of intelligence, then like energy per computation, all we can do is try to minimize it, while knowing it can never go to zero.
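
For what it's worth, that energy floor has a name: Landauer's limit, at least k_B * T * ln 2 joules to erase one bit. A quick back-of-the-envelope check:

    # Landauer's limit: minimum energy to erase one bit is k_B*T*ln(2).
    # Tiny at room temperature, but provably nonzero.
    import math

    k_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
    T = 300.0            # room temperature, K
    print(f"{k_B * T * math.log(2):.3e} J per bit erased")  # ~2.87e-21 J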

FloorEgg an hour ago | parent | prev | next [-]

Yes, and to me the evolution of life sure looks like an evolution of more truthful models of the universe in service of energy profit. Better model -> better predictions -> better profit.

I'm extremely skeptical that all of life evolved intelligence to be closer to truth only for us to digitize intelligence and then have the opposite happen. Makes no sense.

telephone3 an hour ago | parent [-]

My understanding is that this is the opposite of what is typically understood to be true - organisms with less truthful (more reductive/compressed) perception survive better than those with more complete perception. "Fitness beats truth."

throwaway27448 33 minutes ago | parent | prev | next [-]

Humans can be reasoned with, though, and are capable of learning.

nothinkjustai an hour ago | parent | prev | next [-]

It’s a failure mode of humans; it’s the entire mode of LLMs.

sillyfluke an hour ago | parent | prev | next [-]

If you want to call it that, I find the confabulation in LLMs extreme. That level of confabulation would most likely be diagnosed as dementia in humans.[0] Hence, it is considered a bug, not a feature, in humans as well.

Now imagine a high-skilled software engineer with dementia coding safety-critical software...

[0] https://www.medicalnewstoday.com/articles/confabulation-deme...

delusional an hour ago | parent | prev | next [-]

> Some people point at LLMs confabulating, as if this wasn’t something humans are already widely known for doing.

Are you seriously making the argument that AI "hallucinations" are comparable and interchangeable to mistakes, omissions and lies made by humans?

You understand that calling AI errors "hallucinations" and "confabulations" is a metaphor to relate them to human behavior? The technical term would be "mis-prediction", which suddenly isn't something humans ever do when talking, because we don't predict words; we communicate with intent.

AIorNot an hour ago | parent | prev [-]

Yes, see Karl Friston's free energy principle:

https://www.nature.com/articles/nrn2787

erichocean 2 minutes ago | parent | prev | next [-]

> Models do not (broadly speaking) learn over time. They can be tuned by their operators, or periodically rebuilt with new inputs or feedback from users and experts. Models also do not remember things intrinsically: when a chatbot references something you said an hour ago, it is because the entire chat history is fed to the model at every turn. Longer-term “memory” is achieved by asking the chatbot to summarize a conversation, and dumping that shorter summary into the input of every run.

This is the part of the article that will age the fastest, it's already out-of-date in labs.

nomdep 34 minutes ago | parent | prev | next [-]

"As LLMs etc. are deployed in new situations, and at new scale, there will be all kinds of changes in work, politics, art, sex, communication, and economics."

For an article five years in the making, this is what I expected it to be about. Instead, we got a ramble about how imperfect LLMs are right now.

nathell 14 minutes ago | parent | next [-]

The post is just a prelude to a 10-part article, most of which is not yet released (but will be shortly). Judging by the table of contents, the things you expected will be elaborated on in subsequent parts.

nomdep 2 minutes ago | parent [-]

That changes things. I missed that the table of contents was for future articles, my bad.

52-6F-62 20 minutes ago | parent | prev [-]

> Instead, we got a ramble about how imperfect LLMs are right now.

I wager this is a point that needs to be beaten into the common psyche. After all, it has been sold not as an imperfect tool but as the solution to all of our problems in every field forever. That's why these companies need billions upon billions of dollars of public subsidies and investments that would otherwise find their way to more pragmatic ends.

bstsb an hour ago | parent | prev | next [-]

If you can't access the page because of region blocks:

https://archive.ph/I5cAE

_dwt an hour ago | parent | prev | next [-]

I have a question for all the "humans make those mistakes too" people in this thread, and elsewhere: have you ever read, or at least skimmed a summary of, "The Origin of Consciousness in the Breakdown of the Bicameral Mind"? Did you say "yeah, that sounds right"? Do you feel that your consciousness is primarily a linguistic phenomenon?

I am not trying to be snarky; I used to think that intelligence was intrinsically tied to or perhaps identical with language, and found deep and esoteric meaning in religious texts related to this (i.e. "in the beginning was the Word"; logos as soul as language-virus riding on meat substrate).

The last ~three years of LLM deployment have disabused me of this notion almost entirely, and I don't mean in a "God of the gaps" last-resort sort of way. I mean: I see the output of a purely-language-based "intelligence", and while I agree humans can make similar mistakes/confabulations, I overwhelmingly feel that there is no "there" there. Even the dumbest human has a continuity, a theory of the world, an "object permanence"... I'm struggling to find the right description, but I believe there is more than language manipulation to intelligence.

(I know this is tangential to the article, which is excellent as the author's usually are; I admire his restraint. However, I see exemplars of this take all over the thread so: why not here?)

xandrius 34 minutes ago | parent | next [-]

It feels like you probably bought too deeply into the LLM bandwagon.

An LLM is a statistical next-token machine trained on everything people have written and said. It blends texts together in a way that still makes sense (or no sense at all).

Imagine you made a super simple program that answered yes/no to any question by generating a random number. It would get things right 50% of the time. You could then fine-tune it to say yes more often for certain keywords and no for others.

Just with a bunch of hardcoded paths, you'd probably fool someone into thinking this AI has superhuman predictive capabilities.

This is what it feels like is happening. Sure, it's not that simple, but you can code a basic GPT in an afternoon.
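
The thought experiment above, as a runnable sketch (keywords and weights invented for illustration):

    # A "yes/no oracle": a coin flip plus a few hardcoded keyword
    # biases. Keywords and probabilities are invented for illustration.
    import random

    BIASES = {"safe": 0.8, "free": 0.7, "crash": 0.2, "fail": 0.2}

    def answer(question: str) -> str:
        p_yes = 0.5  # baseline coin flip: right ~50% of the time
        for word, p in BIASES.items():
            if word in question.lower():
                p_yes = p  # crude hardcoded path overrides the coin
        return "yes" if random.random() < p_yes else "no"

    print(answer("Is this library safe to use in production?"))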

simianwords 32 minutes ago | parent [-]

If it were not "just a statistical next token machine", how different would it behave?

Can you find an example and test it out?

xandrius 14 minutes ago | parent [-]

Wait, you're asking me to find and produce an example of a feasible and better alternative to LLMs when they are the current forefront of AI technology?

Anyway, just to play along: if it weren't just a statistical next-token machine, the same question would always have the same answer and would not be affected by a "temperature" value.
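
For reference, "temperature" is just a divisor applied to the next-token scores before they become probabilities. A minimal sketch with made-up logits:

    # Temperature sampling: divide logits by T before the softmax.
    # Low T -> near-deterministic argmax; high T -> flatter, more random.
    import numpy as np

    def sample(logits: np.ndarray, temperature: float, rng) -> int:
        scaled = logits / max(temperature, 1e-8)
        probs = np.exp(scaled - scaled.max())  # numerically stable softmax
        probs /= probs.sum()
        return rng.choice(len(logits), p=probs)

    rng = np.random.default_rng(0)
    logits = np.array([2.0, 1.0, 0.5])  # made-up next-token scores
    for T in (0.1, 1.0, 2.0):
        picks = [sample(logits, T, rng) for _ in range(1000)]
        print(T, np.bincount(picks, minlength=3) / 1000)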

simianwords 12 minutes ago | parent [-]

That's also how humans behave... I don't see how non-determinism tells me anything.

My question was a bit different: if it were not just a statistical next-token predictor, would you expect it to answer hard questions? Or something like that. What's the threshold of questions you want it to answer accurately?

nine_k 35 minutes ago | parent | prev | next [-]

If you look at different ancient traditions, you will notice how they struggle with the limitations of language, with its inability to represent certain things that are not just crucial for understanding the world, but also are even somehow communicable. Buddhists dug into that in a very analytical, articulate way, for instance.

Another perspective: cetaceans are considered to be as conscious as humans, but all attempts to interpret their communication as a language have failed so far. They can be taught simple languages to communicate with humans, as can chimps. But apparently that's not how they process the world inside.

gbgarbeb 10 minutes ago | parent [-]

You're a little out of date. Cetaceans communicate images to each other in the form of ultrasonic chirps. They chirp, they hear a reflection, and they repeat the reflection.

stavros 33 minutes ago | parent | prev | next [-]

I think there are two types of discussions, when it comes to LLMs: Some people talk about whether LLMs are "human" and some people talk about whether LLMs are "useful" (ie they perform specific cognitive tasks at least as well as humans).

Both of those aspects are called "intelligence", and thus these two groups cannot understand each other.

delusional 39 minutes ago | parent | prev [-]

> I'm struggling to find the right description

I think you're circling the concept of a "soul". It is the reason that, in non-communicative disabled people, we still see a life.

I've wanted to make an art piece. It would be a chatbox claiming to connect you to the first real intelligence, but that intelligence would be non-communicative. I'd assure you that it is the most intelligent being, that it had a soul, but that it just couldn't write back.

Intelligence and soul are not purely measurable phenomena. A man can do nothing but stupid things, say nothing but outright lies, and still be the most intelligent person. Intelligence is within.

Kuyawa an hour ago | parent | prev | next [-]

And the past too, if we've been paying attention

embedding-shape an hour ago | parent | prev | next [-]

> In general, ML promises to be profoundly weird. Buckle up.

I love that it ends on such a positive note. Even though it's generally a critical article, at least it's well reasoned and not utterly hyping or dooming.

Thanks yet again Kyle!

nisegami 15 minutes ago | parent | prev | next [-]

Here's the opening paragraph of chapter 2 with "people" subbed in for the terms referring to AI/models/etc.

"People are chaotic, both in isolation and when working with other people or with systems. Their outputs are difficult to predict, and they exhibit surprising sensitivity to initial conditions. This sensitivity makes them vulnerable to covert attacks. Chaos does not mean people are completely unstable; most people behave roughly like anyone else. Since people produce plausible output, errors can be difficult to detect. This suggests that human systems are ill-suited where verification is difficult or correctness is key. Using people to write code (or other outputs) may make systems more complex, fragile, and difficult to evolve."

To me, this modified paragraph reads surprisingly plainly. The wording is off ("using people to write code") and I had to change that part about attractor behavior (although it does still apply IMO), but overall it doesn't seem like an incoherent paragraph.

This is not meant to dunk on the author, but I think it highlights the author's mindset and the gap between their expectations and reality.

busterarm 11 minutes ago | parent [-]

Aren't you also making a large part of the author's point for him by effectively equating LLMs with people here and comparing on outputs?

Plausibly your text looks equivalent but we all (should) have the context to know better.

PaulDavisThe1st an hour ago | parent | prev | next [-]

While the economic, energy, political and social issues associated with LLMs ought to be enough to nix the adoption that their boosters are seeking ...

... I still think there is an interesting question to be investigated about whether, by building immensely complex models of language, one of our primary ways that we interact with, reason about and discuss the world, we may not have accidentally built something with properties quite different than might be guessed from the (otherwise excellent) description of how they work in TFA.

I agree with pretty much everything in TFA, so this is supplemental to the points made there, not contesting them or trying to replace them.

perching_aix an hour ago | parent | prev | next [-]

This is like all the usual anti-LLM talking points and sentiments fused together.

Doesn't it get boring?

I like using these models a lot more than I like hearing people talk about them, pro or contra. Just slop about slop. And the discussions being artisanal slop really doesn't make them any better.

Every time I hear some variation of "bullshitting machines" or "plagiarizing machines", my eyes roll. Do these people think they're actually onto something? I've been seeing these talking points for literal years. For people who complain about no original thoughts, these sure are some tired ones.

masfuerte 42 minutes ago | parent | next [-]

Why do you insist on reading and commenting on these articles that bore you so much?

stavros 28 minutes ago | parent [-]

Because saying "this is boring, let's stop talking about it" is an opinion worth expressing.

simianwords 40 minutes ago | parent | prev | next [-]

It's the usual gibberish that throws many darts to see what sticks. Oh, LLMs steal other people's work? Check. Oh, LLMs cause ecological damage? Check. Oh, LLMs hallucinate? Check.

When you see a pattern like this, you know that it's not coming from any place of truth, but rather from ideology.

giraffe_lady 41 minutes ago | parent | prev | next [-]

"These arguments may be correct but they aren't novel" ??

simianwords 39 minutes ago | parent [-]

I don't think calling AI a bullshit machine is correct. In spirit.

stavros 26 minutes ago | parent | prev [-]

Yeah, it gets really boring. Whenever I see "slot machines" or "bullshit machines" or whatever, I just ignore the comment and move on, because it signals that it's someone in such deep denial that they've turned their brain off.

I'd much rather read articles about what LLMs can/can't do, or stuff people have built with LLMs, than read how everything LLMs touch turns to shit.

slopinthebag 31 minutes ago | parent | prev | next [-]

Great series of articles, thank you. It's exhausting reading a deluge of (often AI-generated) comments from people claiming wild things about LLMs, and it's nice to hear some sanity enter the conversation.

ambicapter an hour ago | parent | prev | next [-]

The recent article about Sam Altman described him as pretty much a compulsive liar. Would it be any surprise if his most impactful contribution to the world were a machine that compulsively lies?

embedding-shape an hour ago | parent | next [-]

How can it be that we humans hardly even agree on what "knowledge" truly is, yet this machine learning algorithm somehow "compulsively lies"? How would it even know what a lie is, and how could something lacking autonomy in the first place do anything compulsively?

quantummagic an hour ago | parent [-]

This is a good point. As much as there is too much breathless enthusiasm for AI, there is also a lot of emotionally manipulative and hyperbolic language used by skeptics. We're warned not to anthropomorphize in one breath, and then hear about AI's compulsive lying, or "hallucinations", in the next.

sph an hour ago | parent | prev [-]

He sought to create God in his own image; that's a narcissist's wet dream.

bensyverson 2 hours ago | parent | prev | next [-]

I get the frustration, but it's reductive to just call LLMs "bullshit machines" as if the models are not improving. The current flagship models are not perfect, but if you use GPT-2 for a few minutes, it's incredible how much the industry has progressed in seven years.

It's true that people don't have a good intuitive sense of what the models are good or bad at (see: counting the Rs in "strawberry"), but this is more a human limitation than a fundamental problem with the technology.

the_snooze 2 hours ago | parent | next [-]

Two things can be true at the same time: The technology has improved, and the technology in its current state still isn't fit for purpose.

I stress test commercially deployed LLMs like Gemini and Claude with trivial tasks: sports trivia, fixing recipes, explaining board game rules, etc. It works well like 95% of the time. That's fine for inconsequential things. But you'd have to be deeply irresponsible to accept that kind of error rate on things that actually matter.
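
One way to see why: per-task error compounds across chained tasks. A back-of-the-envelope sketch, assuming (simplistically) that tasks are independent:

    # If each task independently succeeds 95% of the time, a chain of
    # tasks goes wrong surprisingly fast. Independence is an assumption.
    p = 0.95
    for n in (1, 5, 10, 20):
        print(f"{n:>2} chained tasks: {p**n:.1%} chance of zero errors")
    # 20 chained tasks: ~35.8% chance of zero errors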

The most intellectually honest way to evaluate these things is how they behave now on real tasks. Not with some unfalsifiable appeal to the future of "oh, they'll fix it."

hedgehog an hour ago | parent | next [-]

The errors are also not distributed the way you'd expect from a human. The tools can synthesize a whole feature in a moderately complicated web app, including UI code, schema changes, etc., and it comes out perfectly. Then I ask for something simple, like a shopping list of windshield wipers for the cars, and that comes out wildly wrong (like the wrong number of wipers for the cars, not just the wrong parts), stuff that a ten-year-old child would have no trouble with. I work in the field, so I have a qualitative understanding of this behavior, but I think it can be extremely confusing to many people.

bensyverson an hour ago | parent | prev | next [-]

> the technology in its current state still isn't fit for purpose.

This is a broad statement that assumes we agree on the purpose.

For my purpose, which is software development, the technology has reached a level that is entirely adequate.

Meanwhile, sports trivia represents a stress test of the model's memorized world knowledge. It could work really well if you give the model a tool to look up factual information in a structured database. But this is exactly what I meant above; using the technology in a suboptimal way is a human problem, not a model problem.

the_snooze an hour ago | parent [-]

There's nothing in these models that say its purpose is software development. Their design and affordances scream out "use me for anything." The marketing certainly matches that, so do the UIs, so do the behaviors. So I take them at their word, and I see that failure modes are shockingly common even under regular use. I'm not out to break these things at all. I'm being as charitable and empirical as I can reasonably be.

If the purpose is indeed software development with review, then there's nothing stopping multi-billion-dollar companies from putting friction into these systems to direct users towards where the system is at its strongest.

nradov 41 minutes ago | parent [-]

The LLM vendors are selling tokens. Why would they put friction into selling more tokens? Caveat emptor.

jerf an hour ago | parent | prev | next [-]

One of the reasons I'm comfortable using them as coding agents is that I can and do review every line of code they generate, and those lines of code form a gate. No LLM bullshit can get through that gate except in the form of lines of code that I can examine, and even if I do let some bullshit through accidentally, the bullshit is stateless and can be excised later just like any other line of code. Or, to put it another way, the context window doesn't come along with the code, forming some huge blob of context to be carried along... the code is just the code.

That exposes me to the cases where the models are objectively wrong, and it helps keep me grounded about their utility in spaces where I can check them less well. One of the most important things you can put in your prompt is a request for sources, followed by you actually checking them out.

And one of the things the coding agents teach me is that you need to keep the AIs on a tight leash. What is the equivalent, in other domains, of them "fixing" the test to pass instead of fixing the code to pass the test? In the programming space I can run "git diff *_test.go" to ensure they didn't hack the tests when I didn't expect it. It keeps me wondering what the equivalent of that is for my non-programming questions. I have unit testing suites to verify my LLM output against; what's the equivalent in other domains? Probably some isolated domains here and there do have equivalents, but in general there isn't one. Things like completely forged graphs are completely expected, but they're hard to catch when you lack the tools or the understanding to chase down "where did this graph actually come from?"
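
That check, as a small sketch (paths and pathspec are illustrative):

    # Sketch of the guardrail above: fail if the agent touched any
    # *_test.go files in the working tree.
    import subprocess, sys

    changed = subprocess.run(
        ["git", "diff", "--name-only", "--", "*_test.go"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

    if changed:
        sys.exit(f"agent modified test files:\n{changed}")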

The success with programming can't be translated naively into domains that lack the tooling programmers built up over the years, and based on how many times the AIs bang into the guardrails the tools provide I would definitely suggest large amounts of skepticism in those domains that lack those guardrails.

nradov 44 minutes ago | parent | prev | next [-]

Which things actually matter? I think we can all agree that an LLM isn't fit for purpose to control a nuclear power plant or fly a commercial airliner. But there's a huge spectrum of things below that. If an LLM trading error causes some hedge fund to fail then so what? It's only money.

floren an hour ago | parent | prev | next [-]

Six months bro, we're still so early

simianwords an hour ago | parent | prev [-]

> I stress test commercially deployed LLMs like Gemini and Claude with trivial tasks: sports trivia, fixing recipes, explaining board game rules, etc. It works well like 95% of the time. That's fine for inconsequential things. But you'd have to be deeply irresponsible to accept that kind of error rate on things that actually matter.

95% is not my experience, and frankly it seems dishonest.

I have ChatGPT open right now; can you give me examples where it doesn't work but some other source got it right?

I have tested it against a lot of examples - it barely gets anything wrong with a text prompt that fits a few pages.

> The most intellectually honest way to evaluate these things is how they behave now on real tasks

A falsifiable way is to see how it is used in real life. There are loads of serious enterprise projects that are mostly done by LLMs. Almost all companies use AI. Either they are irresponsible or you are exaggerating.

Let's actually be intellectually honest here.

qsera 20 minutes ago | parent [-]

>95% is not my experience and frankly dishonest.

Quite frankly, this is exactly like how two people can use the same compression program on two different files and get vastly different compression ratios (because one file has a lot of redundancy and the other does not).

simianwords 19 minutes ago | parent [-]

I'm asking for a single example.

qsera 17 minutes ago | parent [-]

But why do you need an example? Isn't it pretty well understood that LLMs will have trouble responding to stuff that is underrepresented in the training data?

You just won't have any clue what that could be.

simianwords 15 minutes ago | parent [-]

Fair, so it must be easy to give an example? I have ChatGPT open with 5.4-thinking. I'm honestly curious about what you can suggest, since I have not been able to get it to bullshit easily.

qsera 6 minutes ago | parent [-]

I am not the OP, and I have only used the free version of ChatGPT. The other day I asked it something. It answered. Then I asked it to provide sources. It provided sources, and also changed its original answer. When I checked, the new answer was wrong, and the sources didn't actually contain the information I had asked for; it had hallucinated the answer as well as the sources...

Arainach 2 hours ago | parent | prev | next [-]

Whether LLMs can create correct content doesn't matter. We've already seen how they are being used and will be used.

Fake content and lies. To drive outrage. To influence elections. To distract from real crimes. To overload everyone so they're too tired to fight or to understand. To weaken the concept that anything's true so that you can say anything. Because who cares if the world dies as long as you made lots of money on the way.

danny_codes an hour ago | parent [-]

> Because who cares if the world dies as long as you made lots of money on the way.

Guiding principle of the AI industry

gdulli an hour ago | parent [-]

It's really the whole tech industry as it exists right now and AI is a victim of bad timing. If this AI had been invented 40 years ago there'd have been a lower ceiling on the damage it could do.

Another way of saying that is that capitalism is the real problem, but I was never anti-capitalist in principle, it's just gotten out of hand in the last 5-10 years. (Not that it hadn't been building to that.)

palmotea 22 minutes ago | parent [-]

> Another way of saying that is that capitalism is the real problem, but I was never anti-capitalist in principle, it's just gotten out of hand in the last 5-10 years. (Not that it hadn't been building to that.)

Capitalism is a tool and it's fine as a tool, to accomplish certain goals while subordinated to other things. Unfortunately it's turned into an ideology (to the point it's worshiped idolatrously by some), and that's where things went off the rails.

gdulli an hour ago | parent | prev | next [-]

Computer graphics have been improving for decades but the uncanny valley remains undefeated. I don't know why anyone expects a breakthrough in other areas. There's a wall we hit and we don't understand our own consciousness and effectiveness well enough to replicate it.

PaulKeeble an hour ago | parent | next [-]

In computer graphics we understand how it works; we just lack the computational power to do it in real time, but with sufficient processing we can produce realistic-looking images with physically accurate lighting. When it comes to cognition, though, it's a lot of guesswork: we haven't yet mapped out the neuron connections in a brain, and we haven't validated that it works the way popular science writing suggests. We don't understand intelligence, so all we can do is accidentally bumble into it, and that seems unlikely to just happen, especially when it's so hard to compute what we are already doing.

kritiko an hour ago | parent | prev [-]

We have credible deepfakes on demand. (To be fair, there have been deceptive photos as long as photos have existed, but the cost of automating their creation going to basically zero has a social impact)

zdragnar 2 hours ago | parent | prev | next [-]

That's not why the author calls them bullshit machines.

> One way to understand an LLM is as an improv machine. It takes a stream of tokens, like a conversation, and says “yes, and then…” This yes-and behavior is why some people call LLMs bullshit machines. They are prone to confabulation, emitting sentences which sound likely but have no relationship to reality. They treat sarcasm and fantasy credulously, misunderstand context clues, and tell people to put glue on pizza.

Yes, there have been improvements to them, but none of those improvements mitigate the core flaw of the technology. The author even acknowledges all of the improvements in the last few months.

p_stuart82 an hour ago | parent | prev | next [-]

Models are improving. The pricing already assumes they're ready for prod. That's where the fires start.

karmakaze an hour ago | parent | prev | next [-]

Bullshit is the perfect term here. Even as AIs get much better and more capable, Brandolini's law, aka the "bullshit asymmetry principle", always applies: the energy required to refute misinformation is an order of magnitude larger than that needed to produce it. Even using AIs effectively today requires a very good BS detector; some day in the future it won't.

ura_yukimitsu an hour ago | parent | prev | next [-]

Calling LLMs "bullshit machines" is a reference to a 2024 paper [1] which itself uses the concept of "bullshit" as defined in the essay/book "On Bullshit" by Harry G. Frankfurt [2]. The TL;DR is that LLMs are fundamentally bullshit machines because they are only made to generate sentences that sound plausible, but plausible does not always mean true.

[1]: https://link.springer.com/article/10.1007/s10676-024-09775-5

[2]: https://en.wikipedia.org/wiki/On_Bullshit

mcpar-land an hour ago | parent | prev | next [-]

It's not a bullshit machine because its output is bad; it's a bullshit machine because its output is literally "bullshit", as in output that is statistically likely but has no factual or reasoning basis. As the models have improved, their bullshit has become more likely to sound coherent (maybe even more likely to be "accurate"), but it is no more factual and involves no more reasoning.

4ndrewl an hour ago | parent | prev | next [-]

It doesn't matter how good the models become. They can only deal in bullshit, in the academic use of the term.

Scaevolus 2 hours ago | parent | prev | next [-]

They are bullshit machines because they do not have an internal mental model of truth like a human does. The flagship models bullshit less, but their fundamental architectures prevent truth from constraining their output.

https://philosophersmag.com/large-language-models-and-the-co...

bensyverson an hour ago | parent [-]

"Bullshit" is a human concept. LLMs do not work like the human brain, so to call their output "bullshit" is ascribing malice and intent that is simply not there. LLMs do not "think." But that does not mean they're not incredibly powerful and helpful in the right context.

slopinthebag 14 minutes ago | parent [-]

I sort of agree. In this context "bullshit" means "speech intended to persuade without regard for truth", and while it's true that LLM output is without regard for truth, it's not an entity capable of the agency to persuade, although functionally that is what it can appear like.

https://en.wikipedia.org/wiki/On_Bullshit

ajross 2 hours ago | parent | prev [-]

> it's reductive to just call LLMs "bullshit machines" as if the models are not improving

This is true, but I prefer to think of it as "It's delusional to pretend as if human beings are not bullshit machines too".

Lies are all we have. Our internal monologue is almost 100% fantasy. Even in serious pursuits, that's how it works. We make shit up and lie to ourselves, and then only later apply our hard-earned[1] skill prompts to figure out whether or not we're right about it.

How many times have the nerds here been thinking through a great new idea for a design and how clever it would be before stopping to realize "Oh wait, that won't work because of XXX, which I forgot". That's a hallucination right there!

[1] Decades of education!

kolektiv an hour ago | parent | next [-]

I'm not entirely sure I can agree, although the premise is seductive in certain ways. We do lie to ourselves, but we also have meta-cognition - we can recognise our own processes of thought. Imperfect as it may be, we have feedback loops which we can choose to use, we have heuristics we can apply, we can consciously alter our behaviour in the presence of contextual inputs, and so on.

Being wrong is not the same as a hallucination. It's a natural step on a journey to being more right. This feels a bit like Andreessen proudly stating he avoids reflection: you can act like that, but the human brain doesn't have to. LLMs have no choice in the matter.

iamjackg 2 hours ago | parent | prev | next [-]

The problem, unfortunately, is the scale. It's always scale. Humans make all the kinds of mistakes that we ascribe to LLMs, but LLMs can make them much faster and at much larger scale.

Models have gotten ridiculously better, they really have, but the scale has increased too, and I don't think we're ready to deal with the onslaught.

SkyBelow an hour ago | parent [-]

Scale is very different, but I wonder if human trust isn't the real issue. We trust technology too much as a group. We expect perfection, but we also assume perfection. This might be because the machines output confident-sounding answers and humans default to trusting confidence as an indirect measure of accuracy, but I think there is another level where people just blindly trust machines because they are so used to relying on algorithms that trend towards giving correct responses.

Even before LLMs were in the public discourse, I would have businesses ask about using AI instead of building some algorithm manually, and when I asked if they had considered the failure rate, they would return either blank stares or say that would count as a bug. To them, AI meant an algorithm just as good as one built to handle all the edge cases in business logic, but easier and faster to implement.

We can generally recognize AIs being off when they deal in our area of expertise, but there is some AI variant of Gell-Mann amnesia at play that leads us to go back to trusting AI when it gives outputs in areas where we are novices.

nyeah an hour ago | parent | prev | next [-]

"Lies are all we have."

If so, how do we distinguish between code that works and code that doesn't work? Why should we even care?

ajross 31 minutes ago | parent [-]

> If so, how do we distinguish between code that works and code that doesn't work?

Hilariously, not by using our brains, that's for sure. You have to have an external machine. We all understand that "testing" and "code review" are different processes, and that's why.

nyeah 9 minutes ago | parent [-]

Good point. We choose certain tests to perform. We choose certain test results to pay attention to. We don't just keep chatting about (reviewing) the code. We do something else.

If lies are all we have, then how is this behavior possible?

ajross a few seconds ago | parent [-]

LLMs can write and run tests though.

You're cherry-picking my little bit of wordsmithing. Obviously we aren't always wrong. I'm saying that our thought processes stem from hallucinatory connections and are routinely wrong on the first cut, just like those of an LLM.

Actually I'm going farther than that and saying that the first cut token stream out of an AI is significantly more reliable than our personal thoughts. Certainly than mine, and I like to think I'm pretty good at this stuff.

nothinkjustai 41 minutes ago | parent | prev | next [-]

So your logic is humans and LLMs are the same because humans are wrong sometimes?

ajross 30 minutes ago | parent [-]

Pretty much, yeah. Or rather, the fact that we're both reliably wrong in identifiably similar ways makes "we're more alike than different" an attractive prior to me.

nothinkjustai 17 minutes ago | parent [-]

“More alike than different” is reasonable I think, as long as we’re talking about how we have some of the same failure modes. Although the way we get there is quite different.

I’m still not a big fan of comparing humans and LLMs because LLMs lack so much of what actually makes us human. We might bullshit or be wrong because of many reasons that just don’t apply to LLMs.

AnimalMuppet an hour ago | parent | prev [-]

Humans are different. Humans - at least thoughtful humans - know the difference between knowing something and not knowing something. Humans are capable of saying "I don't know" - not just as a stream of tokens, but really understanding what that means.

ajross 28 minutes ago | parent [-]

> Humans - at least thoughtful humans - know the difference between knowing something and not knowing something.

Your no-true-scotsman clause basically falsifies that statement for me. Fine, LLMs are, at worst I guess, "non-thoughtful humans". But obviously LLMs are right an awful lot (more so than a typical human, even), and even the thoughtful make mistakes.

So yeah, to my eyes "Humans are NOT different" fits your argument better than your hypothesis.

(Also, just to be clear: LLMs also say "I don't know", all the time. They're just prompted to phrase it as a criticism of the question instead.)

josefritzishere an hour ago | parent | prev | next [-]

I appreciate the directness of calling LLMs "bullshit machines." This terminology for LLMs is well established in academic circles and is much easier for laypeople to understand than terms like "non-deterministic." I personally don't like the excessive hype about the capabilities of AI. Setting realistic expectations will drive better product adoption than carpet-bombing users with marketing.

AStrangeMorrow an hour ago | parent | next [-]

I still have mixed feelings about LLMs.

Take the example of code, though this extends to many domains: it can sometimes produce near-perfect architecture and implementation if I give it enough detail about the technical specifics and pitfalls, turning an 8-hour coding job into 1 hour of review work.

On the other hand, it can be very wrong while acting certain it is right. Just yesterday, Claude tried gaslighting me into accepting that the bug I was seeing was coming from a piece of code with already-strong guardrails, and it was adamant that the part I suspected could in no way cause the issue. Turns out I was right, but I was starting to doubt myself.

slopinthebag 9 minutes ago | parent [-]

I think over time we will find better usage patterns for these machines. Even putting a model in a position to gaslight the user seems like a complete failure of the usage model. Not critiquing you at all on this; it's how these models are marketed and what all the tooling is built around. But they are incredibly useful, and I think once we figure out how to use them better, we can minimise these downsides and make ourselves much more productive without all the failures.

Of course that won't happen until the bubble pops - companies are racing to make themselves indispensable and to completely corner certain markets and to do so they need autonomous agents to replace people.

simianwords 44 minutes ago | parent | prev [-]

If it bullshits so much, you wouldn't have a problem giving me an example of it bullshitting on ChatGPT (paid version)? Let's take any example of a text prompt fitting a few pages; it may be a question in science or math or any domain. Can you get it to bullshit?

bitwize an hour ago | parent | prev | next [-]

The fact that these "bullshit machines" have already proven themselves relatively competent at programming, with upcoming frontier models coming close to eliminating it as a human activity, probably says a lot about the actual value and importance of programming in the scheme of things.

slopinthebag 12 minutes ago | parent [-]

I think it says more about the amount of automation we left on the table in the last few decades. So much of the code LLMs can generate is stuff that we should have completely abstracted away by now.

LogicFailsMe 33 minutes ago | parent | prev [-]

Old and stupid hot take, IMO. I want back the time I put into perusing this. Even the scale of LLMs is puny next to the scale of lying humans, and the sheer impact one compulsively lying human can have, given we love to be led by confidently wrong narcissists. I mean, if that isn't obvious by now, I guess it never will be. The Vogon constructor fleet is way overdue in my book.

52-6F-62 14 minutes ago | parent [-]

> The Vogon constructor fleet is way overdue in my book

Don't you see it? That's exactly what "AI" in this context is.

It's the bypass.

Where does it end, eh? Build a quantum "AI" that will end up just needing more data, more input. The end goal must start looking like creating an entirely new universe, a complete clone of everything we have here, so it can run all the necessary computations and we can... ? (You are what a quantum AI looks like as it bumbles through the infinitude of calculable parameters on its way to the ultimate answer.)