distalx 4 hours ago

A transmission error has a strictly contained, predictable blast radius. If a packet drops, the system knows exactly how to handle it: it throws a timeout, drops a connection, or asks for a retry. The worst-case scenario is known.
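That bounded worst case is exactly what the standard timeout-and-retry pattern encodes; a minimal sketch (the `with_retry` helper and its set of "known" exceptions are illustrative, not from any particular library):

```python
def with_retry(op, retries: int = 3):
    """Run op(); on a known transient failure, retry.

    The failure modes are enumerable (timeout, dropped connection),
    so the worst case is bounded and predictable.
    """
    last = None
    for _ in range(retries):
        try:
            return op()
        except (TimeoutError, ConnectionError) as e:
            last = e  # known failure mode: just try again
    raise last  # known worst case: a bounded, explicit failure

# Example: an operation that drops twice, then succeeds.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("packet dropped")
    return "ok"

result = with_retry(flaky)
```

The point of the sketch is that every path out of `with_retry` is one the caller anticipated; a reasoning error has no analogous enumerable exception list.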

A reasoning error has an effectively unbounded, unpredictable blast radius. When an LLM hallucinates, it doesn't fail safely; instead it writes perfectly compiling code that does the wrong thing. That "wrong thing" might just render a button incorrectly, or it might silently delete your production database, or open a security backdoor.

You can build reliable abstractions over failures that are predictable and contained. You cannot abstract away unpredictable destruction.

harrall an hour ago | parent | next [-]

A transmission error does not have a strictly contained blast radius.

A bad packet could tell a flying probe to fire all thrusters on and deplete its fuel in 15 minutes.

What makes a transmission error controlled is all the protection mechanisms on top of it. An LLM cannot delete a production database unless you give it access to do it.
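One concrete form of that protection, sketched under the assumption of a SQL-speaking agent: an allowlist gate in front of the database. (The prefix check here is deliberately naive; a real deployment would use database-level read-only roles rather than string matching.)

```python
# Read-only allowlist: the agent simply cannot DROP or DELETE anything,
# regardless of what it "reasons" it should do.
ALLOWED_PREFIXES = ("SELECT",)

def guarded_execute(statement: str) -> str:
    """Refuse any statement outside the allowlist before it reaches the DB."""
    if not statement.lstrip().upper().startswith(ALLOWED_PREFIXES):
        raise PermissionError(f"blocked: {statement!r}")
    return f"executed: {statement}"  # stand-in for the real DB call
```

The blast radius is set by the gate, not by the quality of the model's reasoning.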

My hot take is that many people are naturally more comfortable with deterministic systems that have clearly analyzable outcomes. Software engineering has historically primarily been oriented around deterministic systems and it has attracted that type of thinker.

But many of us, myself included, prefer chaotic systems where you can’t fully nail down every cause and effect. The challenge of building a prediction model on top of chaos is exhilarating. I really don’t find as many people like me in SWE as in, say, the graphic design department.

To me, that’s the underlying threat here — LLMs are rewriting a field that has previously self selected a certain type of person and this, quite understandably, rubs them the wrong way.

c-linkage an hour ago | parent | next [-]

I don't need to be able to write proofs about my maths using logic and determinism. If the answer comes out in a way that I like then it has to be correct!

dpark 28 minutes ago | parent [-]

This is vapid condescension.

The comment you replied to made no statements about math or proofs. They made a statement about working effectively in non-deterministic systems. Your statement seems to imply that this is dumb, as if working in a world of full determinism is an option.

panarky 13 minutes ago | parent | next [-]

Thank you for "vapid condescension".

I've wanted a term for this for decades!

aeon_ai an hour ago | parent | prev [-]

Insightful.

Feels like this maps to the J/P axis of Myers-Briggs.

yunwal 4 hours ago | parent | prev | next [-]

> A reasoning error has an infinite, unpredictable blast radius.

Says who? It’s quite easy to limit the blast radius of a reasoning error.

distalx 2 hours ago | parent | next [-]

In 2024, a Chevy dealership deployed an AI chatbot that confidently agreed to sell a customer a 2024 Chevy Tahoe for $1. It committed to a catastrophic business decision simply because it didn't know the logic was wrong.

Sure, you can patch that specific case with guardrails, but how many unpredictable edge cases are you going to cover? It only takes a user with a bit of ingenuity to circumvent them. There are already several examples of AI agents getting stuck in infinite loops, burning through massive API bills while achieving absolutely nothing.

You can contain a system failure, but you cannot contain a logic failure if the system doesn't know the logic is wrong.

pear01 33 minutes ago | parent [-]

This would be more convincing if a single car had been exchanged for $1.

It didn't happen. Seems the bug was "contained".

Sort of undermines your point re "catastrophic business failure" don't you think?

amazingamazing 3 hours ago | parent | prev [-]

How so?

Suppose you had:

  Math()
    Add()
    Subtract()

  Program()
    Math(“calculate rate”)

This is intentionally written vaguely. How do you ensure that Program() runs and does the right thing when there is no guarantee that Math() or its components are correct?

Normally you could use a typed programming language, unit tests, etc., but if the LLM is the ultimate abstraction, programs will be written like the lines above. At some point traditional software engineering principles will need to apply.
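One way to read "traditional software engineering principles will need to apply" is to treat the generated Math() as untrusted and gate it behind property checks before Program() may call it. A hedged sketch (the checker, the test cases, and the round-trip property are all illustrative choices, not a standard API):

```python
def check_math_component(add, subtract) -> bool:
    """Gate an untrusted (e.g. LLM-generated) Math() implementation
    behind property checks before Program() is allowed to call it."""
    cases = [(0, 0), (1, 2), (-5, 3), (10**6, 7)]
    for a, b in cases:
        if add(a, b) != a + b:          # correctness against a known oracle
            return False
        if subtract(add(a, b), b) != a:  # round-trip property: (a + b) - b == a
            return False
    return True
```

The guarantee is only as strong as the properties you check, but the failure is caught before it propagates into Program().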

td2 4 hours ago | parent | prev [-]

I mean, if you're talking about packets, you're already one abstraction above the actual data transmission, which is noisy. So bits can randomly flip, noise can be interpreted as bits, and bits can get lost. A much larger blast radius.
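That noise is exactly why link layers carry checksums; a toy sketch of detecting a flipped bit with a single-byte XOR fold (real links use CRCs, this is purely illustrative):

```python
def checksum(data: bytes) -> int:
    """Fold all bytes together with XOR; any single flipped bit
    changes the result, so the receiver can detect corruption."""
    c = 0
    for b in data:
        c ^= b
    return c

frame = b"hello"
ok = checksum(frame)

# Flip one bit of the first byte "in transit".
corrupted = bytes([frame[0] ^ 0b00000100]) + frame[1:]
```

The receiver compares checksums, detects the mismatch, and asks for a retransmit, which is how the noisy physical layer gets turned into the "contained" packet abstraction the thread started from.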
