yunwal 4 hours ago
> A reasoning error has an infinite, unpredictable blast radius.

Says who? It's quite easy to limit the blast radius of a reasoning error.
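(Editor's note: a minimal sketch of what limiting that blast radius can look like in practice, assuming a hypothetical agent that proposes actions as data before anything runs; every name here is illustrative, not from the thread.)

    from dataclasses import dataclass, field

    @dataclass
    class Action:
        name: str
        params: dict = field(default_factory=dict)

    # Deterministic, code-level constraints the model cannot talk its way around.
    ALLOWED_ACTIONS = {"quote_price", "schedule_test_drive"}
    MIN_SALE_PRICE = 20_000  # hard price floor enforced outside the model

    def is_safe(action: Action) -> bool:
        if action.name not in ALLOWED_ACTIONS:
            return False
        if action.name == "quote_price" and action.params.get("price", 0) < MIN_SALE_PRICE:
            return False
        return True

    def run_step(propose_action, execute):
        """propose_action is the LLM; execute only ever sees validated actions."""
        action = propose_action()
        if not is_safe(action):
            raise ValueError(f"rejected unsafe action: {action}")
        return execute(action)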
distalx 2 hours ago
In late 2023, a Chevy dealership deployed an AI chatbot that confidently agreed to sell a customer a 2024 Chevy Tahoe for $1. It committed to a catastrophic business decision simply because it didn't know the logic was wrong.

Sure, you can patch that specific case with guardrails, but how many unpredictable edge cases are you going to cover? It only takes a user with a bit of ingenuity to circumvent them. There are already several examples of AI agents getting stuck in infinite loops, burning through massive API bills while achieving absolutely nothing.

You can contain a system failure, but you cannot contain a logic failure if the system doesn't know the logic is wrong.
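(Editor's note: for the infinite-loop/API-bill failure mode specifically, the containment being discussed usually amounts to hard step and budget caps around the agent loop. A rough sketch with hypothetical names, including run_llm_step and its return shape:)

    MAX_STEPS = 25
    MAX_SPEND_USD = 5.00

    def run_agent(run_llm_step):
        """run_llm_step() -> (done: bool, cost_usd: float); hypothetical signature."""
        spent = 0.0
        for step in range(1, MAX_STEPS + 1):
            done, cost_usd = run_llm_step()
            spent += cost_usd
            if spent > MAX_SPEND_USD:
                raise RuntimeError(f"budget cap hit after {step} steps (${spent:.2f})")
            if done:
                return
        raise RuntimeError(f"step cap of {MAX_STEPS} hit without finishing")

(Caps like these contain the system failure, runaway spend, but as the comment argues they do nothing about the agent confidently doing the wrong thing within those limits.)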
| ||||||||
amazingamazing 3 hours ago
How so? Suppose you had:

    Math()
    Add()
    Subtract()
    Program()
    Math("calculate rate")

This is intentionally written vaguely. How do you limit the blast radius here? How do you ensure Program() runs and does the right thing when there is no guarantee that Math() or its components are correct? Normally you could use a typed programming language, unit tests, etc., but if the LLM is the ultimate abstraction, programs will be written like the above. At some point traditional software engineering principles will need to apply.
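(Editor's note: a rough sketch of what "traditional software engineering principles" could mean here: give the generated component a typed contract and gate it with ordinary unit tests before Program() is allowed to call it. llm_generated_rate is a hypothetical stand-in for whatever Math("calculate rate") produces.)

    from typing import Protocol

    class RateCalculator(Protocol):
        def __call__(self, principal: float, interest: float, years: int) -> float: ...

    def llm_generated_rate(principal: float, interest: float, years: int) -> float:
        # Imagine this body was produced by Math("calculate rate").
        return principal * interest * years

    def test_rate_calculator(calc: RateCalculator) -> None:
        # Deterministic checks that gate whether Program() may use the component.
        assert calc(0.0, 0.05, 10) == 0.0
        assert calc(1000.0, 0.05, 1) > 0.0
        assert calc(1000.0, 0.05, 2) >= calc(1000.0, 0.05, 1)

    test_rate_calculator(llm_generated_rate)  # run the gate before wiring into Program()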