ben_w 2 days ago
I can and I have. Neither is "magic". And plenty of software involves real lives, real bodies, and no second chances, e.g. Therac-25. Unfortunately for all of us, it does look rather like people are already using clear-as-mud AI models for life-critical processes.
pyman 2 days ago
You can't really compare the two. Yes, machines can (and do) fail, whether it's Therac-25, Tesla Autopilot, or Boeing's MCAS. Any software controlling a physical system carries risk. But unlike surgery, code is testable: you can run it in a sandbox, simulate edge cases, fix bugs, and repeat the process for days, months, or even years until it's stable enough for production (a sketch of what that looks like is below). Surgeons don't get that luxury. They can't rehearse a procedure on the same body before performing it. There's one shot, and the consequences are irreversible.

That said, I get your point: LLMs can be unpredictable because of the huge amount of data they're trained on and the quality of that data. You never really know what patterns they'll pick up or how they'll behave in edge cases, especially since their outputs aren't deterministic (second sketch below).
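To make the testability point concrete, here's a minimal sketch in Python. `compute_dose` is a hypothetical stand-in for safety-critical logic (think of Therac-25's dose calculation), not any real system's code; the point is only that deterministic code can be hammered with edge cases thousands of times in a sandbox before it ever touches a patient:

    # Minimal sketch: repeatable edge-case testing of deterministic,
    # safety-critical logic. `compute_dose` is a hypothetical example.

    def compute_dose(prescribed_mgy: float, beam_efficiency: float) -> float:
        """Return the machine setting needed to deliver `prescribed_mgy`."""
        # `not x >= 0` also rejects NaN, since NaN compares False to everything.
        if not prescribed_mgy >= 0:
            raise ValueError("dose must be a non-negative number")
        if not 0 < beam_efficiency <= 1:
            raise ValueError("efficiency must be in (0, 1]")
        return prescribed_mgy / beam_efficiency

    def test_edge_cases() -> None:
        assert compute_dose(0.0, 1.0) == 0.0
        assert compute_dose(2.0, 0.5) == 4.0
        for bad in (-1.0, float("nan")):
            try:
                compute_dose(bad, 1.0)
            except ValueError:
                pass  # expected: invalid input is rejected, not delivered
            else:
                raise AssertionError(f"accepted invalid dose {bad!r}")

    if __name__ == "__main__":
        # Unlike surgery, the same "procedure" can be re-run at will.
        for _ in range(10_000):
            test_edge_cases()
        print("all edge cases pass, every time")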
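And on the non-determinism point, a sketch of why the same prompt needn't give the same answer. It simulates temperature-based sampling over a toy next-token distribution; the logits are invented for illustration, but real models sample from a learned distribution the same way:

    import math
    import random

    # Toy next-token distribution for one prompt position. The logits are
    # made up for illustration; a real LLM computes them from the prompt.
    logits = {"benign": 2.0, "rare-but-wrong": 0.5, "dangerous": -1.0}

    def sample_token(temperature: float) -> str:
        # Softmax with temperature: higher T flattens the distribution,
        # making low-probability (possibly bad) tokens more likely.
        scaled = [v / temperature for v in logits.values()]
        z = sum(math.exp(v) for v in scaled)
        probs = [math.exp(v) / z for v in scaled]
        return random.choices(list(logits.keys()), weights=probs)[0]

    if __name__ == "__main__":
        for t in (0.2, 1.0):
            counts = {tok: 0 for tok in logits}
            for _ in range(10_000):
                counts[sample_token(t)] += 1
            print(f"temperature={t}: {counts}")

Even at low temperature the tail tokens still get sampled occasionally, which is exactly the "edge case you never tested" problem.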