As someone who's worked on this kind of stuff at GM, I don't really get the exuberance in this particular space (not just the comment I'm responding to). If you want 100% coverage, you just autogenerate the test cases. LLMs can't properly check MISRA requirements, so they're really just a layer on top of existing automated checkers. Same for complexity metrics: code that violates the threshold doesn't get merged (or it's a vendor dependency you won't touch anyway). If you care about the spirit of the rules, LLMs don't make that big a difference. If you don't care, there are already ways to do this. In either case they're an incremental change, not what I'd call a godsend.
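To make the complexity-gate point concrete, here is a toy sketch of the kind of automated check that already blocks merges today, using only Python's stdlib `ast` module. The node types counted and the threshold are illustrative assumptions; real pipelines use dedicated MISRA/complexity tools, not anything like this:

```python
import ast

# Rough cyclomatic-complexity estimate: 1 + one per decision point.
# Illustrative only - not a real MISRA or complexity checker.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def complexity(func_node: ast.FunctionDef) -> int:
    return 1 + sum(isinstance(n, DECISION_NODES)
                   for n in ast.walk(func_node))

def gate(source: str, threshold: int = 10) -> list[str]:
    """Return names of functions that would block the merge."""
    tree = ast.parse(source)
    return [n.name for n in ast.walk(tree)
            if isinstance(n, ast.FunctionDef)
            and complexity(n) > threshold]

src = """
def simple(x):
    return x + 1

def branchy(x):
    if x > 0:
        for i in range(x):
            if i % 2 and i % 3:
                x += i
    return x
"""
print(gate(src, threshold=3))  # only 'branchy' exceeds the toy threshold
```

The point is just that this kind of gate is deterministic and already CI-enforceable, which is why an LLM layered on top is incremental rather than transformative.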
Insanity (11 hours ago):
If you wouldn’t let AI run your nuclear power plant, you need to drink more of the AI kool-aid.
2ndorderthought (11 hours ago):
All these luddites out here. The cat's out of the bag. Get with the program. Give the AI nukes already.
xigoi (6 hours ago):
You’re absolutely right! I shouldn’t have launched the nukes. Would you like to learn more about nuclear safety?
|
anonym29 (11 hours ago):
> AI can't be held accountable

I hear this all the time. Why does it matter? Punishing a human for making a mistake does not prevent mistakes, nor does it undo the harm of the mistake. A human saying "my bad, I messed up" and an AI saying "my bad, I messed up" are equally worthless, in a functional sense.
forgetfreeman (11 hours ago):
> Punishing a human for making a mistake does not prevent mistakes

This statement suggests you don't believe in some combination of neuroplasticity as a concept or the arrow of time.
anonym29 (11 hours ago):
Tell that to the families of the people who died in the 737 MAX disasters: "Don't worry - everything's going to be okay! The engineers learned from their mistakes - accountability works, you have nothing to be sad about!"

Tell the family of the person killed by a semi truck driver who showed up to work drunk or high: "Don't worry - the driver went to jail! Accountability prevented anything bad from happening!"

Accountability alone fails to prevent deadly mistakes millions of times a day; millions of mistakes are avoided daily through process, redundancy, independent review, and formal methods. "Accountability prevents mistakes" is a comforting delusion. In reality, accountability is only marginally related to whether or not mistakes are made.
russelldjimmy (35 minutes ago):
What are you even on about, mate? Sure, accountability doesn't prevent all mistakes. Guess what: nothing prevents all mistakes. Accountability can help prevent some mistakes some of the time. It sounds like you're suggesting getting rid of the concept of accountability because it doesn't prevent ALL mistakes. Way to throw the baby out with the bath water.
forgetfreeman (6 hours ago):
> Accountability alone fails to prevent deadly mistakes millions of times a day

...in his desperation to finally win an argument online, our hero advanced, grimly ignoring the concept of Engineering.

> millions of mistakes are avoided daily through process, redundancy, independent review, and formal methods.

Ahh, spoke too soon: Engineering has finally joined the chat. So what mechanism do you propose led to the foundation of process, redundancy, independent review, and formal methods?
|
bigstrat2003 (8 hours ago):
Punishing humans does, in fact, prevent mistakes. Or rather, the threat of punishment causes people to be careful to avoid mistakes, and that prevents mistakes. Sure, this doesn't work 100% of the time, but it does work and has throughout human history. Meanwhile, there's no equivalent paradigm for LLMs.
anonym29 (33 minutes ago):
Even if you could threaten an LLM with punishment for making mistakes, you might get longer chains of thought, but that wouldn't prevent mistakes. The lack of accountability isn't the reason that LLMs make mistakes - adding accountability wouldn't change anything.
legostormtroopr (11 hours ago):
If a human messes up badly enough, eventually they will get fired, fined, or jailed. An AI will not. A human also knows they might get punished if they mess up badly enough, which might make them think twice before doing something bad. For an AI there is a reward, but there is no risk. So while both might lie, only the human will worry about being found out. That makes a difference.
beardbandit (11 hours ago):
There is a human in the loop who either prompted the agent or approved the code. So it doesn't matter whether the AI is accountable or not.
the_af (11 hours ago):
I hear you, but isn't the human in the loop precisely the one who should be putting their foot down and saying "no, the AI shouldn't be writing the tests to begin with", which would bring us full circle?
anonym29 (11 hours ago):
You say that like all humans are alike: that they all care about getting fired, fined, or jailed; that they're even considering punishment when making their decisions; that risk factors into decision making. What you are describing is a hypothetical "rational person". In real life, even the most rational people you know do completely irrational things routinely.

The Therac-25 engineers were accountable. The 737 MAX engineers were accountable. Accountability is doing much less work in the safety story than you seem to think. The real work is done by process, redundancy, independent review, and formal methods. None of these inherently requires someone to be penalized for making mistakes, and penalizing people for mistakes is a demonstrably, empirically unreliable mechanism for preventing them.
|
|
|