▲ Den_VR 3 hours ago
Does the hammer lie to you that everything is a nail? Can a voltmeter _lie_ to you? EEs are expected to know when their measurements are wrong. And Professional Engineers are legally accountable for the consequences of such mistakes.
▲ possibleworlds 2 hours ago
If a hammer had a chat interface that said everything was a nail, then yes, the hammer lies to you about everything being a nail.
▲ fao_ an hour ago
If software engineering wants to progress past being an "art" and be considered an engineering discipline, then it should adopt methods and practices from engineering. First and foremost among those is root-cause analysis of faults, and redundancy to avoid them. For example, the FAA requires two pilots on airliners, and each system is built redundantly, so that if an engineer misses a bolt or rivet the plane won't crash. Intersections are designed so that there is a forcing function[0] on the behaviour of motorists to prevent faults. Or, to take your tool analogy, nail guns are designed so they must be pressed against something with a decent amount of pressure before they can fire. All of these systems are designed around the core idea that a human acting irrationally or improperly should not be able to cause a fault, and, furthermore, that a human can have a bad day and still avoid a mistake. They all steer someone around a possible failure. Hell, the reason we divide the road into lanes is itself a forcing function to avoid traffic collisions!

So, where is the forcing function in large language models? What part of a large language model prevents gross misuse by laymen? I can think of examples here and there, maybe: OpenAI had to add guard rails to stop people from poisoning themselves with botulism and boron, etc. But the problem is that the LLM is probabilistic, so there's really no guarantee that those guard rails will hold (a toy calculation below makes this concrete). I seem to remember a paper from a few months back, posted here, showing that AI guardrails cannot be proven to work consistently. In that context, LLMs cannot be considered "safe" or "reliable" enough for use. Eddie Burback has a very, very good video showing an absolute worst-case result of this[1], which was posted here last year. And off the top of my head, Angela Collier has a really good video demonstrating that there is an absolute plethora of people who have succumbed, in large ways or small, to the bullshit AI can spew[2].

I feel like if most developers were actually serious about being an engineering discipline, like we claim, then we wouldn't have all jumped on the LLM bandwagon until the models had been properly tested and reached a certain level of reliability. Instead, a sizable chunk of people say they've stopped coding by hand entirely, and aren't even reviewing the code! I.e. they've thrown out a forcing function that existed to prevent erroneous PRs being merged! And for some bizarre reason, after about two decades of people talking about type safety and how we need formal verification to reduce error, everyone seems to be throwing "reduction of error" out the window (a sketch of what a type-level forcing function looks like is below).

[0]: https://en.wikipedia.org/wiki/Behavior-shaping_constraint (if you're curious about the term)
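To put a rough number on "probabilistic, so no guarantee": if a guardrail independently holds with probability p on each query, the chance it never fails across n queries is p^n, which decays toward zero as n grows. A minimal sketch in Rust, with a made-up per-query figure purely for illustration:

    fn main() {
        // Assumed, made-up number: the guardrail holds on 99.9% of queries.
        let p_hold: f64 = 0.999;
        for n in [1_000_i32, 100_000, 1_000_000] {
            // Probability the guardrail holds on ALL n independent queries: p^n.
            let p_all = p_hold.powi(n);
            println!("n = {:>9}: P(guardrail never fails) = {:.6}", n, p_all);
        }
    }

Even at 99.9% per query, the guardrail has roughly a 63% chance of failing at least once within the first thousand queries, and failure is all but certain at scale.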
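And since the thread is about tools: a forcing function in software is not exotic; it's what "make invalid states unrepresentable" type design already gives us. Here is a minimal sketch in Rust (hypothetical names, not from any particular codebase), where the compiler simply refuses to let unvalidated input reach the dangerous operation, the same way a nail gun's contact tip refuses to fire in the air:

    // Raw input: may contain anything.
    struct RawInput(String);

    // Can only be constructed by passing validation.
    struct ValidatedEmail(String);

    impl ValidatedEmail {
        fn parse(raw: RawInput) -> Result<Self, String> {
            // Deliberately naive check, purely for illustration.
            if raw.0.contains('@') {
                Ok(ValidatedEmail(raw.0))
            } else {
                Err(format!("not an email address: {:?}", raw.0))
            }
        }
    }

    // This function cannot be called with an unvalidated string;
    // skipping the check is a compile error, not a code-review catch.
    fn send_welcome_mail(to: &ValidatedEmail) {
        println!("sending mail to {}", to.0);
    }

    fn main() {
        match ValidatedEmail::parse(RawInput("user@example.com".into())) {
            Ok(email) => send_welcome_mail(&email),
            Err(e) => eprintln!("rejected: {e}"),
        }
    }

The point isn't the email check; it's that the fault-avoiding step is structurally unskippable, which is exactly what's thrown away when generated code goes unreviewed.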
▲ perching_aix an hour ago
> Can a voltmeter _lie_ to you?

Hell fucking yes it can?