pnt12 5 days ago | parent | next
That's a bit pedantic: lots of Python programs will work the same way on major OSes. If they don't, someone will likely try to debug the specific error and fix it. But LLMs frequently hallucinate in non-deterministic ways. Also, it seems like there's little chance for knowledge transfer. If I work with dictionaries in Python all the time, eventually I'm better prepared to go under the hood and understand their implementation. If I'm prompting an LLM, what's the bridge from prompt engineering to software engineering? Not such a direct connection, surely!
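For illustration, here is a minimal sketch of the kind of "under the hood" knowledge meant here: a toy open-addressing hash table in the spirit of CPython's dict. The ToyDict name and the plain linear-probing scheme are simplifications for the example, not CPython's actual implementation.

    # A toy open-addressing hash table, loosely mirroring how a dict maps a key
    # to a slot: hash(key) picks a starting slot, probing resolves collisions,
    # and equality decides whether a slot really matches.
    class ToyDict:
        def __init__(self, capacity=8):
            self._slots = [None] * capacity  # each slot is (hash, key, value) or None

        def _probe(self, key):
            """Yield candidate slot indices for `key`, starting from hash(key)."""
            h = hash(key)
            capacity = len(self._slots)
            i = h % capacity
            for _ in range(capacity):
                yield h, i
                i = (i + 1) % capacity  # linear probing; CPython uses a smarter perturbation

        def __setitem__(self, key, value):
            for h, i in self._probe(key):
                slot = self._slots[i]
                if slot is None or (slot[0] == h and slot[1] == key):
                    self._slots[i] = (h, key, value)
                    return
            raise RuntimeError("table full (a real dict would resize here)")

        def __getitem__(self, key):
            for h, i in self._probe(key):
                slot = self._slots[i]
                if slot is None:
                    raise KeyError(key)
                if slot[0] == h and slot[1] == key:
                    return slot[2]
            raise KeyError(key)

    d = ToyDict()
    d["spam"] = 1
    d["eggs"] = 2
    assert d["spam"] == 1 and d["eggs"] == 2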
theptip 5 days ago | parent

> That's a bit pedantic

It's a pedantic reply to a pedantic point :)

> If I'm prompting an LLM, what's the bridge from prompt engineering to software engineering?

A sibling also made this point, but I don't follow. You can still read the code. If you don't know the syntax, you can ask the LLM to explain it to you. LLMs are great for knowledge transfer, if you're actually trying to learn something - and they are strongest in domains where you have an oracle to test your understanding, like code.
|
|
ashton314 5 days ago | parent | prev | next
Undefined behavior does not violate correctness. Undefined behavior is just wiggle room so compiler engineers don't have to worry so much about certain edge cases.

"Correctness" must always be considered with respect to something else. If we take e.g. the C specification, then yes, there are plenty of compilers that are, in almost every way people will encounter, correct according to that spec, UB and all. Yes, there are bugs, but they are bugs and they can be fixed. The LLVM project has a very neat tool called Alive2 [1] that can verify optimization passes for correctness.

I think there's a very big gap between the kind of reliability we can expect from a deterministic, verified compiler and the approximating behavior of a probabilistic LLM.

[1]: https://github.com/AliveToolkit/alive2
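To give a concrete flavor of what "verify optimization passes for correctness" means, here is a hedged, toy analogue in Python: an exhaustive check that a proposed rewrite preserves semantics over a small domain. Alive2 itself works symbolically on LLVM IR with an SMT solver; the function names below are made up for the sketch.

    # Toy analogue of verifying an optimization pass: show (by exhaustive check
    # over 8-bit values, rather than an SMT solver) that a rewrite preserves the
    # original semantics, and that a buggy rewrite is caught.

    MASK = 0xFF  # model 8-bit unsigned wraparound arithmetic

    def original(x):
        return (x * 2) & MASK        # code before optimization

    def optimized(x):
        return (x << 1) & MASK       # proposed rewrite: multiply-by-2 as a shift

    def broken(x):
        return (x * 2 + 1) & MASK    # a buggy "optimization" that changes behavior

    def verify(f, g, domain):
        """Return a counterexample where f and g disagree, or None if they agree everywhere."""
        for x in domain:
            if f(x) != g(x):
                return x
        return None

    assert verify(original, optimized, range(256)) is None      # rewrite is correct on this domain
    assert verify(original, broken, range(256)) is not None     # verifier catches the bug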
|
ndsipa_pomu 5 days ago | parent | prev | next
However, the undefined behaviours are specified and known about (or at least some people know about them). With LLMs, there's no way to know ahead of time that a particular prompt will lead to hallucinations.
|
sieabahlpark 5 days ago | parent | prev
[dead]