▲ | maltalex 5 days ago | |
> Do you therefore argue programming languages aren't abstractions?

Yes and no. They're abstractions in the sense of hiding the implementation details of the underlying assembly, just as assembly hides the implementation details of the CPU, memory, and other hardware components. But with programming languages you rarely need to know the details of the underlying layers. The abstraction a programming language provides is simple, deterministic, and well documented, so in 99.999% of cases you can reason from the guarantees of the language, regardless of how those guarantees are implemented.

With LLMs, the relation between input and output is much looser. The output is non-deterministic, and tiny changes to the input can produce enormous changes in the output, seemingly without reason. It's much shakier ground to build on.
▲ | impure-aqua 4 days ago | parent | next [-] | |
I do not think determinism of behaviour is the only thing that matters when evaluating an abstraction - how much of the underlying behaviour the output exposes to you also matters.

Assignment in Python is certainly deterministic and well documented, but whether a given operation leaves you with a second reference to an object (one extra pointer, 8 bytes on a 64-bit system) or a full copy (2x memory consumption) depends on context: b = a only binds another name to the same object, while a slice, a copy(), or an operation on an immutable value allocates a new one. Do you think this through every time? The consequences can be significant when operating on a large file in memory; I have seen SWEs make errors in FastAPI multipart upload pipelines that increased memory consumption 2x or 3x in exactly this way.

Meanwhile, I can ask an LLM to generate Rust code, and the memory impact of what it generates is obvious on its face. If it is a reassignment (b = a) it is a move, and any later attempt to use a refuses to compile and is flagged immediately in an IDE linter. If the LLM writes b = &a, it is clearly a borrow, with the size of a pointer. If it writes b = a.clone(), I can see at a glance that we are duplicating the data structure in memory (2x consumption).

The LLM's output certainly is non-deterministic; it differs depending on how I phrase the question (unlike a compiler's). But in this particular example, the chosen output language (Rust) exposes me to the underlying behaviour in a way that is both lower-level than Python (what I might reach for to write quick code myself) and much, much more interpretable to a human than, say, a binary produced by GCC. I think that has significant value.
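To make the Python side of this concrete, here is a minimal sketch of the reference-vs-copy distinction described above (the list size is just illustrative):

```python
data = list(range(100_000))  # stand-in for a large object in memory

# Plain assignment never copies in Python: it binds a second name
# to the same object, costing only one pointer.
alias = data
assert alias is data  # one allocation, two names

# A slice, by contrast, allocates a brand-new list: ~2x container memory.
dup = data[:]
assert dup is not data and dup == data

# Mutations are visible through the alias, but not through the copy.
data.append(-1)
assert alias[-1] == -1
assert dup[-1] == 99_999
```

The point is that all three lines look equally innocuous at the call site; nothing in the syntax of the slice warns you about the allocation the way Rust's clone() does.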
▲ | lock1 4 days ago | parent | prev [-] |
Unrelated to the GP post, but aren't LLMs more like a deterministic chaotic system than a "non-deterministic" one? "Tiny changes to the input can change the output quite a lot" is exactly the "extreme sensitivity to initial conditions" property of chaotic systems.

I guess that is still problematic if you want reproducibility from your abstraction, the way compilers are (relatively) reproducible. With LLMs, there are too many uncontrollable variables to precisely reproduce a result from the same input.
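The deterministic-but-chaotic distinction is easy to see with a textbook example; this sketch uses the logistic map (my illustration, not anything specific to LLMs):

```python
# The logistic map x -> r*x*(1-x) at r = 4.0 is fully deterministic,
# yet chaotic: a perturbation of 1e-12 in the input is amplified
# roughly 2x per iteration until the trajectories are unrelated.
def logistic(x, r=4.0, steps=50):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic(0.3)
b = logistic(0.3 + 1e-12)

# Re-running reproduces each result exactly (determinism)...
assert logistic(0.3) == a
# ...but the tiny input difference has long since blown up
# (sensitivity to initial conditions).
assert a != b
```

Same rule, same parameters, perfectly repeatable per input - and still useless for predicting how a slightly different input will behave, which is the property being pointed at here.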
Unrelated to the gp post, but isn't LLMs more like a deterministic chaotic system than a "non-deterministic" one? "Tiny changes to the input can change the output quite a lot" is similar to "extreme sensitivity to initial condition" property of a chaotic system. I guess that could be a problematic behavior if you want reproducibility ala (relatively) reproducible abstraction like compilers. With LLMs, there are too many uncontrollable variables to precisely reproduce a result from the same input. |