123malware321 3 hours ago

Well, considering you use components like DFAs to build compilers, yes, they are deterministic. You also have reproducible builds, etc.

Or does your binary come out different each time you compile the same file?

You can try it: compile the same file 10 times and diff the resulting binaries.

Now try to prompt a bunch of LLMs 10 times and diff the returned rubbish.
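For contrast, a toy illustration (not a real LLM; the vocabulary and probabilities are invented) of why sampled decoding diverges across runs while greedy decoding does not:

```python
import random

random.seed(0)  # seeded only so this demo is repeatable

# A pretend next-token distribution.
vocab = ["yes", "no", "maybe"]
probs = [0.5, 0.3, 0.2]

# Greedy decoding: always take the argmax -> deterministic.
greedy = [vocab[probs.index(max(probs))] for _ in range(10)]

# Sampled decoding (temperature > 0): draw from the distribution -> runs diverge.
sampled = [random.choices(vocab, weights=probs, k=1)[0] for _ in range(10)]

print(len(set(greedy)))   # always 1 distinct output
print(len(set(sampled)))  # typically more than 1
```

Real serving stacks add further nondeterminism on top of sampling (batching, floating-point reduction order), so even temperature 0 isn't always bit-stable in practice.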

sigbottle 3 hours ago | parent [-]

I think one of the best ways to understand the "nice property" we like about compilers isn't necessarily determinism, but "programming models".

There's this really good blog post about how autovectorization is not a programming model: https://pharr.org/matt/blog/2018/04/18/ispc-origins

The point is that you want to reliably express semantics in the top-level language, tool, API, etc., because that's the only way you can build a stable mental model on top of it. Having to worry about whether something actually did something under the hood is awful.

Now of course, that depends on the level of granularity YOU want. When writing plain code, even if it's expressively rich in logic and semantics (e.g. C++ template metaprogramming), sometimes I don't necessarily care about the specific linker and assembly details (but sometimes I do!).

The issue, I think, is that building a reliable mental model of an LLM is hard. Note that "reliable" is the key word: consistent, be it consistently good or consistently bad. The frustrating thing is that it can sometimes deliver great value and sometimes brick horribly, and we don't yet have a good mental model for when it will do which.

To constrain said possibility space, we tether to absolute memes (LLMs are fully stupid or LLMs are a superset of humans).

Idk where I'm going with this