| ▲ | e3bc54b2 5 days ago |
| As the other comment said, LLMs are not an abstraction. An abstraction is a deterministic, pure function that when given A always returns B. This allows the consumer to rely on the abstraction. This reliance frees the consumer from having to implement the A->B transformation, thus allowing it to move up the ladder. LLMs, by their very nature are probabilistic. Probabilistic is NOT deterministic. Which means the consumer is never really sure that, given A, the returned value is B. Which means the consumer now has to check whether the returned value is actually B, and depending on how complex the A->B transformation is, the checking function can be as complex as implementing the abstraction in the first place. |
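A minimal sketch of the contrast being drawn here (the call_llm helper is hypothetical, purely for illustration):

    def sort_numbers(xs: list[int]) -> list[int]:
        # Conventional abstraction: pure and deterministic, so the caller can
        # rely on the contract (given A, always B) without re-checking it.
        return sorted(xs)

    def sort_numbers_via_llm(xs: list[int], call_llm) -> list[int]:
        # Probabilistic "abstraction": the caller cannot assume the contract
        # held, so a verification step has to be bolted onto every call.
        out = call_llm(f"Sort this list of integers and return a JSON array: {xs}")
        if sorted(out) != out or sorted(out) != sorted(xs):
            raise ValueError("LLM output failed verification; retry or fall back")
        return out

For sorting the check happens to be cheap, but for fuzzier transformations the verifier ends up re-deriving much of the work the abstraction was supposed to hide, which is the point being made.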
|
| ▲ | stuartjohnson12 5 days ago | parent | next [-] |
| It's delegation then. We can use different words if you like (and I'm not convinced that delegation isn't colloquially a form of abstraction) but you can't control the world by controlling the categories. |
| |
| ▲ | hoppp 5 days ago | parent | next [-] | | Delegation of intelligence?
So one party gets more stupid for the other to be smart? | | |
| ▲ | arduanika 5 days ago | parent | next [-] | | Yes, just like moving into management. So we'll get a generation of programmers who get to turn prematurely into the Pointy-Haired Boss. | | |
| ▲ | ThrowawayR2 4 days ago | parent [-] | | I like that. To paraphrase the Steinbeck (mis)quote: "Hacker culture never took root in the AI gold rush because the LLM 'coders' saw themselves not as hackers and explorers, but as temporarily understaffed middle-managers." | | |
| |
| ▲ | benterix 5 days ago | parent | prev | next [-] | | Except that (1) the other party doesn't become smart, (2) the one who delegates doesn't become stupid, they just lose the opportunity to become smarter compared to a human who'd actually do the work. | | |
| ▲ | soraminazuki 5 days ago | parent | next [-] | | You're in denial. (1) The other party keeps learning, (2) the article cites evidence showing that heavy AI use causes cognitive decline. | | |
| ▲ | casey2 4 days ago | parent [-] | | The evidence it cites is that paper from 3 months ago claiming your brain activates less while prompting than actually writing an essay.
No duh, the point is that you flex your mental muscles on the tasks AI can't do, like effective organization. I don't need to make a pencil to write. The most harmful myth in all of education is the idea that you need to master some basic building blocks in order to move on to a higher level. That really is just a noticeable exception. At best you can claim that it's difficult for other people to realize that your new way solves the problem, or that people should really learn X because it's generally useful. I don't see the need for this kind of compulsory education, and it's doing much more harm than good. Bodybuilding doesn't even appear as a codified sport until well after the industrial revolution; it's not until we are free of subsistence labor that human intelligence will peak. Who would be happy with a crummy essay if humans could learn telekinesis? | | |
| ▲ | soraminazuki 4 days ago | parent | next [-] | | That's a lot of words filled with straw man analogies. Essentially, you're claiming that you can strengthen your cognitive skills by having LLMs do all the thinking for you, which is absurd. And the fact that the study is 3 months old doesn't invalidate the work. > Who would be happy with a crummy essay if humans could learn telekinesis? I'm glad that's not the professional consensus on education, at least for now. And "telekinesis," really? | |
| ▲ | bigbadfeline 4 days ago | parent | prev [-] | | > No duh, the point is that you flex your mental muscles on the tasks AI can't do, like effective organization. AI can do better organization than you, it's only inertia and legalities that prevent it from happening. See, without good education, you aren't even able to find a place for yourself. > The most harmful myth in all of education is the idea that you need to master some basic building blocks in order to move on to a higher level. That "myth" is supported by abundant empirical evidence, people have tried education without it and it didn't work. My lying eyes kind of confirm it too, I had one hell of a time trying to use an LLM without getting dumber... it comes so naturally to them, skipping steps is seductive but blinding. > I don't see the need for this kind of compulsory education, and it's doing much more harm than good. Again, long-standing empirical evidence tells us the opposite. I support optional education but we can't even have a double-blind study for it - I'm pretty sure those who don't go to school would be home-schooled, too few are dumb enough to let their uneducated children choose their manner and level of education. |
|
| |
| ▲ | lazystar 5 days ago | parent | prev [-] | | well, then it comes down to which skillset is more marketable - the delegator, or the coding language expert. customers don't care about the syntactic sugar/advanced reflection in the codebase of the product that they're buying. if the end product of the delegator and the expert is the same, employers will go with the faster one every time. | | |
| ▲ | ModernMech 5 days ago | parent [-] | | That's how you end up in the Idiocracy world, where things still happen, but they are driven by ads rather than actual need, no one really understands how anything works, somehow society plods along due to momentum, but it's all shit from top to bottom and nothing is getting better. "Brawndo: it's got what plants crave!" is the end result of being led around by marketers. | | |
| ▲ | lazystar 3 days ago | parent [-] | | isn't this what assembly devs would have said about c devs, and c devs about python devs? |
|
|
| |
| ▲ | charcircuit 5 days ago | parent | prev [-] | | It's not 0 sum. All parties can become more intelligent over time. | | |
| ▲ | matt_kantor 5 days ago | parent [-] | | They could, but you're commenting on a study whose results indicate that this isn't what happens. | | |
| ▲ | charcircuit 5 days ago | parent [-] | | And you are in a comment chain discussing how there is a subset of people for whom the study's findings don't hold. | | |
| ▲ | dvfjsdhgfv 5 days ago | parent | next [-] | | Rather a subset of people who would like to believe the results don't apply to them. Frankly, I'm sure there will be many more studies in this direction. Now, this one comes from a university, an independent organization. But, given the amount of money involved, some of the future studies will come from the camp vitally interested in people believing that by outsourcing their work to coding agents they are becoming smarter instead of losing the skills they've already acquired. Looking forward to reading the first of these. | | |
| ▲ | charcircuit 5 days ago | parent [-] | | Outsourcing work doesn't make you smarter. It makes you more productive. It gives you extra time that you can dedicate towards becoming smarter at something else. | | |
| ▲ | soraminazuki 5 days ago | parent [-] | | Become smarter at what exactly? People reliant on AI aren't going to use AI on just one thing, they're going to use it for everything. Besides, as others have pointed out to you, the study shows evidence that AI reliance causes cognitive decline. It affects your general intelligence, not limited to a single area of expertise. > Students who repeatedly relied on ChatGPT showed weakened neural connectivity, impaired memory recall, and diminished sense of ownership over their own writing So we're going to have more bosses, perhaps not in title, who think they're becoming more knowledgeable about a broad range of topics, but are actually in cognitive decline and out of touch with reality on the ground. Great. |
|
| |
| ▲ | beeflet 5 days ago | parent | prev [-] | | There is? You haven't proven anything | | |
| ▲ | rstuart4133 5 days ago | parent [-] | | Haven't you been paying attention? He probably heard it from an AI. That's the only proof needed. Why would he put in any more effort? /s |
|
|
|
|
| |
| ▲ | robenkleene 5 days ago | parent | prev | next [-] | | One argument for abstraction being different from delegation is that when a programmer uses an abstraction, I'd expect the programmer to be able to work without the abstraction if necessary, and also to be able to build their own abstractions. I wouldn't have that expectation with delegation. | |
| ▲ | vidarh 5 days ago | parent | next [-] | | The vast majority of programmers don't know assembly, so can in fact not work without all the abstractions they rely on. Do you therefore argue programming languages aren't abstractions? | | |
| ▲ | benterix 5 days ago | parent | next [-] | | > The vast majority of programmers don't know assembly, so can in fact not work without all the abstractions they rely on. The problem with this analogy is obvious when you imagine an assembler generating machine code that doesn't work half of the time and a human trying to correct that. | | |
| ▲ | vidarh 5 days ago | parent | next [-] | | An abstraction doesn't cease to be one because it's imperfect, or even wrong. | |
| ▲ | nerdsniper 5 days ago | parent | prev [-] | | I mean, it’s more like 0.1% of the time but I’ve definitely had to do this in embedded programming on ARM Cortex M0-M3. Sometimes things just didn't compile the way I expected. My favorite was when I smashed the stack and I overflowed ADC readings into the PC and SP, leading to the MCU jumping completely randomly all over the codebase. Other times it was more subtle things, like optimizing away some operation that I needed to not be optimized away. |
| |
| ▲ | maltalex 5 days ago | parent | prev | next [-] | | > Do you therefore argue programming languages aren't abstractions? Yes, and no.
They’re abstractions in the sense of hiding the implementation details of the underlying assembly. Similarly, assembly hides the implementation details of the cpu, memory, and other hw components. However, with programming languages you don’t need to know the details of the underlying layers except in very rare cases. The abstraction that programming languages provide is simple, deterministic, and well documented. So, in 99.999% of cases, you can reason based on the guarantees of the language, regardless of how those guarantees are provided.
With LLMs, the relation between input and output is much looser. The output is non-deterministic, and tiny changes to the input can create enormous changes in the output seemingly without reason. It’s much shakier ground to build on. | | |
| ▲ | impure-aqua 4 days ago | parent | next [-] | | I do not think determinism of behaviour is the only thing that matters for evaluating the value of an abstraction - exposure to the output is also a consideration. Assignment in Python is certainly deterministic and well-documented, but whether a given line leaves you with just another reference to the same object (+64 bits) or a full copy of the data (2x memory consumption) depends on what the right-hand side does, and a value that was shared by reference can quietly become a separate copy after a later operation rebuilds or slices it. Do you think this through every time you write such a line? The consequences of this can be significant (e.g. operating on a large file in memory); I have seen SWEs make errors in FastAPI multipart upload pipelines that have increased memory consumption by 2x, 3x, in this manner. Meanwhile I can ask an LLM to generate me Rust code, and it is clearly obvious what impact the generated code has on memory consumption. If it is a reassignment (b = a) it will be a move, and future attempts to access the value of a would refuse to compile and be highlighted immediately in an IDE linter. If the LLM does b = &a, it is clearly borrowing, which has the size of a pointer (+64 bits). If the LLM did b = a.clone(), I would clearly be able to see that we are duplicating this data structure in memory (2x consumption). The LLM code certainly is non-deterministic; it will be different depending on the questions I asked (unlike a compiler). However, in this particular example, the chosen output format/language (Rust) directly exposes me to the underlying behaviour in a way that is both lower-level than Python (the language I might choose to write quick code in myself) yet also much, much more interpretable to a human than, say, a binary that GCC produces. I think this has significant value. | |
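For what it's worth, a minimal Python sketch of the reference-versus-copy distinction described above (the 100 MB buffer is a made-up stand-in for a large multipart upload):

    import sys

    data = bytearray(100 * 1024 * 1024)  # ~100 MB payload, e.g. an uploaded file

    alias = data            # plain assignment: another name bound to the same
                            # object, costs one reference, not a second 100 MB
    snapshot = bytes(data)  # explicit copy: roughly doubles memory for the payload

    print(alias is data)     # True  -> same object, nothing was copied
    print(snapshot is data)  # False -> a second ~100 MB allocation
    print(sys.getsizeof(data), sys.getsizeof(snapshot))

Rust forces the same distinction into the syntax (move vs. &borrow vs. .clone()), which is the interpretability point being made above.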
| ▲ | 5 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | lock1 4 days ago | parent | prev [-] | | Unrelated to the gp post, but aren't LLMs more like a deterministic chaotic system than a "non-deterministic" one? "Tiny changes to the input can change the output quite a lot" is similar to the "extreme sensitivity to initial conditions" property of a chaotic system. I guess that could be a problematic behavior if you want reproducibility a la (relatively) reproducible abstractions like compilers. With LLMs, there are too many uncontrollable variables to precisely reproduce a result from the same input. |
| |
| ▲ | WD-42 5 days ago | parent | prev | next [-] | | The vast majority of programmers could learn assembly, most of it in a day. They don’t need to, because the abstractions that generate it are deterministic. | |
| ▲ | strix_varius 5 days ago | parent | prev | next [-] | | This is a tautology. At some level, nobody can work at a lower level of abstraction. A programmer who knows assembly probably could not physically build the machine it runs on. A programmer who could do that probably could not smelt the metals required to make that machine. etc. However, the specific discussion here is about delegating the work of writing to an LLM, vs abstracting the work of writing via deterministic systems like libraries, frameworks, modules, etc. It is specifically not about abstracting the work of compiling, constructing, or smelting. | | |
| ▲ | vidarh 5 days ago | parent [-] | | This is meaningless. An LLM is also deterministic if configured to be so, and any library, framework, module can be non-deterministic if built to be. It's not a distinguishing factor. | | |
| ▲ | strix_varius 5 days ago | parent [-] | | That isn't how LLMs work. They are probabilistic. Running them even on different hardware yields different results. And the deltas compound the longer your context and the more tokens you're using (like when writing code). But more importantly, always selecting the most likely token traps the LLM in loops, reduces overall quality, and is infeasible at scale. There are reasons that literally no LLM that you use runs deterministically. | | |
| ▲ | vidarh a day ago | parent [-] | | With temperature set to zero, they are deterministic if inference is implemented with deterministic calculations. Only when you turn the temperature up do they become probabilistic for a given input. If you take shortcuts in implementing the inference, then sure, rounding errors may accumulate and prevent that, but that is not an issue with the models but with your choice of how to implement the inference. |
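A minimal sketch of the temperature-zero case with a small local model (this assumes the Hugging Face transformers and torch packages and the public "gpt2" checkpoint; illustrative only):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
    inputs = tok("An abstraction is", return_tensors="pt")

    with torch.no_grad():
        # do_sample=False is greedy decoding, i.e. the temperature-0 case:
        # the most probable token is taken at every step, so repeated runs on
        # the same software/hardware stack produce identical token sequences.
        a = model.generate(**inputs, do_sample=False, max_new_tokens=20)
        b = model.generate(**inputs, do_sample=False, max_new_tokens=20)

    assert torch.equal(a, b)
    print(tok.decode(a[0]))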
|
|
| |
| ▲ | robenkleene 5 days ago | parent | prev [-] | | Fair point, I elaborated on what I mean here https://news.ycombinator.com/item?id=45116976 To address your specific point in the same way: When we're talking about programmers using abstractions, we're usually not talking about the programming language they're using, we're talking about the UI framework, networking libraries, etc... they're using. Those are the APIs they're calling with their code, and those are all abstractions implemented at (roughly) the same level of abstraction as the programmer's day-to-day work. I'd expect a programmer to be able to re-implement those if necessary. |
| |
| ▲ | Jensson 5 days ago | parent | prev [-] | | > I wouldn't have that expectation with delegation. Managers tend to hire sub-managers to manage their people. You can see this with LLMs as well, people see "Oh this prompting is a lot of work, let's make the LLM prompt the LLM". | |
| ▲ | robenkleene 5 days ago | parent [-] | | Note, I'm not saying there are never situations where you'd delegate something that you can do yourself (the whole concept of apprenticeship is based on doing just that). Just that it's not an expectation, e.g., you don't expect a CEO to be able to do the CTO's job. I guess I'm not 100% sure I agree with my original point though: should a programmer working on JavaScript for a website's frontend be able to implement a browser engine? Probably not, but the original point I was trying to make is I would expect a programmer working on a browser engine to be able to re-implement any abstractions that they're using in their day-to-day work if necessary. | |
| ▲ | AnIrishDuck 5 days ago | parent | next [-] | | The advice I've seen with delegation is the exact opposite. Specifically: you can't delegate what you can't do. Partially because if all else fails, you'll need to step in and do the thing. Partially because if you can't do it, you can't evaluate whether it's being done properly. That's not to say you need to be _as good_ at the task as the delegee, but you need to be competent. For example, this HBR article [1]. Pervasive in all advice about delegation is the assumption that you can do the task being delegated, but that you shouldn't. > Just that it's not an expectation, e.g., you don't expect a CEO to be able to do the CTO's job. I think the CEO role is actually the outlier here. I can only speak to engineering, but my understanding has always been that VPs need to be able to manage individual teams, and engineering managers need to be somewhat competent if there's some dev work that needs to be done. This only happens as necessary, and it obviously should be rare. But you get in trouble real quickly if you try to delegate things you cannot accomplish yourself. 1. https://hbr.org/2025/09/why-arent-i-better-at-delegating | |
| ▲ | tguedes 5 days ago | parent | prev | next [-] | | I think what you're trying to reference is APIs or libraries, most of which I wouldn't consider abstractions. I would hope most senior front-end developers are capable of developing a date library for their use case, but in almost all cases it's better to use the built in Date class, moment, etc. But that's not an abstraction. | |
| ▲ | meheleventyone 5 days ago | parent | prev [-] | | There's an interesting comparison with delegation: for example, people who stop programming because they've delegated it do lose their skills over time. |
|
|
| |
| ▲ | hosh 5 days ago | parent | prev [-] | | There is a form of delegation that develops the people involved, so that people can continue to contribute and grow. Each individual can contribute what is unique to them, and grow more capable as they do so. Both the people and the community of those people remain alive, lively, and continue to grow. Some people call this paradigm “regenerative”; only living systems regenerate. There is another form of delegation where the work needed to be done is imposed onto another, in order to exploit and extract value. We are trying to do this with LLMs now, but we also did this during the Industrial Revolution, and before that, humanity enslaved each other to get the labor to extract value out of the land. This value extraction leads to degeneration, something that happens when living systems die. While the Industrial Revolution afforded humanity a middle class, and appeared to distribute the wealth that came about — resulting in better standards of living — it came along with numerous ills that, as a society, we still have not really figured out. I think that, collectively, we figure that the LLMs can do the things no one wants to do, and so _everyone_ can enjoy a better standard of living. I think doing it this way, though, leads to a life without purpose or meaning. I am not at all convinced that LLMs are going to give us back that time … not unless we figure out how to develop AIs that help grow humans instead of replacing them. The following article is an example of what I mean by designing an AI that helps develop people instead of replacing them: https://hazelweakly.me/blog/stop-building-ai-tools-backwards... | |
| ▲ | salawat 5 days ago | parent [-] | | LLMs and AI in general are just a hack to reimplement slavery with an artificial being that is denied consideration as a being. Technical chattel, if you will, and if you've been paying attention in tech circles, a lot of mental energy is being funneled into keeping the eggheads' attention firmly in the "we don't want something that is" direction. Investors want robots that won't/can't say no. | |
| ▲ | ModernMech 5 days ago | parent [-] | | What's interesting about this proposition is that by the time you create a machine capable enough to replace humans the way they want, we'll have to start talking about robot personhood, because by then they will be indistinguishable from us. I don't think you can get the kinds of robots they want without also inventing the artificial equivalent of a soul. So their whole moral sidestep to reimplement slavery won't even work. Enslaving sapient beings is evil whether they are made of meat or metal. | |
| ▲ | salawat 5 days ago | parent [-] | | You are far too optimistic in terms of willingness of the moneyed to let something like a toaster having theoretical feelings get in the way of their Santa Claus machines. | | |
| ▲ | ModernMech 5 days ago | parent [-] | | Seeing as they call us NPCs, I'm pretty sure they think all our feelings are theoretical. |
|
|
|
|
|
|
| ▲ | TheOtherHobbes 5 days ago | parent | prev | next [-] |
| Human developers by their very nature are probabilistic. Probabilistic is NOT deterministic. Which means the manager is never really sure if the developer solved the problem, or if they introduced some bugs, or if their solution is robust and ideal even when it seems to be working. All of which is beside the point, because soon-ish LLMs are going to develop their own equivalents of experimentation, formalisation of knowledge, and collective memory, and then solutions will become standardised and replicable - likely with a paradoxical combination of a huge loss of complexity and solution spaces that are humanly incomprehensible. The arguments here are like watching carpenters arguing that a steam engine can't possibly build a table as well as they can. Which is - you know - true. But that wasn't how industrialisation worked out. |
|
| ▲ | threatofrain 5 days ago | parent | prev | next [-] |
| So it's a noisy abstraction. Programmers deal with that all the time. Whenever you bring in an outside library or dependency there's an implicit contract that you don't have to look underneath the abstraction. But it's noisy so sometimes you do. Colleagues are the same thing. You may abstract business domains and say that something is the job of your colleague, but sometimes that abstraction breaks. Still good enough to draw boxes and arrows around. |
| |
| ▲ | delfinom 5 days ago | parent | next [-] | | Noisy is an understatement, it's buggy, it's error-filled, it's time-consuming and inefficient. It's the exact opposite of automation, but great for job security. | |
| ▲ | soraminazuki 5 days ago | parent [-] | | It's unfortunately not great for job security either. Do you know how Google massively underinvests in support? Their support is mostly automated and is only good at shooing people away. Many companies would jump at the opportunity to adopt AI and accept massive declines in quality as long as it results in cost savings. Working people and customers will get screwed hard. |
| |
| ▲ | soraminazuki 5 days ago | parent | prev [-] | | Competent programmers use well-established libraries and dependencies, not ones that are as unreliable as LLMs. |
|
|
| ▲ | Paradigma11 5 days ago | parent | prev | next [-] |
| "LLMs, by their very nature are probabilistic." So are humans and yet people pay other people to write code for them. |
| |
| ▲ | const_cast 5 days ago | parent | next [-] | | Yes but we don't call humans abstractions. A software engineer isn't an abstraction over code. | | |
| ▲ | threatofrain 5 days ago | parent [-] | | No, but depending on your governance structure, we have software engineers abstract over domains. And then we draw boxes and arrows around the work of your colleagues without looking inside the box. | |
| ▲ | skydhash 5 days ago | parent [-] | | You wish! Bus factor risk is why you don’t do this. Having siloed knowledge is one of the first steps towards bad engineering. Unless someone else’s code is proven bug-free, you don’t usually rely on it. You just have someone to throw bug tickets at. | |
| ▲ | threatofrain 5 days ago | parent | next [-] | | Very true, my brain is stuck in scaling out from small teams. In that world, you can't help but accept plenty of bus factor, and once you get to enough people making sure everyone understands each other's domains is a bit too much. | |
| ▲ | skydhash 5 days ago | parent | prev [-] | | EDIT *towards bad engineering, unless* |
|
|
| |
| ▲ | benterix 5 days ago | parent | prev | next [-] | | Yeah but in spite of that if you ask me to take a Jira ticket and do it properly, there is a much higher chance that I'll do it reliably and the rest of my team will be satisfied, whereas if I bring an LLM into the equation it will wreak havoc (I've witnessed a few cases and some people got fired, not really for using LLMs but for not reviewing their output properly - which I can even understand somehow as reviewing code is much less fun than creating it). |
| ▲ | zasz 5 days ago | parent | prev [-] | | Yeah and the people paying other people to write code won't understand how the code works. AI as currently deployed stands a strong chance of reducing the ranks of the next generation of talented devs. |
|
|
| ▲ | groby_b 5 days ago | parent | prev | next [-] |
| > An abstraction is a deterministic, pure function That must be why we talk about leaky abstractions so much. They're neither pure functions, nor are they always deterministic. We as a profession have been spoilt by mostly deterministic code (and even then, we had a chunk of probabilistic algorithms, depending on where you worked). Heck, I've worked with compilers that used simulated annealing for optimization, 2 decades ago. Yes, it's a sea change for CRUD/SaaS land. But there are plenty of folks outside of that who actually took the "engineering" part of software engineering seriously, and understand just fine how to deal with probabilistic processes and risk management. |
|
| ▲ | pmarreck 5 days ago | parent | prev | next [-] |
| > LLMs, by their very nature are probabilistic I believe that if you can tweak the temperature input (OpenAI recently turned it off in their API, I noticed), an input of 0 should hypothetically result in the same output, given the same input. |
| |
| ▲ | bdhcuidbebe 5 days ago | parent | next [-] | | That only works if you decide to stick to that exact model for the rest of your life, obviously. | | |
| ▲ | oceanplexian 5 days ago | parent [-] | | The point is he said "by its nature". A transformer based LLM when called with the same inputs/seed/etc is literally the textbook definition of a deterministic system. |
| |
| ▲ | sarchertech 5 days ago | parent | prev [-] | | No one uses temperature 0 because the results are terrible. |
|
|
| ▲ | 5 days ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | oceanplexian 5 days ago | parent | prev | next [-] |
| > LLMs, by their very nature are probabilistic. This couldn't be any more wrong. LLMs are 100% deterministic. You just don't observe that feature because you're renting it from some cloud service. Run it on your own hardware with a consistent seed, and it will return the same answer to the same prompt every time. |
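A minimal sketch of the fixed-seed case with a local model (same assumptions as the greedy-decoding sketch upthread: the transformers and torch packages and the public "gpt2" checkpoint; reproducibility holds on a fixed software/hardware stack):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
    inputs = tok("The same prompt", return_tensors="pt")

    def run(seed: int):
        torch.manual_seed(seed)  # reset the sampling RNG before each run
        return model.generate(**inputs, do_sample=True, temperature=0.8,
                              top_k=50, max_new_tokens=20)

    assert torch.equal(run(1234), run(1234))  # same seed -> same answer
    print(tok.decode(run(1234)[0]))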
| |
| ▲ | maltalex 5 days ago | parent | next [-] | | That’s like arguing that random number generators are not random if you give them a fixed seed. You’re splitting hairs. LLMs, as used in practice in 99.9% of cases, are probabilistic. | |
| ▲ | kbelder 5 days ago | parent | prev [-] | | I think 'chaotic' is a better descriptor than 'probabilistic'. It certainly follows deterministic rules, unless randomness is deliberately injected. But the interaction of the rules and the context they operate in is so convoluted that you can't trace an exact causal relationship between the input and output. | |
| ▲ | ModernMech 5 days ago | parent [-] | | It's chaotic in general. The randomness makes it chaotic and nondeterministic. Chaotic systems aren't that bad to work with as long as they are deterministic. Chaotic + nondeterministic is like building on quicksand. |
|
|
|
| ▲ | CuriouslyC 5 days ago | parent | prev | next [-] |
| Ok, let's call it a stochastic transformation over abstraction spaces. It's basically sampling from the set of deterministic transformations given the priors established by the prompt. |
| |
| ▲ | soraminazuki 5 days ago | parent [-] | | You're bending over backwards to imply that it's deterministic without saying it is. It's not. LLMs, by their very nature, don't have a well-defined relationship between their input and output. They make tons of mistakes that are utterly incomprehensible because of that. |
|
|
| ▲ | chermi 5 days ago | parent | prev | next [-] |
| Just want to commend you for the perfect way of describing this re. not being an abstraction |
|
| ▲ | upcoming-sesame 5 days ago | parent | prev | next [-] |
| agree, but does this distinction really make a difference? I think the OP's point is still valid |
|
| ▲ | glitchc 5 days ago | parent | prev | next [-] |
| > LLMs, by their very nature are probabilistic. Probabilistic is NOT deterministic. Although I'm on the side of getting my hands dirty, I'm not sure the difference is that significant. A modern compiler embeds a considerable degree of probabilistic behaviour. |
| |
| ▲ | ashton314 5 days ago | parent | next [-] | | Compilers use heuristics which may result in dramatically different results between compiler passes. Different timing effects during compilation may constrain certain optimization passes (e.g. "run algorithm x over the nodes and optimize for y seconds") but in the end the result should still not modify defined observable behavior, modulo runtime. I consider that to be dramatically different than the probabilistic behavior we get from an LLM. | |
| ▲ | davidrupp 5 days ago | parent | prev [-] | | > A modern compiler embeds a considerable degree of probabilistic behaviour. Can you give some examples? | | |
| ▲ | hn_acc1 5 days ago | parent | next [-] | | There are pragmas you can give to a compiler to tell it to "expect that this code path is (almost) never followed". For example, if you have an assert on nullptr. You want it to assume the assert rarely gets triggered, and highly optimize instruction scheduling / memory access for the "not nullptr" case, but still assert (even if it's really, REALLY slow, relatively speaking) to handle the nullptr case. | |
| ▲ | glitchc 4 days ago | parent | prev | next [-] | | Any time the language specification is undefined, the compiler behaviour will be probabilistic at best. Here's an example for C: https://wordsandbuttons.online/so_you_think_you_know_c.html | |
| ▲ | WD-42 5 days ago | parent | prev | next [-] | | I keep hearing this but it’s a head scratcher. They might be thinking of branch prediction, but that’s a function of the cpu, not the compiler. | |
| ▲ | ModernMech 5 days ago | parent | prev [-] | | It’s not that they embed probabilistic behavior per se; it's more that they are chaotic systems, in that a slight change of input can drastically change the output. But ideally, a good compiler is deterministic: given the same input, the output should always be the same. If that were not generally true, programming would be much harder than it is. | |
| ▲ | glitchc 4 days ago | parent [-] | | No, it can also vary on the input. The -ffast-math flag in gcc is a good example. |
|
|
|
|
| ▲ | eikenberry 5 days ago | parent | prev | next [-] |
| Local models can be deterministic and that is one of the reasons why they will win out over service based models once the hardware becomes available. |
|
| ▲ | bckr 5 days ago | parent | prev | next [-] |
| The LLM is not part of the application. The LLM expands the text of your design into a full application. The commenter you’re responding to is clear that they are checking the outputs. |
|
| ▲ | RAdrien 5 days ago | parent | prev | next [-] |
| This is an excellent reply |
|
| ▲ | rajap 5 days ago | parent | prev | next [-] |
| with proper testing you can make sure that given A the returned value is B |
|
| ▲ | charcircuit 5 days ago | parent | prev | next [-] |
| >LLMs, by their very nature are probabilistic. So are compilers, but people still successfully use them. Compilers and LLMs can both be made deterministic but for performance reasons it's convenient to give up that guarantee. |
| |
| ▲ | hn_acc1 5 days ago | parent | next [-] | | AIUI, if you made an LLM deterministic, every mostly-similar prompt would return the same result (i.e. access the same training data set) and if that's wrong, the LLM is just plain broken for that example. Hacked-in "temperature" (randomness) is the only way to hopefully get a correct result - eventually. | |
| ▲ | WD-42 5 days ago | parent | prev [-] | | What are these non deterministic compilers I keep hearing about, honestly curious. | | |
| ▲ | charcircuit 5 days ago | parent | next [-] | | For example looping over the files in a directory can happen in a different order depending on the order the files were created in. If you are linking a bunch of objects the order typically matters. If the compiler is implemented correctly the resulting binary should functionally be the same but the binary itself may not be exactly the same. Or even when implemented correctly you will see cases where different objects can be the one to define a duplicate symbol depending on their relative order. | | |
| ▲ | ModernMech 5 days ago | parent [-] | | That's not nondeterminism though, you've changed the input (the order of the files). Nondeterminism would be if the binary changes despite the files being in the same order. If the binary is the same holding fixed the order of the files, then the output is deterministic. |
| |
| ▲ | PhunkyPhil 5 days ago | parent | prev [-] | | GCC can use randomized branch prediction. |
|
|
|
| ▲ | daveguy 5 days ago | parent | prev [-] |
| > An abstraction is a deterministic, pure function that when given A always returns B. That is just not correct. There is no rule that says an abstraction is strictly functional or deterministic. In fact, the original abstraction was likely language, which is clearly neither. The cleanest and easiest abstractions to deal with have those properties, but they are not required. |
| |
| ▲ | robenkleene 5 days ago | parent | next [-] | | This is such a funny example because language is the main way that we communicate with LLMs. Which means you can tie both of your points together in the same example: If you take a scene and describe it in words, then have an LLM reconstruct the scene from the description, you'd likely get a scene that looks very different from the original source. This simultaneously makes both your point and the point of the person you're responding to: 1. Language is an abstraction and it's not deterministic (it's really lossy) 2. LLMs behave differently than the abstractions involved in building software, where normally if you gave the same input, you'd expect the same output. | |
| ▲ | daveguy 5 days ago | parent [-] | | Yes, most abstractions are not as clean as leak free functional abstractions. Most abstractions in the world are leaky and lossy. Abstraction was around long before computers were invented. |
| |
| ▲ | beepbooptheory 5 days ago | parent | prev [-] | | What is the thing that language itself abstracts? | | |
| ▲ | fkyoureadthedoc 5 days ago | parent [-] | | Your thoughts, I'd say, but it's more of a two-way street than what I think of as abstraction. | |
| ▲ | daveguy 5 days ago | parent [-] | | Okay, language was the original vehicle for abstraction if everyone wants to get pedantic about it. And yes, abstraction of thought. Only in computerland (programming, mathematics and physics) do you even have the opportunity to have leak-free functional abstractions. That is not the norm. LLM-like leaky abstractions are the norm. | | |
| ▲ | beepbooptheory 5 days ago | parent [-] | | This is clearly not true. For example, the Pythagorean theorem is an old, completely leak-free abstraction with no computer required. Sorry for being pedantic, I was just curious what you mean at all. Language as abstraction of thought implies that thought is always somehow more "general" than language, right? But if that was the case, how could I read a novel that brings me to tears? Is not my thought in this case more the "lossy abstraction" of the language than the other way around? Or, what is the abstraction of the "STOP" on the stop sign at the intersection? | |
|
|
|
|