| ▲ | benterix 5 days ago | parent | next [-] | | > The vast majority of programmers don't know assembly, so can in fact not work without all the abstractions they rely on. The problem with this analogy is obvious when you imagine an assembler generating machine code that doesn't work half of the time and a human trying to correct that. | | |
| ▲ | vidarh 5 days ago | parent | next [-] | | An abstraction doesn't cease to be one because it's imperfect, or even wrong. | |
| ▲ | nerdsniper 5 days ago | parent | prev [-] | | I mean, it’s more like 0.1% of the time but I’ve definitely had to do this in embedded programming on ARM Cortex M0-M3. Sometimes things just didn't compile the way I expected. My favorite was when I smashed the stack and I overflowed ADC readings into the PC and SP, leading to the MCU jumping completely randomly all over the codebase. Other times it was more subtle things, like optimizing away some operation that I needed to not be optimized away. |
| ▲ | maltalex 5 days ago | parent | prev | next [-] | | > Do you therefore argue programming languages aren't abstractions? Yes, and no.
They’re abstractions in the sense of hiding the implementation details of the underlying assembly. Similarly, assembly hides the implementation details of the CPU, memory, and other hardware components. However, with programming languages you don’t need to know the details of the underlying layers except in very rare cases. The abstraction that programming languages provide is simple, deterministic, and well documented. So, in 99.999% of cases, you can reason based on the guarantees of the language, regardless of how those guarantees are provided.
With LLMs, the relation between input and output is much looser. The output is non-deterministic, and tiny changes to the input can create enormous changes in the output, seemingly without reason. It’s much shakier ground to build on. | | |
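The "reason based on the guarantees of the language" point can be made concrete. A small sketch (plain CPython, standard library only) of guarantees you rely on daily without ever thinking about how the interpreter provides them:

```python
# Guarantees the Python language makes, regardless of how the
# interpreter implements them underneath:

# ints are arbitrary precision -- arithmetic never silently overflows
assert 2**64 + 1 == 18446744073709551617

# dicts preserve insertion order (guaranteed since Python 3.7)
d = {"b": 1, "a": 2}
assert list(d.keys()) == ["b", "a"]

# integer division is defined to round toward negative infinity
assert -7 // 2 == -4
```

Each of these holds on every conforming implementation, which is exactly the kind of stable contract the comment is describing.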
| ▲ | impure-aqua 4 days ago | parent | next [-] | | I do not think determinism of behaviour is the only thing that matters for evaluating the value of an abstraction - exposure to the output is also a consideration. Assignment in Python is certainly deterministic and well-documented, but depending on how a value is produced you can end up holding either a copy (2x memory consumption) or just another reference to the same object (one more 64-bit pointer). A name that aliased an object can also quietly end up naming a copy after later operations. Do you think this through every time you write =? The consequences of this can be significant (e.g. operating on a large file in memory); I have seen SWEs make errors in FastAPI multipart upload pipelines that have increased memory consumption by 2x or 3x in this manner. Meanwhile I can ask an LLM to generate me Rust code, and it is clearly obvious what impact the generated code has on memory consumption. If it is a reassignment (b = a) it will be a move, and future attempts to access the value of a would refuse to compile and be highlighted immediately in an IDE linter. If the LLM does b = &a, it is clearly borrowing, which has the size of a pointer (+64 bits). If the LLM did b = a.clone(), I would clearly be able to see that we are duplicating this data structure in memory (2x consumption). The LLM code certainly is non-deterministic; it will be different depending on the questions I asked (unlike a compiler). However, in this particular example, the chosen output format/language (Rust) directly exposes me to the underlying behaviour in a way that is both lower-level than Python (which I might choose for writing quick code myself) yet also much more interpretable as a human than, say, a binary that GCC produces. I think this has significant value. | |
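The Python half of this comparison can be sketched in a few lines: plain assignment binds another name to the same object, while an explicit copy duplicates the whole structure. The variable names here are illustrative, not from the original comment:

```python
import sys

big = list(range(1_000_000))

# plain assignment binds a new name to the same object: no copy,
# roughly one more pointer-sized reference
alias = big
assert alias is big
alias.append(-1)
assert big[-1] == -1          # the mutation is visible through both names

# an explicit copy duplicates the structure: ~2x memory
dup = big[:]
assert dup is not big
assert sys.getsizeof(dup) > 1_000_000   # a second multi-megabyte list
dup.append(-2)
assert big[-1] == -1          # the original is unaffected by the copy
```

Both behaviours are deterministic and documented; the comment's point is that nothing at the call site visually distinguishes the cheap binding from the expensive copy the way Rust's `&` and `.clone()` do.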
| ▲ | 5 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | lock1 4 days ago | parent | prev [-] | | Unrelated to the gp post, but aren't LLMs more like a deterministic chaotic system than a "non-deterministic" one? "Tiny changes to the input can change the output quite a lot" is similar to the "extreme sensitivity to initial conditions" property of a chaotic system. I guess that could be problematic behavior if you want reproducibility akin to a (relatively) reproducible abstraction like a compiler. With LLMs, there are too many uncontrollable variables to precisely reproduce a result from the same input. |
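The logistic map is the textbook instance of what this comment describes: a fully deterministic rule with extreme sensitivity to initial conditions. A minimal sketch (r = 4 is the classic chaotic regime):

```python
# logistic map x -> r*x*(1-x): deterministic, yet a 1e-12 perturbation
# of the starting point grows until the trajectories bear no relation
r = 4.0
x, y = 0.3, 0.3 + 1e-12
max_gap = 0.0
for _ in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))

# re-running with identical inputs reproduces the result exactly
# (deterministic), but the two nearby starts have diverged wildly
assert max_gap > 0.1
```

Rerunning the loop always gives the same numbers, which is the distinction the comment draws: chaotic, not non-deterministic.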
| ▲ | WD-42 5 days ago | parent | prev | next [-] | | The vast majority of programmers could learn assembly, most of it in a day. They don’t need to, because the abstractions that generate it are deterministic. | |
| ▲ | strix_varius 5 days ago | parent | prev | next [-] | | This is a tautology. At some level, nobody can work at a lower level of abstraction. A programmer who knows assembly probably could not physically build the machine it runs on. A programmer who could do that probably could not smelt the metals required to make that machine. etc. However, the specific discussion here is about delegating the work of writing to an LLM, vs abstracting the work of writing via deterministic systems like libraries, frameworks, modules, etc. It is specifically not about abstracting the work of compiling, constructing, or smelting. | | |
| ▲ | vidarh 5 days ago | parent [-] | | This is meaningless. An LLM is also deterministic if configured to be so, and any library, framework, module can be non-deterministic if built to be. It's not a distinguishing factor. | | |
| ▲ | strix_varius 5 days ago | parent [-] | | That isn't how LLMs work. They are probabilistic. Running them on even different hardware yields different results. And the deltas compound the longer your context and the more tokens you're using (like when writing code). But more importantly, always selecting the most likely token traps the LLM in loops, reduces overall quality, and is infeasible at scale. There are reasons that literally no LLM that you use runs deterministically. | | |
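The "different hardware yields different results" claim ultimately rests on floating-point arithmetic: addition is not associative, so a kernel that reduces in a different order produces a slightly different sum, and over a long generation those deltas can flip a token choice. A minimal illustration:

```python
# floating-point addition is not associative; different summation
# orders (as used by different hardware or kernels) give different
# results for the same operands
a, b, c = 1e16, 1.0, -1e16

assert (a + b) + c == 0.0   # the 1.0 is absorbed into 1e16 and lost
assert (a + c) + b == 1.0   # reordered, the 1.0 survives
```

In a model, sums like this appear in every matrix multiply, so the reduction order chosen by the hardware leaks into the logits.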
| ▲ | vidarh a day ago | parent [-] | | With temperature set to zero, they are deterministic if inference is implemented with deterministic calculations. Only when you turn the temperature up do they become probabilistic for a given input. If you take shortcuts in implementing the inference, then sure, rounding errors may accumulate and prevent that, but that is not an issue with the models but with your choice of how to implement the inference. |
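The temperature argument can be sketched with a toy sampler (illustrative only, not any real model's implementation): at temperature 0, decoding collapses to argmax, a pure function of the logits; above 0, it becomes a random draw from a softmax distribution.

```python
import math
import random

def pick_token(logits, temperature):
    """Toy next-token selection over a logit vector."""
    if temperature == 0:
        # greedy decoding: always the highest-scoring token -- deterministic
        return max(range(len(logits)), key=lambda i: logits[i])
    # softmax with temperature, then a random draw -- probabilistic
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

logits = [1.0, 3.5, 2.0, 0.5]
# temperature 0: same input, same output, every time
assert all(pick_token(logits, 0) == 1 for _ in range(100))
```

This is the sense in which the comment says determinism is a property of the inference setup, not of the weights themselves.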
| ▲ | robenkleene 5 days ago | parent | prev [-] | | Fair point, I elaborated what I mean here https://news.ycombinator.com/item?id=45116976 To address your specific point in the same way: When we're talking about programmers using abstractions, we're usually not talking about the programming language they're using, we're talking about the UI framework, networking libraries, etc. they're using. Those are the APIs they're calling with their code, and those are all abstractions implemented at (roughly) the same level of abstraction as the programmer's day-to-day work. I'd expect a programmer to be able to re-implement those if necessary. |
| ▲ | robenkleene 5 days ago | parent [-] | | Note, I'm not saying there are never situations where you'd delegate something that you can do yourself (the whole concept of apprenticeship is based on doing just that). Just that it's not an expectation, e.g., you don't expect a CEO to be able to do the CTO's job. I guess I'm not 100% sure I agree with my original point, though: should a programmer working on JavaScript for a website's frontend be able to implement a browser engine? Probably not, but the original point I was trying to make is that I would expect a programmer working on a browser engine to be able to re-implement any abstractions they're using in their day-to-day work if necessary. | |
| ▲ | AnIrishDuck 5 days ago | parent | next [-] | | The advice I've seen about delegation is the exact opposite. Specifically: you can't delegate what you can't do. Partially because, if all else fails, you'll need to step in and do the thing. Partially because, if you can't do it, you can't evaluate whether it's being done properly. That's not to say you need to be _as good_ at the task as the delegatee, but you need to be competent. For example, this HBR article [1]. Pervasive in all advice about delegation is the assumption that you can do the task being delegated, but that you shouldn't. > Just that it's not an expectation, e.g., you don't expect a CEO to be able to do the CTO's job. I think the CEO role is actually the outlier here. I can only speak to engineering, but my understanding has always been that VPs need to be able to manage individual teams, and engineering managers need to be somewhat competent if there's dev work that needs to be done. This should only happen as necessary, and it obviously should be rare. But you get in trouble real quickly if you try to delegate things you cannot accomplish yourself. 1. https://hbr.org/2025/09/why-arent-i-better-at-delegating | |
| ▲ | tguedes 5 days ago | parent | prev | next [-] | | I think what you're trying to reference is APIs or libraries, most of which I wouldn't consider abstractions. I would hope most senior front-end developers are capable of developing a date library for their use case, but in almost all cases it's better to use the built-in Date class, moment, etc. But that's not an abstraction. | |
| ▲ | meheleventyone 5 days ago | parent | prev [-] | | There's an interesting comparison in delegation where for example people that stop programming through delegation do lose their skills over time. |