| ▲ | infinitewars 9 hours ago |
| AI is just another layer of abstraction. I'm sure the assembly-language folks were grumbling at one point about functions being too abstract. |
|
| ▲ | kmaitreys 9 hours ago | parent | next [-] |
| The high-level languages that replaced assembly are not black boxes. |
| |
| ▲ | kaoD 9 hours ago | parent | next [-] |
And they're as deterministic as the underlying thing they're abstracting... which is kinda what makes an abstraction an abstraction. I get that people love saying LLMs are just compilers from human language to $OUTPUT_FORMAT, but they simply are not, except in a stretchy metaphorical sense. That's only true if you reduce the definition of "compiler" to a narrow `f = In -> Out`. But that is _not_ a compiler. We have a word for that: function. And in the LLM's case, an impure one.
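The distinction kaoD is drawing can be sketched in a few lines of Python. This is a toy illustration, not any real compiler or LLM API; `compile_fn` and `llm_fn` are made-up names standing in for the two kinds of function:

```python
import random

def compile_fn(source: str) -> str:
    """Toy stand-in for a compiler: a pure function.

    The same input always yields the same output."""
    return source.upper()  # pretend this is codegen

def llm_fn(prompt: str) -> str:
    """Toy stand-in for an LLM call: an impure function.

    Unseeded sampling means repeated calls with the same prompt
    can return different outputs."""
    rng = random.Random()  # fresh, unseeded state on every call
    return " ".join(rng.choices(["foo", "bar", "baz"], k=3))

# Pure: calling twice with the same input gives identical results.
assert compile_fn("ret 0") == compile_fn("ret 0")
# Impure: llm_fn("ret 0") == llm_fn("ret 0") will often be False.
```

Both have the shape `In -> Out`, but only the first is the deterministic mapping people usually mean by "compiler".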
| ▲ | 9 hours ago | parent | prev [-] | | [deleted] |
|
|
| ▲ | makerofthings 9 hours ago | parent | prev | next [-] |
| I totally see what you're saying, but to me this feels different. Compilation is a fairly mechanical and well-understood process. The large language models aren't just compiling English to assembler via your chosen language: they try to guess what you want, they add extra bits you didn't ask for, and they do some of your solution thinking for you. That feels like more than just abstraction to me. |
| |
| ▲ | seanosaur 9 hours ago | parent [-] | | I think it's still abstraction by definition, but you're right that it's a much larger single leap than in the past. | | |
| ▲ | kaoD 9 hours ago | parent | next [-] | | > it's still abstraction by definition I dislike arguing semantics but I bet it's not an abstraction by most engineers' definition of the word. | |
| ▲ | acedTrex 8 hours ago | parent | prev [-] | | If this were true, then a PM's Jira tickets would be an abstraction over an engineer's code. It's not necessarily wrong under some interpretations, but it's not how the majority of engineers would define the word. |
|
|
|
| ▲ | mananaysiempre 9 hours ago | parent | prev | next [-] |
| > AI is just another layer of abstraction.

A fundamentally unreliable one: even an AI system that is implemented entirely correctly, as far as any human can tell, can yield wrong answers, and nobody can say why.

That's not entirely the fault of the technology, as natural language just doesn't make for reliable specs, especially in inexperienced hands. So in a sense we finally got the natural-language programming that some among our ancestors dreamed of, and it turned out to be as unreliable as some others of our ancestors said all along.

It partly is the fault of the technology, however, because while you can level all the same complaints against a human programmer, a (motivated) human will generally be much better at learning from their mistakes than the current generation of LLM-based systems. (And that's even if we ignore other issues, such as the fact that it leaves everybody entirely reliant on the continued support, and willingness to transact, of a handful of vendors in a market with a very high barrier to entry.) |
|
| ▲ | beart 9 hours ago | parent | prev | next [-] |
| AI is non-deterministic. Can it still be considered an abstraction over a deterministic layer? |
| |
| ▲ | entrox 6 hours ago | parent | next [-] |
Does it have to be? The etymology of the word "abstraction" is "to draw away", and I think it's relevant to consider just how far away you want to go. If I'm focused purely on the general outcome as written in a requirement or specification document, I'd consider everything below that as "abstracted away".

For example, this weekend I built my own MCP server for some services I'm hosting on my personal server (*arr, Jellyfin, ...) to be integrated with claude.ai. I wrote down all the things I want it to do and the environment it has to work in, and let Claude go. Not once have I looked at the code. And quite frankly, I don't care: as long as it fulfills my general requirements, it can write Python one time and TypeScript the other, should I choose to regenerate from that document. It might behave slightly differently, but that is OK to a degree.

From my perspective, that is an abstraction. Deterministic? No, but it also doesn't have to be.
| ▲ | kypro 8 hours ago | parent | prev | next [-] |
The argument against this is that human coders are also non-deterministic, so does it really matter if it's a human or an AI agent producing the code, assuming the AI agent is capable of producing human-quality code or better?

I agree it's not a layer of abstraction in the traditional sense, though. AI isn't an abstraction of existing code; it's a new way to produce code. It's an "abstraction layer" in the same way an IDE is an abstraction layer.
| ▲ | bluefirebrand 8 hours ago | parent | next [-] |
> The argument against this is that human coders are also non-deterministic, so does it really matter if it's a human or an AI agent producing the code

Actually yes, because humans can be held accountable for the code they produce. Holding humans accountable for code that LLMs produce would be entirely unreasonable, and no, shifting the full burden of responsibility to the human reviewing the LLM output is not reasonable either.

Edit: I'm of the opinion that businesses are going to start trying to use LLMs as accountability sinks. It's no different from the driver who blames Google Maps after driving into a river following its directions. Humans love to blame their tools.
| ▲ | bit-anarchist 7 hours ago | parent [-] |
> Holding humans accountable for code that LLMs produce would be entirely unreasonable

Why? LLMs have no will or agency of their own; they can only generate code when triggered. That means either nature triggered them or people did. So there's no need to shift burdens around: responsibility already rests with the user or, depending on the case, with whoever forced that user to use LLMs.
| |
| ▲ | ModernMech 7 hours ago | parent | prev [-] | | Human coders and IDEs are not purported to be abstraction layers. |
| |
| ▲ | whattheheckheck 9 hours ago | parent | prev [-] | | It can loop and probabilistically converge on output that meets a set of standards, verified against a standard set of eval inputs. |
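That loop can be sketched as a generate-and-verify retry. This is a toy illustration: `generate` stands in for a nondeterministic LLM call, the integer "spec" and the eval checks are made up, and a real system would generate code and run a test suite instead:

```python
import random

def generate(spec: int, rng: random.Random) -> int:
    """Stand-in for a nondeterministic generator (an LLM call):
    same spec, possibly a different candidate on each call."""
    return rng.choice([spec - 1, spec, spec + 1])

def converge(spec: int, evals, max_attempts: int = 100) -> int:
    """Retry generation until a candidate passes every eval check."""
    rng = random.Random(0)  # seeded here only so the sketch repeats
    for _ in range(max_attempts):
        candidate = generate(spec, rng)
        if all(check(candidate) for check in evals):
            return candidate
    raise RuntimeError("no candidate passed the evals")

# Each individual call is nondeterministic, but the loop converges
# on a value satisfying the standards encoded in the eval suite.
result = converge(42, evals=[lambda c: c == 42])
assert result == 42
```

The determinism lives in the eval suite, not in any single generation: whatever the loop returns is guaranteed to have passed every check.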
|
|
| ▲ | seattle_spring 9 hours ago | parent | prev | next [-] |
| Higher level languages that abstract assembly code are deterministic. AI, on the other hand, is not. |
|
| ▲ | jncfhnb 9 hours ago | parent | prev | next [-] |
| That is abstraction of the implementation of the tool, not of the output. Producing outputs you don't understand is novel. |
|
| ▲ | pavel_lishin 9 hours ago | parent | prev [-] |
| You could say that about atomic bombs, too. |