|
| ▲ | swatcoder 5 hours ago | parent | next [-] |
| Consider a plumber who doesn't understand metallurgy or electronics but relies on some foundational trade principles that they learned from a mentor, and who can understand manufacturer guides for clever new fittings and pumps. That's the level most competent software engineers should be working at. Delegating understanding to LLMs is a totally different thing. It's not plumbing at all. It's more like hiring an unlicensed, generalist but well-reputed handyman from Craigslist and then going out to a movie while they do the work. It could turn out fine, or not, and if it does work out, it could even save time and money if their rate is low enough. But it's not plumbing anymore, and you should be wary about billing plumber's rates for their work, or taking on liability for it, if you haven't even made sure the work meets your own standards of trade and quality. You can argue that it's "one more level of abstraction", but it's a qualitatively different kind of abstraction. And in the economy of skilled labor, and the legal landscape of accountability and liability, that difference is enormously relevant. |
|
| ▲ | discreteevent 5 hours ago | parent | prev | next [-] |
| This argument comes up a lot. The point is that with unreviewed AI, nobody understood the code at any time (including the AI). This is completely different from a C compiler, whose writers and maintainers deeply understand the code it produces. That means that even though I don't understand the output, I can use it with some confidence. Your point about AI being another abstraction similar to the "mostly deterministic" C compiler also comes up often, but there are many arguments against it. If you think the determinism of a compiler and an AI are similar, then I'm not sure you know how either of them works, or have even compared examples of what they produce. |
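The determinism half of this is easy to check mechanically. A minimal Python sketch (using CPython's built-in `compile` as a stand-in for a C compiler) shows that translating the same source twice yields byte-identical output:

```python
# Compiling the same source twice yields byte-identical bytecode:
# the translation step is a pure function of its input.
src = "def add(a, b):\n    return a + b\n"

code1 = compile(src, "<example>", "exec")
code2 = compile(src, "<example>", "exec")

print(code1.co_code == code2.co_code)  # True: same input, same output
```

That reproducibility is what lets you trust a tool you haven't read the internals of: someone else verified it once, and it behaves the same way every time after.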
|
| ▲ | hunterpayne 5 hours ago | parent | prev | next [-] |
| That's a you problem. If you feel this way, it's the universe saying that you aren't very good at writing software. Good engineers don't have this problem. PS: We have way too many levels of abstraction now; that doesn't mean the right answer is to add another. Even worse, unlike the others, LLMs aren't deterministic. |
| |
|
| ▲ | datsci_est_2015 4 hours ago | parent | prev | next [-] |
| There’s a large difference between understanding precisely what some code does and understanding what code intends to do. It’s why “what happens when you begin typing into your web browser’s address bar?” is such a powerful question for weeding out low-quality interview candidates. I’ve never worked at Google, but I can talk about how they probably handle the incoming requests. I’ve never worked on Windows OS-level software, but I can start talking about input buffers. Kind of reminds me of WIRED’s “5 Levels” series… Anyway, my point is that prompts are non-deterministic, and there’s no way of inferring what code output by an LLM is intended to do, because that’s not how LLMs work. |
| |
| ▲ | jcgrillo 4 hours ago | parent [-] | | > because that’s not how LLMs work It's almost impossible to have a rational discussion about the effects of this technology because this point is so easily lost. Even super smart, credentialed, expert people easily (and often!) fall into the trap of anthropomorphizing the bot because it makes human noises. It's really important to remember the mechanical principles underlying its function. It's no different from any other computer program in that respect; the difference is the psychological hold it gets on the user. There is no intention behind its actions, but it's very easy to hallucinate one, because with every other thing that speaks human language there is some intention behind the words and actions. |
|
|
| ▲ | psychoslave 4 hours ago | parent | prev | next [-] |
| Precisely: compilers are deterministic, so extreme cases aside, we could expect that, given the documentation and a piece of code, engineers would most of the time be able to translate it properly to assembly and explain what the assembly actually triggers in mechanical terms. LLMs, as pushed currently, are not deterministic. Moreover, I have yet to see a compiler whose output tries to convince me I'm completely right and brings very smart, interesting points to the table. Quite the contrary, actually, though error messages generally don't explicitly tell users how stupid the proposed code is when it doesn't even pass mere syntax and fundamental logic requirements. |
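The non-determinism being pointed at here is the sampling step in LLM decoding. A toy sketch (hypothetical two-token vocabulary, no real model involved) shows why repeated runs on the same prompt need not agree:

```python
import random

# Toy stand-in for LLM decoding: the "model" assigns probabilities to the
# next token, and generation *samples* from that distribution.
# Hypothetical two-token vocabulary, purely for illustration.
def sample_next_token():
    return random.choices(["foo", "bar"], weights=[0.5, 0.5])[0]

# 50 draws from the same "prompt": all but certain to contain both tokens,
# whereas a compiler maps the same input to the same output every time.
draws = [sample_next_token() for _ in range(50)]
print(len(set(draws)) > 1)
```

At temperature zero (always picking the most probable token) this becomes deterministic in principle, but that is not how the tools are typically run, which is the point being made above.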
|
| ▲ | dogleash 5 hours ago | parent | prev | next [-] |
| To the extent that's true, it's a problem already plaguing the profession. I wouldn't advocate for using different tools, but everyone should be able to reason about the machine instructions underlying their code, both in the immediate sense of the assembly a simple function turns into, and in the tricks language runtimes use to enable their neat features. The attitude that things are magic is poison. There is a difference between feeling confident that something is comprehensible and not yet needing to go learn it, versus resigning yourself to a position of powerlessness. |
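In Python, for instance, the instructions beneath a simple function are one `dis` call away (a sketch; exact opcode names vary between CPython versions):

```python
import dis

def scale(x):
    return x * 2

# Print the bytecode instructions the interpreter actually executes.
dis.dis(scale)

# The same information is available programmatically; on CPython the
# argument lookup shows up as a LOAD_FAST instruction.
opnames = [ins.opname for ins in dis.get_instructions(scale)]
print(opnames)
```

The equivalent exercise for compiled languages is reading the assembly a compiler emits (e.g. `gcc -S` or a tool like Compiler Explorer): not something you do daily, but something you should be able to do.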
| |
| ▲ | ijk 5 hours ago | parent [-] | | I agree in principle, but every time I run a debugger on modern C++ it makes it clear that, rather than being a simple and cutesy transformation, "compiler optimization" is actually black magic. | | |
| ▲ | skydhash 4 hours ago | parent [-] | | It’s not. You “only” have to learn about computer architecture and computation theory. Meaning a lot of maths /s |
|
|
|
| ▲ | skydhash 4 hours ago | parent | prev [-] |
| The thing is that C is formal by itself. Opcodes, Assembly, C, Python, Common Lisp,… are all equivalent to each other, meanings there’s no statement and no algorithm that you can’t map to each other. That’s what it means to be Turing Complete. The main issue is that not everyone cares about the semantic of what they’re writing. You don’t need to know assembly to talk about C’s semantic or know C to talk about Python semantics. It does not require going up and down some abstraction tower. |