| ▲ | klabb3 10 days ago |
| Yes I do! Is that some sort of gotcha? If I can choose between having a script that queries the db and generates a report and “Dave in marketing” who “has done it for years”, I’m going to pick the script. Who wouldn’t? Until machines can reliably understand, operate and self-correct independently, I’d rather not give up debuggability and understandability. |
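The kind of script in question, sketched minimally in Python with sqlite3 (the `orders` table, its columns, and the in-memory database are made-up stand-ins for a real schema):

```python
import sqlite3

def region_report(conn) -> list[tuple[str, int]]:
    """Query the db and return (region, order_count) rows -- deterministic,
    readable, and debuggable, unlike asking Dave (or an LLM) each month."""
    return conn.execute(
        "SELECT region, COUNT(*) FROM orders GROUP BY region ORDER BY region"
    ).fetchall()

# Tiny in-memory database standing in for the real one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "east"), (2, "west"), (3, "east")])

report = region_report(conn)
```

The point being: every line of this can be read, stepped through, and corrected when it's wrong.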
|
| ▲ | OkGoDoIt 9 days ago | parent | next [-] |
| I think this comment and the parent comment are talking about two different things. One of you is talking about using nondeterministic ML to implement the actual core logic (an automated script or asking Dave to do it manually), and one of you is talking about using it to design the logic (the equivalent of which is writing that automated script). LLM’s are not good at actually doing the processing, they are not good at math or even text processing at a character level. They often get logic wrong. But they are pretty good at looking at patterns and finding creative solutions to new inputs (or at least what can appear creative, even if philosophically it’s more pattern matching than creativity). So an LLM would potentially be good at writing a first draft of that script, which Dave could then proofread/edit, and which a standard deterministic computer could just run verbatim to actually do the processing. Eventually maybe even Dave’s proofreading would be superfluous. Tying this back to the original article, I don’t think anyone is proposing having an LLM inside a chip that processes incoming data in a non-deterministic way. The article is about using AI to design the chips in the first place. But the chips would still be deterministic, the equivalent of the script in this analogy. There are plenty of arguments to make about LLM‘s not being good enough for that, not being able to follow the logic or optimize it, or come up with novel architectures. But the shape of chip design/Verilog feels like something that with enough effort, an AI could likely be built that would be pretty good at it. All of the knowledge that those smart knowledgeable engineers which are good at writing Verilog have built up can almost certainly be represented in some AI form, and I wouldn’t bet against AI getting to a point where it can be helpful similarly to how Copilot currently is with code completion. 
Maybe not perfect anytime soon, but good enough that we could eventually see a path to 100%. It doesn’t feel like there’s a fundamental reason this is impossible on a long enough time scale. |
| |
| ▲ | klabb3 9 days ago | parent | next [-] | | > So an LLM would potentially be good at writing a first draft of that script, which Dave could then proofread/edit

Right, and there's nothing fundamentally wrong with this, nor is it a novel method. We've been joking about copying code from Stack Overflow for ages, but at least we didn't pretend it was the peak of human achievement. Ask a teacher the difference between writing an essay and proofreading one.

Look, my entire claim from the beginning is that understanding is important (epistemologically, it may be what separates engineering from alchemy, but I digress). Practically speaking, as we see larger and larger pieces of LLM-written code, it will come to resemble Dave and his incomprehensible VBA script: it works, but nobody knows why.

Don't get me wrong, this isn't new at all. It's an ever-present wet blanket that slowly suffocates engineering ventures that don't pay attention and actively resist it. In that context, uncritically inviting a second wave of monkeys to the nuclear control panels is what baffles me. | | |
| ▲ | crabmusket 9 days ago | parent [-] | | > We’ve been joking about copying code from stack overflow for ages

Tangent for a slight pet peeve of mine: "We" did joke about this, but probably because most of our jobs are not in chip design. "We" also know the limits of this approach. The fact that Stack Overflow is the most SEO-optimised result for "how to center a div" (which we always forget how to do) has no bearing on the times when we face an actual problem requiring our attention and intellect: say, diagnosing a performance issue, negotiating requirements and how they subtly differ in an edge case from the current system behaviour, or discovering a shared abstraction across 4 pieces of code that are nearly but not quite the same.

I agree with your posts here; the Stack Overflow thing in general is just a small hobby horse I have. | | |
| ▲ | mrguyorama 7 days ago | parent [-] | | Also, the Stack Overflow thing has more to do with all of us being generalists than with incompetence. I look up "how do I sort a list in language X" because I know from school that there IS a defined good way to do it, probably built into the language, and it will be extremely idiomatic; but I haven't used language X in five years, the specifics might have changed, and I don't remember the exact punctuation. |
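In Python, for instance, the "defined good way" is the built-in `sorted()`, with a `key` function for anything beyond natural ordering; the part one forgets is exactly this kind of punctuation:

```python
# The idiomatic built-in: sorted() returns a new list, and key= customizes
# the ordering (here, case-insensitive) without mutating the input.
names = ["Carol", "alice", "Bob"]
case_insensitive = sorted(names, key=str.lower)
```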
|
| |
| ▲ | hulitu 8 days ago | parent | prev [-] | | > So an LLM would potentially be good at writing a first draft of that script, which Dave could then proofread/edit, and which a standard deterministic computer could just run verbatim to actually do the processing Or Dave could write a first draft of that script, saving him the time needed to translate what the LLM composed. |
|
|
| ▲ | og_kalu 9 days ago | parent | prev [-] |
| > If I can choose between having a script that queries the db and generates a report and “Dave in marketing” who “has done it for years”

If you could, that would be nice, wouldn't it? And if you couldn't? If people were saying "let's replace Casio calculators with interfaces to GPT," that would be crazy and I would wholly agree with you. But by and large, the processes people are scrambling to place LLMs in are ones that typical machines struggle with or fail at and that humans excel at or do decently (and that LLMs are making some headway in).

You're making the wrong distinction here. It's not Dave vs. your nifty script. It's Dave or nothing at all. There's no point comparing LLM performance to some hypothetical perfect understanding machine that doesn't exist. You compare it to the thing it's meant to replace: humans. How well can the LLM do this compared to Dave? |
| |
| ▲ | kuhewa 9 days ago | parent [-] | | > by and large, the processes people are scrambling to place LLMs in are ones that typical machines struggle or fail I'm pretty sure they are scrambling to put them absolutely anywhere it might save or make a buck (or convince an investor that it could) | | |
| ▲ | blincoln 9 days ago | parent | next [-] | | 100%, and a lot of them are truly terrible use cases for LLMs. For example, using an LLM to transform structured data into JSON, and running two LLMs in parallel to try to catch the inevitable failures, instead of just writing code that outputs JSON. | | |
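For contrast, the deterministic version of that task is a few lines of the standard library, with no second model needed to double-check the output (the field names here are invented for illustration):

```python
import json

def to_json(rows: list[dict]) -> str:
    # Deterministic serialization: always valid JSON, same input -> same
    # output, so there are no "inevitable failures" to catch.
    return json.dumps(rows, sort_keys=True)

payload = to_json([{"id": 1, "name": "widget"}])
```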
| ▲ | og_kalu 8 days ago | parent [-] | | Your example does not make much sense (in response to OP). That's not saving anybody any money. |
| |
| ▲ | og_kalu 8 days ago | parent | prev [-] | | If your task is already solved well by a deterministic script/algorithm, you are not going to save money by porting it to LLMs, even if you use open-source models. | | |
| ▲ | kuhewa 8 days ago | parent [-] | | 'could' is doing a whole lot of work in that sentence; I'm being charitable. The reality is that LLMs are being crammed into places where they aren't very sensible, under thin justifications, just like the last few big ideas were (c.f. blockchain). | | |
| ▲ | og_kalu 8 days ago | parent [-] | | If it can't be solved by a script, then what's the problem with seeing if you can use LLMs? I guess I just don't see your point. So a few purported applications are not very sensible. So what? This is every breakthrough ever. |
|
|
|
|