esafak 2 hours ago:

Models do not need to hold the whole code base in memory, and neither do you. You both search for what you need. Models can already memorize more than you!
Jensson 2 hours ago:

> Models do not need to hold the whole code base in memory, and neither do you

Humans rewire their minds to optimize for the codebase; that is why new programmers take a while to get up to speed. LLMs don't do that, and until they do they need the entire thing in context. And the reason we can't do that today is that there isn't enough data in a single codebase to train an LLM to be smart about it, so first we need to solve the problem that LLMs need billions of examples to do a good job. That isn't on the horizon, so we are probably safe for a while.
esafak 2 hours ago:

Getting up to speed is a human problem. Computers are so fast they can 'get up to speed' from scratch for every session, and we help them with AGENTS files and newer things like memories; e.g., https://code.claude.com/docs/en/memory

It is not perfect yet, but the tooling here is improving, and I do not see a ceiling. LSPs + memory solve this problem. I run into issues, but this one is not big for me.
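For context, the memory mechanism linked above is based on persistent Markdown files (such as a project-level CLAUDE.md) that the agent re-reads at the start of every session. The sketch below is a hypothetical example of what such a file might contain; all paths and conventions in it are invented for illustration, not taken from the linked docs:

```markdown
# CLAUDE.md — project memory (hypothetical example)

## Conventions
- All database access goes through `src/db/repository.ts`; never query the DB directly.
- Shared validation helpers live in `src/lib/validate.ts` — reuse them, don't duplicate.

## Gotchas
- The test suite needs `docker compose up db` running before `npm test`.
```

The point is that these notes survive across sessions, so the agent does not have to rediscover project conventions from scratch each time.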
Nextgrid 2 hours ago:

I'll believe it when coding agents can actually produce concise, reusable code instead of reimplementing 10 slightly different versions of the same basic thing on every run. (This is not a rant - I would love for agents to stop doing that, and I know how to make them stop: a proper AGENTS.md that serves as a table of contents for where stuff is. But my point is that as a human I don't need this, and yet, for now, they still do.)
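A "table of contents" AGENTS.md of the kind described might look like the hypothetical sketch below; the directory layout and module names are invented for illustration:

```markdown
# AGENTS.md — where things live (hypothetical example)

- `src/auth/` — login, sessions, token refresh. Extend `AuthService`; do not write a new auth flow.
- `src/utils/http.ts` — the one retrying HTTP client. Reuse it instead of calling `fetch` directly.
- `src/components/forms/` — shared form widgets. Check here before creating a new input component.
```

The idea is that an agent reading this before each run knows where existing abstractions are, instead of rediscovering (or reinventing) them.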
Revanche1367 2 hours ago:

In my experience they can definitely write concise and reusable code. You just need to tell them: "write concise and reusable code." Works well for Codex, Claude, etc.
Nextgrid 2 hours ago:

Writing reusable code is of no use if the next iteration doesn't know where it is and rewrites the same (reusable) code again.
munksbeer an hour ago:

I guide the AI. If I see it produce stuff that I think can be done better, I either do it myself or point it in the right direction. It definitely doesn't do a good job of spotting areas ripe for building abstractions, but that is our job. This thing does the boring parts, and I get to use my creativity thinking about how to make the code more elegant, which is the part I love. What's not to love about that?
Nextgrid an hour ago:

If you're repeatedly prompting, I will defer to my usual retort about LLM coding: programming is about translating unclear requirements in a verbose language (English) into a terse (programming) language. It's generally much faster for me to write the terse language directly than to play a game of telephone with an intermediary in the verbose language in the hope that it (maybe) translates my intentions into the terse language.

In your example, you mention that you prompt the AI and, if it outputs sub-par results, rewrite them yourself. That's my point: over time, you learn what an LLM is good at and what it isn't, and just don't bother with the LLM for the stuff it's not good at. Thing is, as a senior engineer, most of what you do shouldn't be stuff an LLM is good at to begin with. That's not the LLM replacing you; that's the LLM augmenting you.

Enjoy your sensible use of LLMs! But LLMs are not the silver bullet that billions of dollars of investment desperately want us to believe they are.
AstroBen 36 minutes ago:

> programming is about translating unclear requirements in a verbose (English) language into a terse (programming) language

Why are we uniquely capable of doing that, but an LLM isn't? In plan mode I've been seeing them ask for clarifications and gather further requirements. Important business context can be provided to them as well.