armchairhacker | 2 days ago
RL doesn't completely "work" yet; it still has a scalability problem. Claude can write a small project, but as it becomes larger, Claude gets confused and starts making mistakes. I used to think the problem was that models can't learn over time like humans, but maybe that can be worked around. Today's models have large enough context windows to fit a medium-sized project's complete code and documentation, and tomorrow's may be larger; good-enough world knowledge can be maintained by re-training every few months.

The real problem is that even models with large context windows struggle with complexity more than humans do: they miss crucial details, then become very confused when trying to correct their mistakes, and/or miss other crucial details (whereas humans sometimes miss crucial details, but are usually able to spot and fix them without breaking something else).

Reliability is another issue, but I think it's related to scalability: an LLM that cannot make reliable inferences from a small input cannot grow it into a larger output without introducing cascading hallucinations.

EDIT: creative control is also superseded by reliability and scalability. You could generate any image imaginable with a reliable diffusion model by first generating something vague, then repeatedly refining it (specifying which details to change and which to keep), each refinement getting closer to what you're imagining. Except even GPT-4o isn't nearly reliable enough for this technique: while it can handle a couple of refinements, it too starts losing details (changing unrelated things).
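The refinement loop I mean looks roughly like this. A minimal sketch only: generate and edit are placeholders for whatever image-generation and image-editing calls your model exposes, not a real library API.

    # Minimal sketch of the refinement loop described above, assuming two
    # hypothetical callables: generate(prompt) -> image and
    # edit(image, instruction) -> image. Swap in whatever diffusion /
    # image-editing API you actually use; neither name is a real library call.

    def refine_image(initial_prompt, refinements, generate, edit):
        """Start vague, then apply targeted refinements one at a time."""
        image = generate(initial_prompt)   # e.g. "a harbor at dusk, rough composition"
        history = [image]                  # keep every step so you can roll back
        for instruction in refinements:
            # Each instruction should say what to change AND what to keep, e.g.
            # "make the sky stormy; keep the boats and the lighting unchanged".
            image = edit(image, instruction)
            history.append(image)
        return image, history

The loop only converges if edit reliably leaves the "keep" part untouched; my complaint is that after a couple of iterations today's models start changing unrelated details, so it drifts instead of converging.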
dceddia | 2 days ago
I wonder how much of this is that code is less explicit than written language in some ways.

With English, the meaning of a sentence is mostly self-contained. The words have inherent meaning, and if they're not enough on their own, usually the surrounding sentences give enough context to infer the meaning. Usually you don't have to go looking back 4 chapters or look in another book to figure out the implications of the words you're reading. When you DO need to do that (reading a research paper, for instance), the connected knowledge is all at the same level of abstraction.

But with code, despite it being very explicit at the token level, the "meaning" is all over the map and depends a lot on the unwritten mental models the person was envisioning when they wrote it. Function names might be incorrect in subtle or not-so-subtle ways, and side effects and order of execution in one area can affect something in a whole other part of the system (not to mention across the network, but that seems like a separate case to worry about). There are implicit assumptions about timing and such.

I don't know how we'd represent all this other than having extensive and accurate comments everywhere, or maybe some kind of execution graph, but it seems like an important challenge to tackle if we want LLMs to get better at reasoning about larger code bases.
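A toy sketch of the kind of non-locality I mean (every name in it is invented for illustration): the function looks self-contained, but what it actually means depends on state and ordering defined somewhere else entirely.

    # Toy, entirely made-up example: the function reads as a local, pure lookup,
    # but its real behavior depends on a cache that another module's nightly job
    # is supposed to populate, and on whether that job ran before this call.

    _price_cache = {}  # filled elsewhere, on a schedule

    def get_price(sku):
        # If the nightly job hasn't run yet, this silently writes a 0.0 default
        # into the shared cache -- a side effect that changes what every later
        # caller sees, purely as a function of execution order.
        if sku not in _price_cache:
            _price_cache[sku] = 0.0
        return _price_cache[sku]

Nothing at the call site tells you any of that; you only learn it by reading the other module and knowing when the job runs, which is exactly the context an LLM (or a human) has to reconstruct.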
bionhoward | 2 days ago
Claude and 4o aren’t RL trained IIRC? Also, who’s using these for code? You’re cool with not being able to train on the chat logs used to develop your own codebase? Sounds pretty sus