UncleOxidant 8 hours ago

The memory problem is already being addressed in various ways - antigravity seems to keep a series of status/progress files describing what's been done, what still needs doing, etc. A bit clunky, but it seems to work: I can open it up on a repo I was working in a few days back and it picks up that context, so I don't have to bring it up to speed from scratch every time like I used to. I've heard Claude Code has similar mechanisms.
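
The mechanism is roughly something like this - to be clear, the file name and format below are made up for illustration, not antigravity's or Claude Code's actual layout, just a sketch of the idea:

    import json
    from pathlib import Path

    # Hypothetical memory file; real agents use their own paths/formats.
    MEMORY = Path(".agent/PROGRESS.json")

    def load_context():
        """At session start, read whatever the previous session left behind."""
        if MEMORY.exists():
            return json.loads(MEMORY.read_text())
        return {"done": [], "todo": [], "notes": []}

    def save_context(ctx, done=None, todo=None, note=None):
        """At session end, record what happened so the next run can resume."""
        if done:
            ctx["done"].append(done)
        if todo:
            ctx["todo"].append(todo)
        if note:
            ctx["notes"].append(note)
        MEMORY.parent.mkdir(exist_ok=True)
        MEMORY.write_text(json.dumps(ctx, indent=2))

    ctx = load_context()  # agent reads this before planning anything
    save_context(ctx, done="implemented loop-unrolling pass",
                 todo="add tests for nested loops")

Crude, but it means the "memory" survives across sessions instead of living only in the context window.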

I've been doing stuff with recent models (gemini 3, claude 4.5/6, even smaller open models like GLM5 and Qwen3-coder-next) that was just unthinkable a few months back. Compiler stuff, including implementing optimizations, generating code to target a new, custom processor, etc. I can ask for a significant new optimization feature in our compiler before going to lunch and come back to find it implemented and tested. This is a compiler that targets a custom processor, so there's also Verilog code involved. We're having the AI make improvements on both the hardware and software sides - this is deep-in-the-weeds complex stuff, and AI is starting to handle it with ease. There are fewer and fewer things in the ticket tracker that AI can't implement.

A few months ago I would've completely agreed with you, but the game is changing very rapidly now.

taysco 8 hours ago | parent

This works fine for, like, 2-3 small instruction sets. Once you get to the scale of a real enterprise system, the AI falls down and can't handle that amount of context. It starts ignoring critical pieces or forgetting them. And without constant review, the AI will start prioritizing things that are not your business priority.

I don't agree that they've solved this problem at all, or really in any way that's actually usable.

UncleOxidant 7 hours ago | parent

What I'm saying is, don't get to thinking that the memory problem is some insurmountable, permanent barrier that's going to keep us safe. It's already being addressed - crudely at first, maybe - and the situation is already much better than it was: I no longer have to bring the model completely up to speed every time I start a new session. Part of this is much larger context windows (1M tokens now); new architectures are also being proposed to deal with the issue.