Workaccount2 3 hours ago

The real issue is context size. You kinda need to know what you are doing in order to construct the project in pieces, and know what to tell the LLM when you spin up a new instance with fresh context to work on a single subsection. It's unwieldy and inefficient, and the model inevitably gets confused when it can't effectively look at the whole code base.
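To make the "construct the project in pieces" workflow concrete, here's a minimal sketch of one way to pre-chunk a codebase so each fresh-context session only sees files that fit a token budget. The ~4 characters/token heuristic and the 32k budget are assumptions, and the file names are hypothetical:

```python
CHARS_PER_TOKEN = 4          # rough average for English text and code
USABLE_CONTEXT = 32_000      # assumed budget where output quality holds up

def estimate_tokens(text: str) -> int:
    # Crude estimate; a real tokenizer would be more accurate.
    return len(text) // CHARS_PER_TOKEN

def chunk_files(files: dict[str, str], budget: int = USABLE_CONTEXT) -> list[list[str]]:
    """Greedily group files so each group fits in one fresh context window."""
    groups, current, used = [], [], 0
    for path, source in files.items():
        cost = estimate_tokens(source)
        if current and used + cost > budget:
            groups.append(current)
            current, used = [], 0
        current.append(path)
        used += cost
    if current:
        groups.append(current)
    return groups

# Hypothetical project: two large modules and one small one.
project = {"core.py": "x" * 100_000, "api.py": "x" * 100_000, "util.py": "x" * 10_000}
print(chunk_files(project))  # each sublist is one fresh-context session
```

Each sublist then becomes the working set for one new LLM instance, which is exactly where the "what do I tell it about the rest of the project" problem kicks in.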

Gemini 2.5 is much better in this regard: it can produce decent output up to around 100k tokens, whereas Claude 3.7 starts to choke around 32k. Long term, it remains to be seen whether this will still be an issue. If models can get to 5M context and perform the way current models do with 5k context, it would be a total game changer.