Aurornis a day ago |
It’s also interesting to see how quickly the greenfield progress rate slows down as the projects grow. I skimmed the vibecoding subreddits for a while. It was common to see frustrations about how coding tools (Cursor, Copilot, etc.) were great last month but terrible now. The pattern repeats every month, though. When you look closer, it’s usually people who were thrilled when their projects were small but are now frustrated when they’re bigger.
Workaccount2 3 hours ago | parent |
The real issue is context size. You kind of need to know what you are doing in order to construct the project in pieces, and know what to tell the LLM when you spin up a new instance with fresh context to work on a single subsection. It's unwieldy and inefficient, and the model inevitably gets confused when it can't effectively look at the whole code base. Gemini 2.5 is much better in this regard: it can produce decent output up to around 100k tokens, compared to Claude 3.7, which starts to choke around 32k. Long term, it remains to be seen whether this will still be an issue. If models could get to 5M context and perform like current models do at 5k context, it would be a total game changer.
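The "construct the project in pieces" workflow is essentially a context-budgeting problem: decide which files to hand a fresh instance without blowing the window. A minimal sketch of that idea in Python — the 4-characters-per-token heuristic, the file list, and the greedy packing are illustrative assumptions, not how any particular tool actually does it:

```python
def estimate_tokens(text):
    # Rough heuristic: ~4 characters per token for English text and code.
    # A real tokenizer (e.g. tiktoken for OpenAI models) gives exact counts;
    # this approximation is an assumption for illustration only.
    return len(text) // 4

def pack_files(files, budget_tokens):
    """Greedily select (path, contents) pairs whose combined estimated
    token count fits within the model's context budget."""
    selected, used = [], 0
    for path, text in files:
        cost = estimate_tokens(text)
        if used + cost <= budget_tokens:
            selected.append(path)
            used += cost
    return selected, used

# Hypothetical project files, sized to make the trade-off visible.
files = [
    ("src/api.py", "x" * 40_000),      # ~10k tokens
    ("src/models.py", "y" * 100_000),  # ~25k tokens, too big to fit alongside api.py
    ("src/utils.py", "z" * 8_000),     # ~2k tokens
]

chosen, used = pack_files(files, budget_tokens=30_000)
print(chosen, used)  # ['src/api.py', 'src/utils.py'] 12000
```

The point of the sketch is the commenter's complaint in miniature: with a 32k-class budget you are forced to drop files the model may actually need (here `src/models.py`), whereas a much larger window would let you pass the whole list and skip the packing step entirely.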