▲ | hirako2000 4 days ago |
Not clear how it gets around what is, ultimately, a context limit. I've been fiddling with some process too; it would be good if you shared the how. The readme looks like yet another full-fledged app.
▲ | sdesol 3 days ago | parent |
Yes, there is a context window limit, but I've found that for most frontier models you can generate very effective code if the context stays under 75,000 tokens, provided the context is consistent. You have to think of everything from a probability point of view: the more logical the context, the greater the chance of better code. For example, if the frontend doesn't need to know the backend code (other than the interface), leaving the backend code out when solving a frontend problem can reduce context size and improve the chances of the expected output. You just need to ensure you include the necessary interface documentation.

As for the full-fledged app, I think you raised a good point and I should add a 'No lock-in' section explaining why to use it. The app has a message tool that lets you pick and choose which messages to copy. Once you've copied the context (including any conversation messages that can help the LLM), you can use it wherever you want.

My strategy with the app is to be the first place you go to start a conversation, before you even generate code, so my focus is on helping you construct contexts (the smaller the better) to feed into LLMs.
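A minimal sketch of that budgeting idea, assuming Python and the tiktoken tokenizer (my choices, not from the comment; the file names are hypothetical, and the 75,000-token ceiling is the figure mentioned above):

    # Sketch: assemble a prompt context from only the files a task needs,
    # and fail fast if it blows past a token budget.
    # Assumptions: tiktoken for counting; file names are illustrative.
    import tiktoken

    TOKEN_BUDGET = 75_000  # the rough ceiling suggested above
    enc = tiktoken.get_encoding("cl100k_base")

    def build_context(files: dict[str, str], budget: int = TOKEN_BUDGET) -> str:
        """Concatenate labeled file bodies, refusing to exceed the budget."""
        parts, used = [], 0
        for path, body in files.items():
            cost = len(enc.encode(body))
            if used + cost > budget:
                raise ValueError(f"{path} would push context past {budget} tokens")
            parts.append(f"--- {path} ---\n{body}")
            used += cost
        return "\n\n".join(parts)

    # Frontend problem: include the frontend source plus the backend
    # *interface* docs, leaving the backend implementation out entirely.
    context = build_context({
        "src/App.tsx": open("src/App.tsx").read(),
        "docs/api-interface.md": open("docs/api-interface.md").read(),
    })

The point is the shape, not the tooling: any tokenizer works, as long as you measure the context before sending it.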