nitinram 3 days ago

This is super cool! I attempted to use this on a project and kept running into "This model's maximum context length is 200000 tokens. However, your messages resulted in 459974 tokens. Please reduce the length of the messages." I used OpenAI o4-mini. Is there an easy way to handle this gracefully? Basically, do you have thoughts on how to generate tutorials for really large codebases or project directories?
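(A minimal sketch of one way to stay under the limit, assuming a Python setup: count tokens per file and skip files once a budget is spent, so the prompt never exceeds the model's window. The helper name, extensions, budget, and the cl100k_base encoding are illustrative assumptions, not how this tool actually works.)

    import os
    import tiktoken

    MAX_PROMPT_TOKENS = 180_000                  # headroom below a 200k-token limit
    enc = tiktoken.get_encoding("cl100k_base")   # approximate tokenizer for budgeting

    def collect_files_within_budget(root: str, exts=(".py", ".md", ".js")):
        """Walk the project and keep files until the token budget is spent."""
        kept, used = [], 0
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                if not name.endswith(exts):
                    continue
                path = os.path.join(dirpath, name)
                try:
                    text = open(path, encoding="utf-8", errors="ignore").read()
                except OSError:
                    continue
                cost = len(enc.encode(text))
                if used + cost > MAX_PROMPT_TOKENS:
                    continue                     # skip files that would blow the budget
                kept.append((path, text))
                used += cost
        return kept, used

    files, total = collect_files_within_budget("path/to/project")
    print(f"Keeping {len(files)} files, ~{total} tokens")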

zh2408 3 days ago | parent

Could you try Gemini 2.5 Pro? It's free for the first 25 requests each day and can handle 1M input tokens.
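(A minimal sketch of swapping in Gemini via the google-generativeai Python client; the exact model identifier and the environment variable name are assumptions, so check the current Gemini API docs.)

    import os
    import google.generativeai as genai

    # Assumed env var; the model name string may differ in the live API.
    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-2.5-pro")  # large (~1M token) context window

    def summarize_codebase(prompt: str) -> str:
        """Send one large prompt; the bigger window avoids the 200k-token cap."""
        response = model.generate_content(prompt)
        return response.text

    print(summarize_codebase("Explain the architecture of this repo:\n..."))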