bn-l | 8 days ago
You’re spending $1500 in additional costs? How?! I can’t even conceive of how I would spend that much with Cursor. What am I missing? Are you ultra productive or just inefficient with tokens?
vincent_s | 8 days ago | parent
Being inefficient with tokens actually makes you super productive - it's just too expensive in the long run. The last few weeks have been quite frustrating with Cursor. I dug into the issue and found that the most annoying problem - the one that leads to all those frustratingly poor replies from the LLM - is how Cursor cuts down the context. You can test this yourself: just add a long file to the chat and ask the model whether it can see the whole file.

Recently I discovered that all these problems disappear with the "max" models. This is exactly what I wanted. The price of 5¢ per request is manageable; the real issue is the cost of tool use in agent mode (see my other comment).