vincent_s 8 days ago
Being inefficient with tokens actually makes you super productive, but it's too expensive in the long run. The last few weeks with Cursor have been quite frustrating. I dug into the issue and found that the most annoying problem, the one behind all those frustratingly poor replies from the LLM, is how Cursor cuts down the context. You can test this yourself: add a long file to the chat and ask the model whether it can see the file. I recently discovered that all these problems disappear with the "max" models. This is exactly what I wanted. The price of 5¢ per request is manageable; the real issue is the cost of tool use in agent mode (see my other comment).
bn-l 7 days ago | parent
Thanks for the reply. Do you have a write-up on how you use Cursor?