cube2222 7 hours ago
I've been using it with `/effort max` all the time, and it's been working better than ever. I think that's part of the problem: it's hard to measure this, and you also don't know which A/B test cohorts you're currently in and how they're affecting results.
siegers 6 hours ago
Agree. I keep effort max on Claude and xhigh on GPT for all tasks, and I keep tasks as scoped units of work instead of boil-the-ocean prompts. It's hard to measure, but ultimately the tasks are getting completed and I'm validating them, so I consider it "working as expected".
bryanlarsen 6 hours ago
It works better, until you run out of tokens. Running out of tokens used to never happen to me, but this month it happens regularly. Maybe I could avoid it by turning off 1M context and max effort, but that's a cure worse than the disease IMO.