u_sama 7 hours ago

Excited to use 1 prompt and have my whole 5-hour window at 100%. They can keep releasing new ones, but if they don't solve the token shrinkage and gaslighting, it's not gonna be interesting to see.

lbreakjai 7 hours ago | parent | next [-]

Solve? You solve a problem, not something you introduced on purpose.

HarHarVeryFunny 6 hours ago | parent | prev | next [-]

It seems a lot of the problem isn't "token shrinkage" (reducing plan limits), but rather changes they made to prompt caching - things that used to be cached for 1 hour now only being cached for 5 min.

Coding agents rely on prompt caching to avoid burning through tokens - they go to lengths to try to keep context/prompt prefixes constant (arranging non-changing stuff like tool definitions and file content first, variable stuff like new instructions following that) so that prompt caching gets used.
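A minimal sketch of the ordering trick described above, with hypothetical names (this isn't any particular agent's code): static content goes first so the prompt prefix stays byte-identical across turns, keeping it eligible for provider-side prompt caching.

```python
# Hypothetical sketch: order prompt content so the static prefix is
# byte-identical across turns. Prompt caching matches on exact prefixes,
# so any change to an early block invalidates the cache for everything
# after it - only the tail should vary between calls.

def build_prompt(tool_defs, file_context, history, new_instruction):
    """Static, rarely-changing content first; per-turn content last."""
    static_prefix = [
        {"role": "system", "content": tool_defs},      # fixed across turns
        {"role": "system", "content": file_context},   # fixed across turns
    ]
    variable_tail = list(history) + [
        {"role": "user", "content": new_instruction},  # changes every turn
    ]
    return static_prefix + variable_tail
```

With a 5-minute cache TTL instead of 1 hour, even a perfectly stable prefix like this stops getting cache hits once turns are spaced more than a few minutes apart.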

This change to a new tokenizer that generates up to 35% more tokens for the same text input is wild - going to really increase token usage for large text inputs like code.

mnicky an hour ago | parent [-]

> things that used to be cached for 1 hour now only being cached for 5 min.

Doesn't this only apply to subagents, which don't have much long-lived context anyway?

fetus8 6 hours ago | parent | prev [-]

On Tuesday, with 4.6, I waited for my 5-hour window to reset, asked it to resume, and it burned through all my tokens for the next 5-hour window in less than 10 seconds. I've never cancelled a subscription so fast.

u_sama 6 hours ago | parent [-]

I tried the Claude extension for VSCode on WSL for a reverse engineering task. It consumed all of my tokens, broke, and didn't even save the conversation.

fetus8 5 hours ago | parent [-]

That’s truly awful. What a broken tool.