| ▲ | bcherny 7 hours ago |
| Hey, Boris from the Claude Code team here. Normally, when you have a conversation with Claude Code, if your convo has N messages, then (N-1) messages hit prompt cache -- everything but the latest message. The challenge is: when you let a session idle for >1 hour, then come back to it and send a prompt, it will be a full cache miss -- all N messages. We noticed that this corner case led to outsized token costs for users. In an extreme case, if you had 900k tokens in your context window, then idled for an hour, then sent a message, that would be >900k tokens written to cache all at once, which would eat up a significant % of your rate limits, especially for Pro users. We tried a few different approaches to improve this UX:
1. Educating users on X/social
2. Adding an in-product tip to recommend running /clear when re-visiting old conversations (we shipped a few iterations of this)
3. Eliding parts of the context after idle: old tool results, old messages, thinking.
Of these, eliding thinking performed the best, and when we shipped it, that's when we unintentionally introduced the bug in the blog post. Hope this is helpful. Happy to answer any questions you have. |
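A toy cost model makes the resume-after-idle cliff concrete. This is a sketch, not Anthropic's billing code: the 1.25x write and 0.1x read multipliers mirror the published prompt-caching price ratios, and the 1-hour TTL follows the comment above, but treat all three as illustrative assumptions.

```python
# Toy model of prefix-caching input cost for a conversation.
# Assumed (illustrative) numbers: base input cost 1.0 per token,
# cache write 1.25x base, cache read 0.1x base, 1-hour cache TTL.

CACHE_WRITE = 1.25
CACHE_READ = 0.10
TTL_SECONDS = 3600

def turn_cost(context_tokens, new_tokens, idle_seconds):
    """Relative input cost of one turn with `context_tokens` of prior
    context and `new_tokens` in the latest message."""
    if idle_seconds < TTL_SECONDS:
        # Warm cache: prior context is read cheaply from cache;
        # only the new message is written.
        return context_tokens * CACHE_READ + new_tokens * CACHE_WRITE
    # Cold cache: the entire context is re-written at the write premium.
    return (context_tokens + new_tokens) * CACHE_WRITE

# Resuming a 900k-token session within the window vs. after a long idle:
warm = turn_cost(900_000, 1_000, idle_seconds=600)
cold = turn_cost(900_000, 1_000, idle_seconds=7200)
print(f"warm resume: {warm:,.0f}  cold resume: {cold:,.0f}")
```

Under these assumed multipliers, the cold resume costs over ten times the warm one for the same single message, which is the "eats a significant % of your rate limits" effect described above.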
|
| ▲ | dbeardsl 6 hours ago | parent | next [-] |
| I appreciate the reply, but I was never under the impression that gaps in conversations would increase costs or reduce quality. Both are surprising and disappointing. I feel like that is a choice best left up to users, i.e. "Resuming this conversation with full context will consume X% of your 5-hour usage bucket, but that can be reduced by Y% by dropping old thinking logs" |
| |
| ▲ | kiratp 20 minutes ago | parent | next [-] | | By caching they mean “cached in GPU memory”. That’s a very very scarce resource. Caching to RAM and disk is a thing but it’s hard to keep performance up with that and it’s early days of that tech being deployed anywhere. Disclosure: work on AI at Microsoft. Above is just common industry info (see work happening in vLLM for example) | |
| ▲ | giwook 4 hours ago | parent | prev | next [-] | | Another way to think about it might be that caching is part of Anthropic's strategy to reduce costs for its users, but they are now trying to be more mindful of their costs (probably partly due to significant recent user growth as well as plans to IPO which demand fiscal prudence). Perhaps if we were willing to pay more for our subscriptions Anthropic would be able to have longer cache windows but IDK one hour seems like a reasonable amount of time given the context and is a limitation I'm happy to work around (it's not that hard to work around) to pay just $100 or $200 a month for the industry-leading LLM. Full disclosure: I've recently signed up for ChatGPT Pro as well in addition to my Claude Max sub so not really biased one way or the other. I just want a quality LLM that's affordable. | | |
| ▲ | jimkleiber an hour ago | parent | next [-] | | I might be willing to pay more, maybe a lot more, for a higher subscription than Claude Max 20x, but the only thing higher is pay-per-token, and I really don't like products that make me have to be that minutely aware of my usage, especially when it has unpredictability to it. I think there's a reason most telecoms went away from per-minute or especially per-MB charging. Even per GB, as they often now offer X GB, and I'm OK with that on a phone but much less so on a computer because of the unpredictability of a software update's size. Kinda like when restaurants make me pay for ketchup or a takeaway box, I get annoyed -- just build it into the listed price. | |
| ▲ | sharts 3 hours ago | parent | prev [-] | | It doesn't make sense to pay more for cache warming. Your session, for the most part, is already persisted. Why would it be reasonable to pay again to continue where you left off at any time in the future? | | |
| ▲ | jeremyjh 3 hours ago | parent | next [-] | | Because it significantly increases actual costs for Anthropic. If they ignored this then all users who don’t do this much would have to subsidize the people who do. | |
| ▲ | cadamsdotcom an hour ago | parent | prev [-] | | Sure, it wouldn’t make sense if they only had one customer to serve :) |
|
| |
| ▲ | JumpCrisscross 6 hours ago | parent | prev | next [-] | | > I was never under the impression that gaps in conversations would increase costs The UI could indicate this by showing a timer before context is dumped. | | |
| ▲ | karsinkk 6 hours ago | parent | next [-] | | Yes!!
A UI widget that shows how far along on the prompt cache eviction timelines we are would be great. | |
| ▲ | vyr 2 hours ago | parent | prev | next [-] | | a countdown clock telling you that you should talk to the model again before your streak expires? that's the kind of UX i'd expect from an F2P mobile game or an abandoned shopping cart nag notification | | |
| ▲ | abustamam 2 hours ago | parent [-] | | Well sure if you put it that way, they're similar. But it's either you don't see it and you get surprised by increased quota usage, or you do see it and you know what it means. Bonus points if they let you turn it off. No need to gamify it. It's just UI. |
| |
| ▲ | jimkleiber an hour ago | parent | prev [-] | | I tried to hack the statusline to show this, but when I tried, I don't think the API gave that info. I'd love it if they let us have more variables to access in the statusline. |
| |
| ▲ | computably 6 hours ago | parent | prev | next [-] | | > I was never under the impression that gaps in conversations would increase costs nor reduce quality. Both are surprising and disappointing. You didn't do your due diligence on an expensive API. A naïve implementation of an LLM chat is going to have O(N^2) costs from prompting with the entire context every time. Caching is needed to bring that down to O(N), but the cache itself takes resources, so evictions have to happen eventually. | | |
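The O(N^2)-vs-O(N) claim above can be checked with a few lines of arithmetic. A sketch, assuming every message is the same size and ignoring the (cheap) cost of cached reads:

```python
# Total input tokens billed at full price over an N-turn chat,
# assuming each message is m tokens (illustrative sizes only).

def uncached_total(n_turns, m=1_000):
    # Without caching, turn i re-sends the whole history:
    # i messages of m tokens each, so the total is triangular (O(N^2)).
    return sum(i * m for i in range(1, n_turns + 1))

def cached_total(n_turns, m=1_000):
    # With a prefix cache, each message is fully processed only once;
    # subsequent turns re-read it at a steep discount (ignored here).
    return n_turns * m

print(uncached_total(100), cached_total(100))
# quadratic vs linear: 5,050,000 vs 100,000 tokens at full price
```

For a 3-turn conversation the uncached total is 1+2+3 = 6 message-lengths rather than 3 -- between "3 times" and "9 times", and growing toward quadratic as the conversation lengthens.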
| ▲ | doesnt_know 6 hours ago | parent | next [-] | | How do you do "due diligence" on an API that frequently makes undocumented changes and only publishes acknowledgement of change after users complain? You're also talking about internal technical implementations of a chat bot. 99.99% of users won't even understand the words that are being used. | | |
| ▲ | tempest_ an hour ago | parent [-] | | I use CC, and I understand what caching means. I have no idea how that works with a LLM implementation nor do I actually know what they are caching in this context. |
| |
| ▲ | solarkraft 6 hours ago | parent | prev | next [-] | | I somewhat disagree that this is due diligence. Claude Code abstracts the API, so it should abstract this behavior as well, or educate the user about it. | | |
| ▲ | mpyne 4 hours ago | parent [-] | | > Claude Code abstracts the API, so it should abstract this behavior as well, or educate the user about it. Does mmap(2) educate the developer on how disk I/O works? At some point you have to know something about the technology you're using, or accept that you're a consumer of the ever-shifting general best practice, shifting with it as the best practice shifts. | | |
| ▲ | websap an hour ago | parent | next [-] | | Does using print() in Python mean I need to understand the kernel? This is an absurd thought. | |
| ▲ | zem 3 hours ago | parent | prev [-] | | mmap(2) and all its underlying machinery are open source and well documented besides. | | |
| ▲ | mpyne 3 hours ago | parent [-] | | There are open-source and even open-weight models that operate in exactly this way (as it's based on years of public research), and even if there weren't, the way that LLMs generate responses to inputs is superbly documented. Seems like every month someone writes up a brilliant article on how to build an LLM from scratch or similar that hits the HN front page, usually with fancy animated blocks and everything. It's not at all hard to find documentation on this topic. It could be made more prominent in the UI, but that's true of lots of things, and hammering on "AI 101" topics would clutter the UI for actual decision points the user may want to take action on -- ones you can't assume the user already knows about, the way you (should) be able to assume about how LLMs eat up tokens in the first place. |
|
|
| |
| ▲ | margalabargala 5 hours ago | parent | prev | next [-] | | Okay, sure. There's a dollar/intelligence tradeoff. Let me decide to make it, don't silently make Claude dumber because I forgot about a terminal tab for an hour. Just because a project isn't urgent doesn't mean it's not important. If I thought it didn't need intelligence I would use Sonnet or Haiku. | |
| ▲ | someguyiguess 6 hours ago | parent | prev | next [-] | | Yes. It’s perfectly reasonable to expect the user to know the intricacies of the caching strategy of their llm. Totally reasonable expectation. | | |
| ▲ | jghn 3 hours ago | parent | next [-] | | To some extent I'd say it is indeed reasonable. I had observed the effect for a while: if I walked away from a session, I noticed that my next prompt would chew up a bunch of context. And that led me to do some digging, at which point I discovered their prompt caching. So while I'd agree with your sarcasm that expecting users to be experts in the system is a big ask, where I disagree with you is that I think users should be curious and actively attempt to understand how it works around them. Given that the tooling changes often, this is an endless job. | |
| ▲ | abustamam 2 hours ago | parent [-] | | > users should be curious and actively attempting to understand how it works Have you ever talked with users? > this is an endless job Indeed. If we spend all our time learning what changed with all our tooling when it changes without proper documentation then we spend all our working lives keeping up instead of doing our actual jobs. | | |
| ▲ | Octoth0rpe an hour ago | parent [-] | | There are general users of the average SaaS, and there are claude code users. There's no doubt in my mind that our expectations should be somewhat higher for CC users re: memory. I'm personally not completely convinced that cache eviction should be part of their thought process while using CC, but it's not _that_ much of a stretch. |
|
| |
| ▲ | coldtea 4 hours ago | parent | prev [-] | | It's not like they have a poweful all-knowing oracle that can explain it to them at their dispos... oh, wait! | | |
| ▲ | esafak 4 hours ago | parent [-] | | They have to know that this could bite them and to ask the question first. | | |
| ▲ | nixpulvis 4 hours ago | parent [-] | | I do think having some insight into the current state of the cache and a realistic estimate for prompt token use is something we should demand. |
|
|
| |
| ▲ | exac 4 hours ago | parent | prev | next [-] | | It is more useful to read posts and threads like this exact thread IMO. We can't know everything, and the currently addressed market for Claude Code is far from people who would even think about caching to begin with. | |
| ▲ | kovek 5 hours ago | parent | prev | next [-] | | What if the cache was backed up to cold storage? Instead of having to recompute everything. | |
| ▲ | bontaq 4 hours ago | parent | prev | next [-] | | How's that O(N^2)? How's it O(N) with caching? Does a 3 turn conversation cost 3 times as much with no caching, or 9 times as much? | | | |
| ▲ | kang 5 hours ago | parent | prev | next [-] | | It seems you haven't done the due diligence on what part of the API is expensive -- constructing a prompt shouldn't have the same charge/cost as an LLM pass. | |
| ▲ | coldtea 4 hours ago | parent [-] | | It seems you haven't done the due diligence on what the parent meant :) It's not about "constructing a prompt" in the sense of building the prompt string. That of course wouldn't be costly. It is about reusing llm inference state already in GPU memory (for the older part of the prompt that remains the same) instead of rerunning the prompt and rebuilding those attention tensors from scratch. | | |
| ▲ | kang 4 hours ago | parent [-] | | You not only skipped the diligence but confused everyone by repeating what I said :( That is what caching is doing: the LLM inference state is being reused. (Attention vectors are an internal artifact; at this level of abstraction, it's effectively the prompt.) The part of the prompt that has already been inferred no longer needs to be part of the input; it is replaced by the inference state. And none of this is tokens. |
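The mechanism both commenters are circling -- reusing per-prefix inference state instead of recomputing it -- can be sketched as a prefix-keyed lookup. This is a toy illustration with a dict standing in for GPU memory; real serving stacks (vLLM's prefix caching, for example) key blocks of attention state and evict them under memory pressure.

```python
# Toy prefix cache: map a conversation prefix to its (stand-in)
# "inference state" so only the new suffix needs recomputing.

state_cache = {}  # prefix (tuple of messages) -> cached state

def expensive_state(messages):
    # Stand-in for the real forward pass that builds attention state.
    return {"processed": len(messages)}

def run_turn(messages):
    # Find the longest cached prefix of this conversation.
    for cut in range(len(messages), 0, -1):
        if tuple(messages[:cut]) in state_cache:
            recomputed = len(messages) - cut
            break
    else:
        recomputed = len(messages)  # cold: no prefix cached
    state_cache[tuple(messages)] = expensive_state(messages)
    return recomputed  # messages actually re-processed this turn

convo = ["sys", "user1", "asst1"]
print(run_turn(convo))              # cold start: all 3 processed
print(run_turn(convo + ["user2"]))  # warm: only the new message
```

If `state_cache` is cleared between the two calls (the 1-hour eviction), the second call falls back to reprocessing all four messages -- the full cache miss the thread is about.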
|
| |
| ▲ | raron 5 hours ago | parent | prev | next [-] | | How big this cached data is? Wouldn't it be possible to download it after idling a few minutes "to suspend the session", and upload and restore it when the user starts their next interaction? | | |
| ▲ | throwdbaaway 4 hours ago | parent | next [-] | | Should be about 10~20 GiB per session. Save/restore is exactly what DeepSeek does using its 3FS distributed filesystem: https://github.com/deepseek-ai/3fs#3-kvcache With this much cheaper setup backed by disks, they can offer much better caching experience: > Cache construction takes seconds. Once the cache is no longer in use, it will be automatically cleared, usually within a few hours to a few days. | |
| ▲ | cyanydeez 4 hours ago | parent | prev [-] | | I often see a local model (QWEN3.5-Coder-Next) grow to about 5 GB or so over the course of a session using llamacpp-server. I'd bet these trillion-parameter models are even worse. Even if you wanted to download it or offload it, or they offered that as a service, to start back up again you'd _still_ be paying the token cost, because all of that context _is_ the tokens you've just done. The cache is what makes your journey from a 1k prompt to a 1-million-token solution speedy in one 'vibe' session. Loading that again will cost the entire journey. |
| |
| ▲ | miroljub 4 hours ago | parent | prev [-] | | This sounds like a religious cult priest blaming the common people for not understanding the cult leader's wish, which he never clearly stated. |
| |
| ▲ | nixpulvis 4 hours ago | parent | prev | next [-] | | How else would you implement it? | |
| ▲ | cyanydeez 4 hours ago | parent | prev [-] | | It'd probably be helpful for power users and transparency to actually show how the cache is being used. If you run local models with llamacpp-server, you can watch how the cache slots fill up with every turn; when subagents spawn, you see another process ID spin up and take up a cache slot; the model starts slowing down when the context grows (AMD 395+, around 80-90k) and the cache loads get bigger. So yeah, it doesn't take much to surface to the user that the speed/value of their session is ephemeral, because keeping all that cache active is computationally expensive. You're still just running text through an extremely complex process and adding to that text, and to avoid re-calculation of the entire chain, you need the cache. |
|
|
| ▲ | btown 6 hours ago | parent | prev | next [-] |
| Is there a way to say: I am happy to pay a premium (in tokens or extra usage) to make sure that my resumed 1h+ session has all the old thinking? I understand you wouldn't want this to be the default, particularly for people who have one giant running session for many topics - and I can only imagine the load involved in full cache misses at scale. But there are other use cases where this thinking is critical - for instance, a session for a large refactor or a devops/operations use case consolidating numerous issue reports and external findings over time, where the periodic thinking was actually critical to how the session evolved. For example, if N-4 was a massive dump of some relevant, some irrelevant material (say, investigating for patterns in a massive set of data, but prompted to be concise in output), then N-4's thinking might have been critical to N-2 not getting over-fixated on that dump from N-4. I'd consider it mission-critical, and pay a premium, when resuming an N some hours later to avoid pitfalls just as N-2 avoided those pitfalls. Could we have an "ultraresume" that, similar to ultrathink, would let a user indicate they want to watch Return of the (Thin)king: Extended Edition? |
| |
| ▲ | CjHuber 6 hours ago | parent | next [-] | | I think it's crazy that they do this, especially without any notice. I would not have renewed my subscription if I had known that they started doing this. Especially in the analysis part of my work, I don't care about the actual text output itself most of the time, but try to make the model "understand" the topic. In the first phase the actual text output itself is worthless; it just serves as an indicator that the context was processed correctly and that the future actual analysis work can depend on it.
And they're... just throwing most of the relevant stuff out, without any notice, when I resume my session after a few days? This is insane. Claude literally became useless to me and I didn't even know it until now, wasting a lot of my time building up good session context. There would be nothing lost if they said "If you click yes, we will prune your old thinking, making Claude faster and saving you tons of tokens". Most people would probably say yes, so why not ask them? Make it an env variable (an announced one, not a secretly introduced one to opt out of something new!) or at least write it in a changelog if they really don't want to allow people to use it like before, so there'd be a chance to cancel the subscription in time instead of wasting tons of time on work patterns that no longer work | | |
| ▲ | kiratp 16 minutes ago | parent | next [-] | | OpenAI does this for all API calls > Our systems will smartly ignore any reasoning items that aren’t relevant to your functions, and only retain those in context that are relevant. You can pass reasoning items from previous responses either using the previous_response_id parameter, or by manually passing in all the output items from a past response into the input of a new one. https://developers.openai.com/api/docs/guides/reasoning Disclosure - work on AI@msft | |
| ▲ | munk-a 6 hours ago | parent | prev | next [-] | | Pointing at their terms of service will definitely be the instantly summoned defense (as it would be for most modern companies), but the fact that SaaS can so suddenly shift the quality of product being delivered for their subscription, without clear notification or explicit re-enrollment, is definitely a legal oversight right now, and Italy actually did recently clamp down on Netflix doing this[1]. It's hard to define what user expectations of a continuous product are and how companies may have violated them -- and for a long time social constructs kept this pretty much in check. As obviously inactive and forgotten-about subscriptions have become a more significant revenue source for services, though, that agreement has been eroded and the legal system has yet to catch up. 1. Specifically, this suit was about price increases without clear consideration for both parties -- but the same justifications apply to service restrictions without corresponding price decreases. https://fortune.com/2026/04/20/italian-court-netflix-refunds... | |
| ▲ | jetbalsa 6 hours ago | parent | prev | next [-] | | So to defend a little: it's a cache, it has to go somewhere -- it's a save state of the model's inner workings at the time of the last message. So if it expires, it has to process the whole thing again. Most people don't understand that without that cache, the ENTIRE history of the conversation is processed again and again with every message. That conversation might have hit several gigs' worth of model state, and are you expecting them to keep that around for /all/ of the conversations you have had with it in separate sessions? | |
| ▲ | 3836293648 6 hours ago | parent | next [-] | | No? It's not because it's a cache, it's because they're scared of letting you see the thinking trace. If you got the trace you could just send it back in full when it got evicted from the cache. This is how open weight models work. | | |
| ▲ | mpyne 4 hours ago | parent | next [-] | | The trace goes back fine, that's not the issue. The issue is that if they send the full trace back, it will have to be processed from the start if the cache expired, and doing that will cause a huge one-time hit against your token limit if the session has grown large. So what Boris talked about is stripping things out of the trace that goes back to regenerate the session if the cache expires. Doing this would help avert burning up the token limit, but it is technically a different conversation, so if CC chooses poorly on stripping parts of the context then it would lead to Claude getting all scatter-brained. | |
| ▲ | reactordev 6 hours ago | parent | prev | next [-] | | They are sending it back to the cache, the part you are missing is they were charging you for it. | | |
| ▲ | eknkc 6 hours ago | parent [-] | | The blog post says they prune them now not to charge you. That’s the change they implemented. | | |
| ▲ | reactordev 5 hours ago | parent [-] | | right. they were charging you for it, now they aren't because they are just dropping your conversation history. |
|
| |
| ▲ | eknkc 6 hours ago | parent | prev [-] | | I'm not familiar with the Claude API, but OpenAI has an encrypted thinking messages option. You get something that you can send back, but it is encrypted. Not available on Anthropic? |
| |
| ▲ | rsfern 5 hours ago | parent | prev | next [-] | | It seems like an opportunity for a hierarchical cache. Instead of just nuking all context on eviction, couldn’t there be an L2 cache with a longer eviction time so task switching for an hour doesn’t require a full session replay? | |
| ▲ | CjHuber 5 hours ago | parent | prev | next [-] | | No, of course it's unrealistic for them to hold the cache indefinitely, and that's not the point. You are keeping the session data yourself so you can continue even after cache expiry. The point I'm making is that it made me very angry that, without any announcement, they changed behavior to strip the old thinking even when you have it in your session file. There is absolutely no reason not to ask the user whether they want this. And it's part of a larger problem of unannounced changes; it's just like when they introduced adaptive thinking to 4.6 a few weeks ago without notice. Also they seem to be completely unaware that some users might only use Claude Code because they are used to it not stripping thinking, in contrast to Codex. Anyway, I'm happy that they saw it as a valid refund reason | |
| ▲ | cyanydeez 4 hours ago | parent | prev | next [-] | | what matters isn't that it's a cache; what matter is it's cached _in the GPU/NPU_ memory and taking up space from another user's active session; to keep that cache in the GPU is a nonstarter for an oversold product. Even putting into cold storage means they still have to load it at the cost of the compute, generally speaking because it again, takes up space from an oversold product. | |
| ▲ | 6 hours ago | parent | prev [-] | | [deleted] |
| |
| ▲ | FireBeyond an hour ago | parent | prev [-] | | > There would be nothing lost if they said „If you click yes, we will prune your old thinking making Claude faster and saving you tons of tokens“. Most people would say yes probably so why not ask them The irony is that Claude Design does this. I did a big test building a design system, and when I came back to it, it had in the chat window "Do you need all this history for your next block of work? Save 120K tokens and start a new chat. Claude will still be able to use the design system." Or words to that effect. | | |
| ▲ | CjHuber 34 minutes ago | parent [-] | | This is exactly what also confused me. I had the exact same prompt in Claude Code as well, and the "no" option implies you can also keep the whole history. But clicking keep apparently only ever kept the user and assistant messages, not the actual thinking parts of the conversation |
|
| |
| ▲ | trinsic2 4 hours ago | parent | prev | next [-] | | Why can't you just build a project document that outlines the prompt that you want to do? Or have Claude save your progress in memory so you can pick it up later? That's what I do. It seems abhorrent to expect to have a running prompt left idle for long periods of time just so you can pick up at a moment's whim... | | |
| ▲ | elAhmo 6 hours ago | parent | prev [-] | | Don't you have that by just resuming old convo? The only issue is that it didn't hit the cache so it was expensive if you resume later. | | |
| ▲ | eknkc 6 hours ago | parent | next [-] | | Not at the moment, apparently. They remove the thinking messages when you continue after 1 hour. That was the whole idea of that change. So the LLM gets all your messages, its responses, etc., but not the thinking parts -- why it generated those responses. You get a lobotomised session. | |
| ▲ | elAhmo 5 hours ago | parent [-] | | OK didn't know that. I also resume fairly old sessions with 100-200k of context, and I sometimes keep them active for a while (but with large breaks in between). Still on Opus 4.6 with no adaptive thinking, so didn't really notice anything worse in the past weeks, but who knows. |
| |
| ▲ | tbrockman 6 hours ago | parent | prev [-] | | Or generate tiny filler messages every hour until you come back to it. |
|
|
|
| ▲ | uxcolumbo 5 hours ago | parent | prev | next [-] |
| I don't envy you, Boris. Getting flak from all sorts of places can't be easy. But thanks for keeping a direct line with us. I wish Anthropic's leadership understood that the dev community is a vital one that they should appreciate a bit more (i.e. not sending lawyers after various devs without asking nicely first, banning accounts without notice, etc. etc.). Appreciate it's not easy to scale. OpenAI seems to be doing a much better job when it comes to developer relations, but I would like to see you guys 'win', since Anthropic shows more integrity and has clear ethical red lines it is not willing to cross, unlike OpenAI's leadership. |
|
| ▲ | artdigital 2 hours ago | parent | prev | next [-] |
| I'm also a Claude Code user from day 1 here, back from when it wasn't included in the Pro/Max subscriptions yet, and I was absolutely not aware of this either. Your explanation makes sense, but I naively was also under the impression that re-using older existing conversations that I had open would just continue the conversation as-is and not be treated as a full cache miss. My biggest learning here is the 1-hour cache window. I often have multiple Claudes open, and it happens frequently that they're idle for 1+ hours. This cache information should probably get displayed somewhere within Claude Code |
| |
| ▲ | bcherny 2 hours ago | parent [-] | | Yep, agree. We added a little "/clear to save XXX tokens" notice in the bottom right, and will keep iterating on this. Thanks for being an early user! | | |
| ▲ | Implicated 2 hours ago | parent [-] | | But.. that doesn't solve the problem of having no indication in-session when it'll lose the cache. A nudge to /clear does nothing to indicate "or else face significant cost" nor does it indicate "your cache is stale". Love the product. <3 |
|
|
|
| ▲ | kuboble 5 hours ago | parent | prev | next [-] |
| As some others have mentioned, I think the best option would be to tell a user who is about to resurrect a conversation that has been evicted from cache that the session is not cached anymore, and that they will have to face the full cost of replaying the session, not only the incremental question and answer. (I understand that under the hood LLMs are N^2 by default, but it's very counter-intuitive -- and given how popular CC is becoming outside of nerd circles, a smaller and smaller fraction of users is probably aware of it.) I would like to decide on it case by case. Sometimes the session has some really deep insight I want to preserve; sometimes it's discardable. |
| |
| ▲ | a_t48 5 hours ago | parent | next [-] | | I got exactly this warning message yesterday, saying that it could use up a significant amount of my token budget if I resumed the conversation without compaction. | | |
| ▲ | onemoresoop 4 hours ago | parent | next [-] | | Im glad they chose to do that as opposed to hidden behavior changes that only confuse users more. | |
| ▲ | fhub 5 hours ago | parent | prev [-] | | Really good to know. That should have made it into their update letter in point (2). Empowering the user to choose is the right call. |
| |
| ▲ | skeledrew 4 hours ago | parent | prev [-] | | > I think the best option would be tell a user who is about to resurrect a conversation that has been evicted from cache that the session is not cached anymore and the user will have to face a full cost of replaying a session This feature has been live for a few days/weeks now, and with that knowledge I try to remember to at least get a progress report written when I'm, for example, close to the quota limit and the context is reasonably large. Or continue with a /compact, but that tends to lead to having to repeat some things that didn't get included in the summary. Context management is just hard. | |
| ▲ | Terretta 3 hours ago | parent [-] | | Right, and reloading that context is the same cost as refilling the cache, so really, they're charging the same, and making it hard. |
|
|
|
| ▲ | isaacdl 7 hours ago | parent | prev | next [-] |
| Thanks for giving more information. Just as a comment on (1), a lot of people don't use X/social. That's never going to be a sustainable path to "improve this UX" since it's...not part of the UX of the product. It's a little concerning that it's number 1 in your list. |
|
| ▲ | Terretta 3 hours ago | parent | prev | next [-] |
| This violates the principle of least surprise, with nothing to indicate Claude got lobotomized while it napped, when so many use prior sessions as "primed context" (even if people don't know that's what they were doing or why it works). The purpose of spending 10 to 50 prompts getting Claude to fill the context for you is that it effectively "fine-tunes" that session into a place where your work product or questions are handled well. (If this notion of sufficient context as a fine-tune seems surprising, the research is out there.) Approaches tried need to deal with both of these: 1) Silent context degradation breaks the Pro-tool contract. I pay compute so I don't pay in my time; if you want to surface the cost, surface it (UI + price tag or choice), don't silently erode quality of outcomes. 2) The workaround (external context files re-primed on return) eats the exact same cache miss, so the "savings" are illusory -- you just pushed the cost onto the user's time. If my own time is cheap enough that that's the right trade-off, I shouldn't be using your machine. |
|
| ▲ | fidrelity 7 hours ago | parent | prev | next [-] |
| Just wanted to say I appreciate your responses here. Engaging so directly with a highly critical audience is a minefield that you're navigating well. Thank you. |
| |
| ▲ | qsort 7 hours ago | parent | next [-] | | I agree with this. I'm writing this message even though I don't have much to add because it's often the case on HN that criticism is vocal and appreciation is silent and I'd like to balance out the sentiment. Anthropic has fumbled on many fronts lately but engaging honestly like this is the right thing to do. I trust you'll get back on track. | |
| ▲ | troupo 7 hours ago | parent | prev | next [-] | | > Engaging so directly with a highly critical audience is a minefield that you're navigating well. They spent two months literally gaslighting this "critical audience" that this could not be happening and literally blaming users for using their vibe-coded slop exactly as advertised. All the while all the official channels refused to acknowledge any problems. Now the dissatisfaction and subscription cancellations have reached a point where they finally had to do something. | | | |
▲ | shimman 7 hours ago | parent | prev [-] | | Very easy to do when you stand to make tens of millions when your employer IPOs. Maybe let's not give too much praise, and employ some critical thinking here. | | |
| ▲ | simplify 7 hours ago | parent | next [-] | | What is the purpose of this mindset? Should we encourage typical corporate coldness instead? | | |
▲ | sdevonoes 6 hours ago | parent [-] | | We should encourage minimal dependency on multibillion-dollar tech companies like Anthropic. They and similar companies are just milking us… but since their toys are so shiny, we don’t care | | |
| |
| ▲ | hgoel 6 hours ago | parent | prev [-] | | Is "employ some critical thinking" supposed to involve being an annoying uptight cynic? |
|
|
|
| ▲ | saadn92 6 hours ago | parent | prev | next [-] |
| I leave sessions idle for hours constantly - that's my primary workflow. If resuming a 900k context session eats my rate limit, fine, show me the cost and let me decide whether to /clear or push through. You already show a banner suggesting /clear at high context - just do the same thing here instead of silently lobotomizing the model. |
| |
| ▲ | sdevonoes 6 hours ago | parent [-] | | So if they fuck it up again and now they have, let’s say, “db problems” instead of “caching problems”, you would happily simply pay more? Wtf | | |
| ▲ | saadn92 6 hours ago | parent | next [-] | | No, I wouldn't. I'd like some transparency at least. | |
| ▲ | albedoa 5 hours ago | parent | prev [-] | | Did you reply to the wrong comment? I don't see that implied here at all. What? |
|
|
|
| ▲ | ceuk 7 hours ago | parent | prev | next [-] |
| Is having massive sessions which sit idle for hours (or days) at a time considered unusual? That's a really, really common scenario for me. Two questions if you see this: 1) if this isn't best practice, what is the best way to preserve highly specific contexts? 2) does this issue just affect idle sessions or would the cache miss also apply to /resume ? |
| |
| ▲ | hedgehog 6 hours ago | parent | next [-] | | Have the tool maintain a doc, and use either the built-in memory or (I prefer it this way) your own. I've been pretty critical of some other aspects of how Claude Code works but on this one I think they're doing roughly the right thing given how the underlying completion machinery works. Edit: If you message me I can share some of my toolchain, it's probably similar to what a lot of other people here use but I've done some polishing recently. | | | |
▲ | jetbalsa 6 hours ago | parent | prev [-] | | The cache is stored on Anthropic's servers, since it's a save state of the LLM's attention state (the KV cache) at the time of processing; it's several gigs in size. Every SINGLE TIME you send a message and it's a cache miss, the entire conversation has to be reprocessed, eating up tons of tokens in the process. | | |
▲ | cyanydeez 4 hours ago | parent [-] | | Clarification though: the cache that matters to the GPU/NPU is loaded directly in the memory of the cards; it's not saved anywhere else. They could technically create cold storage of those KV vectors and reload them, but given how ephemeral all these vibe-coding sessions are, it's unlikely there's any value in persisting the vectors to load back in. So then it comes down to what you're talking about, which is reprocessing the entire text chain (a different kind of cache), and generating the equivalent tokens is what's being costed. But once you realize that the product's efficiency in extended sessions depends on a cache held in immediate GPU hardware, it's obvious the oversold product can't just idle the GPU when sessions idle. |
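To make the economics concrete, here is a minimal sketch of the asymmetry Boris describes. The base price and multipliers are illustrative placeholders, loosely modeled on typical prompt-caching pricing where cache writes cost more than base input and cache reads cost a small fraction of it; the real numbers depend on model and plan.

```python
# Sketch: why a cold resume of a large context costs so much more than a
# warm turn. All rates below are hypothetical, for illustration only.

BASE_INPUT_PER_MTOK = 3.00   # hypothetical $/million input tokens
CACHE_WRITE_MULT = 1.25      # writing tokens into the prompt cache
CACHE_READ_MULT = 0.10       # reading previously cached tokens

def turn_cost(context_tokens: int, new_tokens: int, cache_warm: bool) -> float:
    """Cost of one request: a warm cache means the prefix is read cheaply;
    a cold cache means the whole context is re-written to the cache."""
    if cache_warm:
        cached = context_tokens * CACHE_READ_MULT
        fresh = new_tokens * CACHE_WRITE_MULT
    else:
        cached = 0.0
        fresh = (context_tokens + new_tokens) * CACHE_WRITE_MULT
    return (cached + fresh) * BASE_INPUT_PER_MTOK / 1_000_000

warm = turn_cost(900_000, 1_000, cache_warm=True)
cold = turn_cost(900_000, 1_000, cache_warm=False)
print(f"warm turn: ${warm:.2f}, cold resume: ${cold:.2f}")
```

With these made-up rates, the same one-message turn is roughly an order of magnitude more expensive after the cache expires, which is the rate-limit hit the original post describes.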
|
|
|
| ▲ | mtilsted 6 hours ago | parent | prev | next [-] |
| Then you need to update your documentation and teach Claude to read it, because here is what Claude Code answered. Question: "Hey claude, if we have a conversation, and then I take a break, does it change the expected output of my next answer if there are 2 hours between the previous message and the next one?" Answer: "No. A 2-hour gap doesn't change my output. I have no internal clock between messages — I only see the conversation content plus the currentDate context injected each turn. The prompt cache may expire (5 min TTL), which affects cost/latency but not the response itself. The only things that can change output across a break: new context injected (like updated date), memory files being modified, or files on disk changing." -- This answer directly contradicts your post. It seems like the biggest problem is a total lack of documentation of expected behavior. A similar thing happens if I ask Claude Code for the difference between plan mode and accept-edits-on: Claude told me the only difference was that plan mode asks for permission before doing edits. But I really don't think this is true. Plan mode seems to do a lot more work and presents it in a totally different way; it is not just an "I will ask before applying changes" mode. |
| |
▲ | ryeguy an hour ago | parent [-] | | This isn't how LLMs work. They aren't self-aware like this; they're trained on the general internet. They might have some pointers to documentation for certain cases, but they generally won't have specialized knowledge of themselves embedded within. Claude Code has no need to know about its own internal programming; the core loop is just JavaScript code. | | |
▲ | CjHuber 21 minutes ago | parent [-] | | It does have a built-in documentation subagent it can invoke, but that doesn’t help much if they don’t document their shenanigans |
|
|
|
| ▲ | iidsample 7 hours ago | parent | prev | next [-] |
| We at UT-Austin have done some academic work to handle the same challenge. Will be curious if serving engines could be modified. https://arxiv.org/abs/2412.16434 . The core idea is that we can use user activity at the client to manage KV cache loading and offloading. Happy to chat more!! |
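The activity-driven idea above can be sketched as a tiering policy: sessions with recent client activity keep their KV cache in GPU memory, recently idle ones are demoted to host RAM, and long-idle ones to SSD. The tier names and thresholds below are made up for illustration, not taken from the paper.

```python
# Sketch of activity-driven KV-cache tiering. Client signals (a keystroke,
# window focus) promote a session back to GPU HBM; a periodic sweep demotes
# idle sessions down the storage hierarchy. All thresholds are hypothetical.

HBM, RAM, SSD = "hbm", "ram", "ssd"
IDLE_TO_RAM_S = 5 * 60     # demote to host RAM after 5 min idle
IDLE_TO_SSD_S = 60 * 60    # demote to SSD after 1 hour idle

class SessionCache:
    def __init__(self):
        self.sessions = {}  # session_id -> (tier, last_activity_timestamp)

    def touch(self, sid: str, now: float) -> str:
        """Client activity promotes the session back to HBM; the previous
        tier tells the server how expensive the reload will be."""
        tier, _ = self.sessions.get(sid, (HBM, now))
        self.sessions[sid] = (HBM, now)
        return tier

    def sweep(self, now: float) -> None:
        """Periodic eviction pass driven by observed idle time."""
        for sid, (tier, ts) in self.sessions.items():
            idle = now - ts
            if idle >= IDLE_TO_SSD_S:
                self.sessions[sid] = (SSD, ts)
            elif idle >= IDLE_TO_RAM_S:
                self.sessions[sid] = (RAM, ts)

cache = SessionCache()
cache.touch("s1", now=0.0)
cache.sweep(now=3600.0)         # an hour later: s1 demoted
print(cache.sessions["s1"][0])  # no longer in HBM
```

The point of the design is that a demoted session pays a reload latency instead of a full prefill, which is much cheaper than recomputing 900k tokens from scratch.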
|
| ▲ | kccqzy 3 hours ago | parent | prev | next [-] |
| This just does not match my workflow when I work on low-priority projects, especially personal projects when I do them for fun instead of being paid to do them. With life getting busy, I may only have half an hour each night with Claude to make some progress on it before having to pause and come back the next day. It’s just the nature of doing personal projects as a middle-aged person. The above workflow basically doesn’t hit the rate limit. So I’d appreciate a way to turn off this feature. |
|
| ▲ | ryanisnan 6 hours ago | parent | prev | next [-] |
| Why does the system work like that? Is the cache local, or on Claude's servers? Why not store the prompt cache to disk when it goes cold for a certain period of time, and then when a long-lived, cold conversation gets re-initiated, you can re-hydrate the cache from disk. Purge the cached prompts from disk after X days of inactivity, and tell users they cannot resume conversations over X days without burning budget. |
| |
▲ | jetbalsa 6 hours ago | parent [-] | | The cache is on Anthropic's servers; it's like a freeze-frame of the LLM's inner workings at the time, and the LLM can pick up directly from this save state. As you can guess, this save state has bits of the underlying model, their secret sauce, so it cannot be saved locally... | | |
| ▲ | dicethrowaway1 6 hours ago | parent [-] | | Maybe they could let users store an encrypted copy of the cache? Since the users wouldn't have Anthropic's keys, it wouldn't leak any information about the model (beyond perhaps its number of parameters judging by the size). | | |
▲ | jetbalsa 6 hours ago | parent | next [-] | | I'm unsure of the sizes needed for prompt cache, but I suspect it's several gigs in size (a percentage of the model weight size). How would the user upload this every time they resumed an old idle session? Also, are they going to save /every/ session you do this with? | | |
| ▲ | skissane 5 hours ago | parent | next [-] | | They could let you nominate an S3 bucket (or Azure/GCP/etc equivalent). Instead of dropping data from the cache, they encrypt it and save it to the bucket; on a cache miss they check the bucket and try to reload from it. You pay for the bucket; you control the expiry time for it; if it costs too much you just turn it off. | |
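The bucket idea sketches out roughly like this. In-memory dicts stand in for the GPU-resident cache and the user's bucket, and the encryption step is a stub (a real version would use something like AES-GCM with a provider-held key); every name here is hypothetical.

```python
# Sketch of evict-to-user-bucket: on eviction the provider encrypts the KV
# blob and writes it to storage the user pays for; on a cache miss it tries
# the bucket before falling back to a full recompute (prefill).

hot_cache: dict[str, bytes] = {}    # stands in for GPU-resident KV cache
user_bucket: dict[str, bytes] = {}  # stands in for the user's S3 bucket

def encrypt(blob: bytes) -> bytes:  # stub; real impl: AES-GCM, provider key
    return blob[::-1]

def decrypt(blob: bytes) -> bytes:
    return blob[::-1]

def evict(session: str) -> None:
    if session in hot_cache:
        user_bucket[session] = encrypt(hot_cache.pop(session))

def load(session: str, recompute) -> bytes:
    if session in hot_cache:            # warm: nothing to do
        return hot_cache[session]
    if session in user_bucket:          # cold but offloaded: cheap reload
        hot_cache[session] = decrypt(user_bucket[session])
        return hot_cache[session]
    hot_cache[session] = recompute()    # full prefill: the expensive path
    return hot_cache[session]

hot_cache["s1"] = b"kv-state"
evict("s1")
assert "s1" not in hot_cache
print(load("s1", recompute=lambda: b"recomputed"))
```

Since the user never holds the key, nothing about the model leaks beyond the blob's size, and expiry and cost stay entirely in the user's control.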
| ▲ | im3w1l 6 hours ago | parent | prev [-] | | A few gigs of disk is not that expensive. Imo they should allocate every paying user (at least) one disk cache slot that doesn't expire after any time. Use it for their most recent long chat (a very short question-answer that could easily be replayed shouldn't evict a long convo). | | |
▲ | spunker540 20 minutes ago | parent [-] | | What's lost in this thread is that these caches are in very tight supply: they are literally on the GPUs running inference. The GPUs must load all the tokens in the conversation (expensive), and continuing the conversation can then leverage the GPU cache to avoid re-loading the full context up to that point. But obviously GPUs are in super tight supply, so if a thread has been dead for a while, they need to re-use the GPU for other customers. |
|
| |
| ▲ | northern-lights 4 hours ago | parent | prev [-] | | Encryption can only ensure the confidentiality of a message from a non-trusted third party but when that non-trusted third party happens to be your own machine hosting Claude Code, then it is pointless. You can always dump the keys (from your memory) that were used to encrypt/decrypt the message and use it to reconstruct the model weights (from the dump of your memory). | | |
| ▲ | dicethrowaway1 3 hours ago | parent [-] | | jetbalsa said that the cache is on Anthropic's server, so the encryption and decryption would be server-side. You'd never see the encryption key, Anthropic would just give you an encrypted dump of the cache that would otherwise live on its server, and then decrypt with their own key when you replay the copy. |
|
|
|
|
|
| ▲ | mandeepj 43 minutes ago | parent | prev | next [-] |
| > that would be >900k tokens written to cache all at once Probably that's why I hit my weekly limits 3-4 days ago, and was scheduled to reset later today. I just checked, and they have already reset. Not sure if this already exists, but shouldn't there be a check somewhere that alerts when an outrageous number of tokens is getting written? That's not right. |
|
| ▲ | bobkb 6 hours ago | parent | prev | next [-] |
| Resuming sessions after more than 1 hour is a very common workflow that many teams follow. It would be great if this were treated as expected behaviour and the UX designed around it. Perhaps you're not realising that Claude Code has replaced the shells people were using (i.e., bash is now replaced with a Claude Code session). |
| |
▲ | trinsic2 2 hours ago | parent [-] | | I think that's a bad idea. Expecting to keep a prompt open like this, accumulating context, puts a load on the back end. It's one of those things that is a bad habit, like trying to maintain open tabs in a browser as a way to keep your workflow up to date, when what you really should be doing is taking notes on your process and working from there. I have project folders/files and memory stored for each session; when I come back to my projects, the context is drawn from the memory files and the status saved in my project md files. Create a better workflow for yourself and your teams and do it the right way. Quit expecting the prompt to store everything for you. For the Claude team: if you haven't already, I'd recommend you create some best practices for people who don't know any better, otherwise people are going to expect things to be a certain way and it's going to cause a lot of friction when they can't do what they expect to be able to do. |
|
|
| ▲ | Joeri 6 hours ago | parent | prev | next [-] |
| This sounds like one of those problems where the solution is not a UX tweak but an architecture change. Perhaps the prompt cache should be made resumable long-term by storing it to disk before discarding it from memory? |
| |
▲ | kivle 6 hours ago | parent | next [-] | | I agree. Maybe parts of the cache contents are business secrets, but then store a server-side encrypted version on the user's disk so that it can be resumed without wasting 900k tokens? | |
| ▲ | slashdave 4 hours ago | parent | prev [-] | | Disk where? LLM requests are routed dynamically. You might not even land in the same data center. | | |
| ▲ | FuckButtons 2 hours ago | parent [-] | | But if you have a tiered cache, then waiting several seconds / minutes is still preferable to getting a cache miss. I suspect the larger problem is the amount of tinkering they are doing with the model makes that not viable. |
|
|
|
| ▲ | 8note 5 hours ago | parent | prev | next [-] |
| Reasonably, if I'm in an interactive session, it's going to have breaks of an hour or more. What's driving the one-hour cache? Shouldn't people be able to have lunch, then come back and continue? Are you expecting Claude Code users to not attend meetings? I think product-wise you might need a better story on who uses Claude Code, when, and why. Same thing with session logs, actually: I know folks who are definitely going to try to write a yearly R&D report and monthly timesheets based on text analysis of their Claude Code session files, and they're going to be incredibly unhappy when they find out it's all been silently deleted. |
| |
| ▲ | FuckButtons 2 hours ago | parent [-] | | As with everything Anthropic recently this is a supply constraint issue. They have not planned for scale adequately. |
|
|
| ▲ | toephu2 4 hours ago | parent | prev | next [-] |
| How does the Claude team recommend devs use Claude Code? 1) Is it okay to leave Claude Code CLI open for days? 2) Should we be using /clear more generously? e.g., on every single branch change, on every new convo? |
|
| ▲ | try-working 2 hours ago | parent | prev | next [-] |
| You created this issue by setting a timer for cache clearing. Time is really not a dimension that plays any role in how coding agent context is used. |
|
| ▲ | Confiks 25 minutes ago | parent | prev | next [-] |
| So you made this change completely invisible to the user, without the user being able to choose between the two behaviors, and without even documenting it in the (extremely verbose) changelog [1]? I can't find it, and the Docs Assistant can't find it either (well, it said "I found it!" three times when fed your reply, each time with a non-matching item). I frequently debug issues while keeping my carefully curated but long context active for days. Losing potentially very important context in the middle of a debugging session, resulting in less optimal answers, costs me a lot more money than the cache misses would. In my eyes, Claude Code is mainly a context management tool. I build a foundation of apparent understanding of the problem domain, and then try to work towards a solution in a dialogue. Now you tell me Anthropic has been silently breaking down that foundation without telling me, wasting potentially hours of my time. It's a clear reminder that these closed-source harnesses cannot be trusted (now or in the future), and I should find proper alternatives to Claude Code as soon as possible. [1] https://code.claude.com/docs/en/changelog |
|
| ▲ | dnnddidiej 4 hours ago | parent | prev | next [-] |
| It is too surprising. Time passed should not matter when using AI. Either swallow the cost, or be transparent with the user and offer both options each time. |
|
| ▲ | useyourforce an hour ago | parent | prev | next [-] |
| I actually have a suggestion here - do not hide token count in non-verbose mode in Claude Code. |
|
| ▲ | BoppreH 4 hours ago | parent | prev | next [-] |
| Isn't that exactly what people had been accusing Anthropic of doing, silently making Claude dumber on purpose to cut costs? There should be, at minimum, a warning on the UI saying that parts of the context were removed due to inactivity. |
|
| ▲ | FuckButtons 2 hours ago | parent | prev | next [-] |
| From a utility perspective, using a tiered cache with a much-higher-latency storage option for up to n hours would be very useful for me to prevent that L1 cache miss. |
|
| ▲ | ohcmon 6 hours ago | parent | prev | next [-] |
| Boris, wait, wait, wait. Why not use a tiered cache? Obviously storage is waaay cheaper than recalculating the KV state all the way from the very beginning of the session. No matter how you put this explanation, it still sounds strange. Hell, you could even store the cache on the client if you must. Please, tell me I'm not understanding what is going on... otherwise you really need to hire someone to look at this! |
| |
| ▲ | krackers 5 hours ago | parent | next [-] | | Same question I had in https://news.ycombinator.com/item?id=47819914 I still don't understand it, yes it's a lot of data and presumably they're already shunting it to cpu ram instead of keeping it on precious vram, but they could go further and put it on SSD at which point it's no longer in the hotpath for their inference. | |
| ▲ | solarkraft 5 hours ago | parent | prev | next [-] | | I assume they are already storing the cache on flash storage instead of keeping it all in VRAM. KV caches are huge - that’s why it’s impractical to transfer to/from the client. It would also allow figuring out a lot about the underlying model, though I guess you could encrypt it. What would be an interesting option would be to let the user pay more for longer caching, but if the base length is 1 hour I assume that would become expensive very quickly. | | |
| ▲ | tonyarkles 5 hours ago | parent | next [-] | | Just to contextualize this... https://lmcache.ai/kv_cache_calculator.html. They only have smaller open models, but for Qwen3-32B with 50k tokens it's coming up with 7.62GB for the KV cache. Imagining a 900k session with, say, Opus, I think it'd be pretty unreasonable to flush that to the client after being idle for an hour. | |
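For anyone who wants the back-of-envelope math behind that calculator: the usual per-token KV-cache cost is 2 (K and V) x layers x KV heads x head dim x bytes per element. A sketch with illustrative config values — the parameters below are made up for a generic 32B-class GQA model, not any real model's published config, and quantizing the cache changes the result:

```python
# Back-of-envelope KV-cache size. Per-token cost is
#   2 (K and V) * n_layers * n_kv_heads * head_dim * bytes_per_elem
# Config values below are illustrative, not a real model's config.

def kv_cache_bytes(tokens: int, n_layers: int, n_kv_heads: int,
                   head_dim: int, bytes_per_elem: int = 2) -> int:
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
    return tokens * per_token

# e.g. a hypothetical 32B-class model with grouped-query attention,
# fp16 cache, 50k tokens of context:
size = kv_cache_bytes(tokens=50_000, n_layers=64, n_kv_heads=8, head_dim=128)
print(f"{size / 2**30:.2f} GiB")
```

With these made-up numbers that lands in the ~12 GiB range for 50k tokens, which is why a 900k-token session on a frontier-scale model is implausible to ship to a client on every resume.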
| ▲ | 2001zhaozhao 3 hours ago | parent | prev | next [-] | | I wonder whether prompt caches would be the perfect use case of something like Optane. It's kept for long enough that it's expensive to store in RAM, but short enough that the writes are frequent and will wear down SSD storage | |
| ▲ | ohcmon 5 hours ago | parent | prev [-] | | Yes — encryption is the solution for client side caching. But even if it’s not — I can’t build a scenario in my head where recalculating it on real GPUs is cheaper/faster than retrieving it from some kind of slower cache tier |
| |
| ▲ | rkuska 6 hours ago | parent | prev [-] | | I don't think you can store the cache on client given the thinking is server side and you only get summaries in your client (even those are disabled by default). | | |
| ▲ | sargunv 5 hours ago | parent [-] | | If they really need to guard the thinking output, they could encrypt it and store it client side. Later it'd be sent back and decrypted on their server. But they used to return thinking output directly in the API, and that was _the_ reason I liked Claude over OpenAI's reasoning models. |
|
|
|
| ▲ | chris1993 2 hours ago | parent | prev | next [-] |
| So this explains why resuming a session after a 5-hour timeout basically eats most of the next session. How then to avoid this? |
|
| ▲ | the-grump 5 hours ago | parent | prev | next [-] |
| That is understandable, but the issue is the sudden drop in quality and the silent surge in token usage. It also seems like the warning should be in channel and not on X. If I wanted to find out how broken things are on X, I'd be a Grok user. |
|
| ▲ | infogulch 6 hours ago | parent | prev | next [-] |
| How big is the cache? Could you just evict the cache into cheap object storage and retrieve it when resuming? When the user starts the conversation back up show a "Resuming conversation... ⭕" spinner. |
|
| ▲ | arcza 3 hours ago | parent | prev | next [-] |
| You need to take a serious look at your corporate communications and hire some adults to standardise your messaging, comms and signals. The volatility behind your doors is obvious to us, and you'd impress us much more if you slowed down, took a moment to think about your customers and sent a consistent message. You lost huge trust with the A/B sham test. You lost trust with the enshittification of the tokenizer from 4.6 to 4.7. Why not just say "hey, due to huge increases in energy prices, GPU demand and compute constraints, we've had to increase Pro from $20 to $30"? You might lose 5% of customers. But the shady A/B thing and a dodgy tokenizer increasing burn rate tell everyone, incl. enterprise, that you don't care about honesty and integrity in your product. I hope this feedback helps, because you still stand to make an awesome product. Just show a little more professionalism. |
|
| ▲ | nextaccountic 6 hours ago | parent | prev | next [-] |
| What about selling long-term cache space to users? Or even let the user control the cache expiry on a per-request basis, with a /cache command. That way they decide whether to drop the cache right away, or extend it for 20 hours, etc. It would cost tokens even if the underlying resource is memory/SSD space, not compute. |
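Something like this half-exists at the API layer already: the Messages API lets callers mark prompt-prefix blocks with `cache_control`, and an extended-TTL beta reportedly allows a longer `ttl` on those breakpoints. A sketch of what the per-request knob looks like; treat the exact field names, the placeholder model name, and TTL values as assumptions to verify against current docs (this only builds the payload, it makes no API call):

```python
# Sketch: per-request cache lifetime via cache_control on a prompt block.
# Field names follow the Anthropic Messages API as I understand it; the
# model name is a placeholder and the "1h" TTL was a beta feature.
import json

def build_request(system_prompt: str, user_msg: str, ttl: str = "5m") -> dict:
    return {
        "model": "claude-example",   # placeholder, not a real model id
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": system_prompt,
                # cache breakpoint: the prefix up to here is cached for `ttl`
                "cache_control": {"type": "ephemeral", "ttl": ttl},
            }
        ],
        "messages": [{"role": "user", "content": user_msg}],
    }

req = build_request("big hard-won context...", "continue where we left off",
                    ttl="1h")
print(json.dumps(req["system"][0]["cache_control"]))
```

Exposing that same knob in Claude Code as a /cache command would let the user pay for a longer TTL on exactly the sessions they plan to resume.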
|
| ▲ | troupo 7 hours ago | parent | prev | next [-] |
| > We tried a few different approaches to improve this UX: 1. Educating users on X/social No. You had random developers tweet and reply at random times to random users, while all of your official channels were completely silent, including channels for people who are not terminally online on X |
| |
| ▲ | Terretta 3 hours ago | parent [-] | | There's a cultural divide between SV and the 85% of SMB using M365, for example. When everyone you know uses a thing, I mean, who doesn't?* There's a reason live service games have splash banners at every login. No matter what you pick as an official e-coms channel, most of your users aren't there! * To be fair, of all these firms, ANTHROP\C tries the hardest to remember, and deliver like, some people aren't the same. Starting with normals doing normals' jobs. |
|
|
| ▲ | gverrilla 7 hours ago | parent | prev | next [-] |
| I drop sessions very frequently to resume later - that's my main workflow with how slow Claude is. Is there anything I can do to not encounter this cache problem? |
|
| ▲ | growt 6 hours ago | parent | prev | next [-] |
| Wasn’t cache time reduced to 5 minutes? Or is that just some users interpretation of the bug? |
|
| ▲ | sockaddr 5 hours ago | parent | prev | next [-] |
| Sorry but I think this should be left up to the user to decide how it works and how they want to burn their tokens. Also a countdown timer is better than all of these other options you mention. |
|
| ▲ | kang 5 hours ago | parent | prev | next [-] |
| > tokens written to cache all at once, which would eat up a significant % of your rate limits Construction of context is not an LLM pass - it shouldn't even count towards token usage. The word 'caching' itself says don't recompute me. Since the devs on HN (& the whole world) are buying what looks like nonsense to me: what am I missing? |
|
| ▲ | frumplestlatz 6 hours ago | parent | prev [-] |
| The entire reason I keep a long-lived session around is because the context is hard-won — in terms of tokens and my time. Silently degrading intelligence ought to be something you never do, but especially not for use-cases like this. I’m looking back at my past few weeks of work and realizing that these few regressions literally wasted tens of hours of my time, and hundreds of dollars in extra usage fees. I ran out of my entire weekly quota four days ago, and had to pause the personal project I was working on. I was running the exact same pipeline I’ve run repeatedly before, on the same models, and yet this time I somehow ate a week’s worth of quota in less than 24h. I spent $400 just to finish the pipeline pass that got stuck halfway through. I’m sorry to be harsh, but your engineering culture must change. There are some types of software you can yolo. This isn’t one of them. The downstream cost of stupid mistakes is way, way too high, and far too many entirely avoidable bugs — and poor design choices — are shipping to customers way too often. |
| |
▲ | deaux 3 hours ago | parent | next [-] | | > The entire reason I keep a long-lived session around is because the context is hard-won — in terms of tokens and my time. Silently degrading intelligence ought to be something you never do, but especially not for use-cases like this. Hard agree, would like to see a response to this. | |
▲ | 8note 5 hours ago | parent | prev | next [-] | | As a variation: how does this help me as a customer? If I have to redo the context from scratch, I will pay both the high token cost again and my own time to fill it. The cost of reloading the window didn't go away; it just went up even more. | |
| ▲ | FireBeyond an hour ago | parent | prev [-] | | > I’m sorry to be harsh, but your engineering culture must change. There are some types of software you can yolo. This isn’t one of them. The downstream cost of stupid mistakes is way, way too high, and far too many entirely avoidable bugs — and poor design choices — are shipping to customers way too often. I have to imagine this isn't helped by working somewhere where you effectively have infinite tokens and usage of the product that people are paying for, sometimes a lot. |
|