vintagedave 9 hours ago

Same. I stopped my Pro subscription yesterday after entering the week with 70% of my tokens already used by Monday morning (on light, small weekend projects, the kind of thing I had worked on in the past while barely noticing a dent in my usage). Support was... unhelpful.

It's been funny watching my own attitude to Anthropic change, from being an enthusiastic Claude user to pure frustration. But even that wasn't the trigger to leave; it was the attitude Support showed. I figure, if you mess up as badly as Anthropic has, you should at least show some effort towards your customers. Instead I just got a mass of standardised replies, even after being told in the thread that I'd be escalated to a human. Nothing can sour you on a company more. I'm forgiving of bugs, we've all been there, but indifference and unhelpful form replies full of corporate uselessness really annoy me.

So if 4.7 is here? I'd prefer they forget new models and revert the harness to its January state. Even then, I already moved to Codex a few days ago, and I won't be maintaining two subscriptions, so this is a real move. It clearly has its own issues, but I'm getting work done. That's more than I can say for Claude.

HauntingPin 3 minutes ago | parent | next [-]

I've given up on Claude after seeing the response quality degrade so much over the past two weeks, and now this? I've unsubscribed. I don't know why people are still giving this company money.

spyckie2 8 hours ago | parent | prev | next [-]

> It's been funny watching my own attitude to Anthropic change, from being an enthusiastic Claude user to pure frustration.

You were enthusiastic because it was a great product at an unsustainable price.

It's clear that Anthropic is now harnessing their model, because giving access to the full model is too expensive at the $20/month that consumers have settled on as the price point they want to pay.

I wrote a more in-depth analysis here; there's probably too much to meaningfully summarize in a comment: https://sustainableviews.substack.com/p/the-era-of-models-is...

rzk 4 hours ago | parent | next [-]

Off topic, but I really like the writing style on your blog. Do you have any advice for improving my own? In an older comment[1], you mentioned the craft of sharpening an idea to a very fine, meaningful, well-written point. Are there any books or resources you'd recommend for honing that craft? Thanks in advance.

[1] https://news.ycombinator.com/item?id=44082994

spyckie2 2 hours ago | parent | next [-]

The thing that inspires my writing is that the best sentences are self-evident: you declare them without evidence and they feel intuitively right to most people. They resonate, either as the reader's lived experience or as the inevitable conclusion of a line of thinking.

Making a sentence like that requires deeply understanding a problem space to the point where these sentences emerge, rather than any "craft" of writing.

So the craft is thinking through a topic, usually by writing about it, then deleting everything you've written once you arrive at the self-evident position, and then writing from the vantage point of that self-evident statement.

I feel that writing is a personal craft and you must dig it out of yourself through the practice of it, rather than learn it from others. The usage of AI as a resource makes this much clearer to me. You must be confident in your own writing not because it is following best practices or techniques of others but because it is the best version of your own voice at the time of being written.

bergheim 2 hours ago | parent | prev [-]

Curious why you think that? Stuff like

> Yes, there is a relative scale level...

> Yes, having the smartest model will...

> yes Chinese AI companies have ...

yes yes yes, I didn't say anything, why write in a way that insinuates that I was thinking that?

I mean it doesn't come off as AI slop, so that's yay in 2026. But why do you think it is so good?

spyckie2 2 hours ago | parent [-]

Haha, it is poorly written; it's one of my pieces with the fewest drafts. I just wrote it and clicked submit to get the thoughts out of my head.

I think he was referring to the art of refining an idea, though, which I do have something to say about in response to his comment.

adrian_b 7 hours ago | parent | prev | next [-]

I agree with what you have written, which is why I would never pay for a subscription to an external AI provider.

I prefer to run inference on my own HW, with a harness that I control, so I can choose for myself what compromise between speed and quality of results is appropriate for my needs.

When I have complete control, and therefore predictable performance, I can work more efficiently, even with slower HW and somewhat inferior models, than when I am at the mercy of an external provider.

brightball 4 hours ago | parent [-]

What’s your setup?

adrian_b 3 hours ago | parent [-]

For now, the most suitable computer that I have for running LLMs is an Epyc server with 128 GB DRAM and 2 AMD GPUs with 16 GB of HBM memory each.

I have a few other computers with 64 GB DRAM each and with NVIDIA, Intel or AMD GPUs. Fortunately all that memory was bought long ago, because today I could not afford to buy extra memory.

However, just last week I started working on modifying llama.cpp to allow optimized execution with the weights stored on SSDs, e.g. on a couple of PCIe 5.0 SSDs, in order to be able to use models bigger than those that fit in 128 GB, which is the limit of what I have tested until now.

By coincidence, this week there have been a few HN threads reporting similar work on running big models locally with weights stored on SSDs, so I believe this will become more common in the near future.

The speeds previously reported for running from SSDs range from one token every few seconds to a few tokens per second. While such speeds would be low for a chat application, they can be adequate for a coding assistant, if the improved quality of the generated code compensates for the lower speed.

brightball 3 hours ago | parent [-]

Thank you for that, it's very interesting. I keep wanting to find time to try out a local-only setup with an NVIDIA 4090 and 64 GB of RAM. It seems like it may be time to try it out.

vintagedave 3 hours ago | parent | prev | next [-]

My bad — I had Max, so more than $20. I can’t edit the comment any more. Can’t keep track of the names. I wonder when ‘pro’ started to mean ‘lowest tier’.

But your article is interesting. You think some of the degradation is because, when I think I'm using Opus, they're giving me Sonnet invisibly?

spyckie2 2 hours ago | parent [-]

Hard to say, but the fact is the intelligence was there and now it's not.

Maybe they are serving Sonnet, or maybe a distilled Opus, or maybe Opus with a smaller context; I'm not quite sure. But intelligence costs compute, so less intelligence means cheaper compute.

joefourier 7 hours ago | parent | prev | next [-]

I used the $60/mo subscription (and I bet most developers get access to AI agents via their company), and there was no difference. They should have reduced the rate limits, or offered a new model, anything except silently reducing the quality of their flagship product to cut costs.

The cost of switching is too low for them to be able to get away with the standard enshittification playbook. It takes all of 5 minutes to get a Codex subscription and it works almost exactly the same, down to using the same commands for most actions.

brightball 4 hours ago | parent [-]

Thank goodness for capitalism providing multiple competitors to multibillion-dollar companies.

colordrops 4 hours ago | parent | prev [-]

So instead of breaking shit they should have just increased their prices.

suzzer99 8 hours ago | parent | prev | next [-]

It seems like the big companies they're providing Claude to are their only concern right now.

sethhochberg 7 hours ago | parent [-]

Corporate software in general is often chosen based on the value returned simply being "good enough" most of the time, because the actual product being purchased is good controls for security, compliance, etc.

A corporate purchaser buying hundreds to thousands of Claude seats doesn't care very much about perceived fluctuations in model performance from release to release. They're invested in ties into their SSO and SIEM and every other internal system, they've trained their employees, and there's a substantial cost to switching even in a rapidly moving industry.

Consumer end-users are much less loyal, by comparison.

boppo1 8 hours ago | parent | prev | next [-]

I haven't been using my Claude sub lately, but I liked 4.6 three weeks ago. Did something change?

GenerocUsername 7 hours ago | parent | next [-]

Two weeks ago the rolling session usage allowance plummeted to borderline unusable. I'd say I now get a weekly output equivalent to two session windows from before the change.

conception 3 hours ago | parent | next [-]

https://marginlab.ai/trackers/claude-code/

Seems like there is evidence for that.

fooster 4 hours ago | parent | prev [-]

I didn't experience that at all. I know there are lots of rumblings around here about that, but I'm posting this to show this wasn't a universal experience.

UltraSane an hour ago | parent | prev [-]

Even just in chats with Opus 4.6 I noticed hitting limits so much faster.

dakolli 7 hours ago | parent | prev | next [-]

It's funny watching LLM users act like gamblers: every other week swearing by one model and cursing another, like a gambler who thinks a certain slot machine or table is cold this week. These LLM companies are literally building slot-machine mechanics into their UIs too; I don't think this phenomenon is a coincidence.

Stop using these dopamine brain poisoning machines, think for yourself, don't pay a billionaire for their thinking machine.

Majromax 5 hours ago | parent | next [-]

Don't confuse the many voices of a crowd with a single person's fickle view. If you can track an individual person or organization who changes their mind 'every other week' then more power to you, but unless you're performing that longitudinal study you are simply seeing differential levels of enthusiasm.

dakolli an hour ago | parent [-]

I get what you mean, but they're all over Twitter; it's not random levels of enthusiasm. Follow a few heavy LLM users who tweet a lot and you'll see what I mean.

hk__2 4 hours ago | parent | prev [-]

> Stop using these dopamine brain poisoning machines, think for yourself, don't pay a billionaire for their thinking machine.

Yeah, and also stop using these things they call "computers", think for yourself, write your texts by hand, send letters to people. /s
