AlexandrB 5 hours ago

How do we tell the doordash/uber playbook from the moviepass playbook? Because the latter would be awful to build your business on.

parliament32 5 hours ago | parent [-]

Moviepass (afaik) was an attempt at the exact same playbook; it just failed.

Anthropic will also fail when the competition is.. near-equivalent-capability DeepSeek/Qwen/Llama on a $1k GPU with a break-even of 5 months of subscription costs. The value is simply not there for what they would need to charge to become profitable.
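The break-even claim is plain arithmetic; a quick sketch, assuming a $1k one-time GPU purchase against a $200/mo plan (both figures are taken from the thread, not from any vendor's actual pricing):

```python
# Months until a one-time local-GPU purchase pays for itself
# versus an ongoing subscription. Both figures are assumptions
# from the comment: a $1,000 GPU and a $200/mo plan.
gpu_cost = 1000        # one-time hardware outlay, USD
subscription = 200     # monthly subscription cost, USD

break_even_months = gpu_cost / subscription
print(break_even_months)  # 5.0
```

At a cheaper $20/mo tier the same math stretches break-even to 50 months, which is roughly the longevity of the hardware.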

gruez 4 hours ago | parent [-]

>when the competition is.. near-equivalent-capability DeepSeek/Qwen/Llama on a $1k GPU with a break-even of 5 months of subscription costs

Lol no. Chinese AIs are definitely not "near-equivalent-capability". The empirical proof is pretty obvious: how many people have you heard talking about using their codex/claude code subscription vs their z.ai or qwen subscription? Moreover, even the Chinese models require epic amounts of GPU memory to run the full version, e.g. https://apxml.com/models/glm-51 needs 1515 GB to run, and that's with a measly 1024-token context. To get it to run on your "$1k GPU" you'd need to quantize it, making it even dumber.

parliament32 4 hours ago | parent [-]

Today, sure. But we already see diminishing returns with Claude releases, and we know the open models are closing the gap (~6 months behind according to the benchmarks). And when the pitch is "our models are 5% better but cost $200/mo.. also here's a mountain of restrictions" it just won't make sense anymore. Give it a year or two.

I could see the "avoid the hardship of running a local model for $20/mo" angle but Anthropic has shown they have little interest in those customers.

gruez 3 hours ago | parent [-]

>and we know the open models are closing the gap (~6 months behind according to the benchmarks).

Looking at just the benchmarks might be misleading: https://x.com/scaling01/status/2050616057191072161

parliament32 2 hours ago | parent [-]

Good article. But it concludes with "Open models may be only 4–5 months behind on coding-heavy, benchmark-visible tasks... the gap is likely much larger and closer to 8 months."

So, fine. In 2024, being 8 months behind was massive. In 2025, pretty big. This year.. I guess CC has improved a bit between October and now? How much do you think it'll matter a few more years down the line?

Even now.. I bet a non-trivial number of people would happily be 8 months behind just to avoid another rent-seeker. And this will only get worse over time, which makes it an unwinnable situation for Anthropic. Hence all the panicked flailing: restricting tooling and trying to build something even resembling a moat.