jayd16 3 days ago

In this imaginary timeline where initial investments keep increasing this way, how long before we see a leak shutter a company? Once the model is out, no one would pay for it, right?

jsheard 3 days ago | parent | next [-]

Whatever happens if/when a flagship model leaks, the legal fallout would be very funny to watch. Lawyers desperately trying to thread the needle such that training on libgen is fair use, but training on leaked weights warrants the death penalty.

marcosdumay 3 days ago | parent | prev | next [-]

In this imaginary reality where LLMs just keep getting better and better, all a leak means is that you eat into your capital until you release your next generation. And you will want to release that very quickly either way, so the problem should last a few months at most.

And if LLMs don't keep getting qualitatively more capable every few months, that means that all this investment won't pay off and people will soon just use some open weights for everything.

wmf 3 days ago | parent | prev | next [-]

You can't run Claude on your PC; you need servers. Companies that have that kind of hardware are not going to touch a pirated model. And the next model will be out in a few months anyway.

jayd16 3 days ago | parent [-]

If it were worth it, you'd see some easy self-hostable package, no? And by definition, it's profitable to self-host, or these AI companies are in trouble.

serf 2 days ago | parent | next [-]

I think this misunderstands the scale of these models.

And honestly, I don't think a lot of these companies would turn a profit on pure utility -- the electric and water companies don't advertise the way these groups do; I think that probably means something.

jayd16 2 days ago | parent [-]

What's the scale for inference? Is it truly that immense? Can you ballpark what you think would make such a thing impossible?

> the electric and water company doesn't advertise like these groups do

I'm trying to understand what you mean here. In the US these utilities usually operate as monopolies, so there's no point in advertising. Cell service has plenty of advertising, though.

tick_tock_tick 2 days ago | parent | prev | next [-]

You need 100+ GB of RAM and a top-of-the-line GPU to run legacy models at home. Maybe, if you push it, that setup will let you handle 2, maybe 3 people. You think anyone is going to make money on that vs $20 a month to Anthropic?

lelanthran 2 days ago | parent | next [-]

> You need 100+ GB of RAM and a top-of-the-line GPU to run legacy models at home. Maybe, if you push it, that setup will let you handle 2, maybe 3 people.

This doesn't seem correct. I run legacy models with only slightly reduced performance on 32GB of RAM and a 12GB-VRAM GPU right now, and that's not an expensive setup.
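For context, a minimal sketch of the kind of setup I mean, using llama-cpp-python with a quantised GGUF model. The model file and layer split below are placeholders - you'd pick whatever quant actually fits in 12GB of VRAM and spill the rest to system RAM:

    # Minimal local-inference sketch with llama-cpp-python and a quantised GGUF model.
    # The model file and n_gpu_layers are placeholders: choose a quant that fits
    # ~12GB of VRAM, offload as many layers as the card holds, run the rest on CPU/RAM.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/some-instruct-model.Q4_K_M.gguf",  # hypothetical file
        n_gpu_layers=30,   # partial offload for a 12GB card; -1 means everything on GPU
        n_ctx=4096,
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Draft a short email declining a meeting."}],
        max_tokens=256,
    )
    print(out["choices"][0]["message"]["content"])

That's the whole stack for household-level use; the "slightly reduced performance" is just a smaller, quantised model, not a different kind of system.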

> You think anyone is going to make money on that vs $20 a month to anthropic?

Why does it have to be run as a profit-making machine for other users? When it's running at home, it can run as a useful service for the entire household. After all, we're not talking about specialised coding agents using this[1], just normal user requests.

====================================

[1] For an outlay of $1k for a new GPU I can run a reduced-performance coding LLM. Once again, when it's only myself using it, the economics work out. I don't need the agent to be fully autonomous because I'm not vibe coding - I can take the reduced-performance output, fix it and use it.

tick_tock_tick a day ago | parent | next [-]

Just your GPU, not counting the rest of the system, costs 4 years of subscription, and with the sub you get the new models, which your existing hardware will likely not be able to run at all.

It's closer to $3k to build a machine that you can reasonably use, which is 12 whole years of subscription. It's not hard to see why no one is doing it.
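To make the arithmetic explicit, taking the figures above as assumptions ($20/month subscription, ~$1k for the GPU, ~$3k for a whole machine):

    # Back-of-the-envelope payback, using the figures quoted above as assumptions:
    # $20/month subscription vs ~$1,000 for a GPU or ~$3,000 for a whole machine.
    SUBSCRIPTION_PER_MONTH = 20.0

    def months_of_subscription(hardware_cost: float) -> float:
        """How many months of subscription the hardware outlay is equivalent to."""
        return hardware_cost / SUBSCRIPTION_PER_MONTH

    for label, cost in [("GPU only", 1_000), ("whole machine", 3_000)]:
        months = months_of_subscription(cost)
        print(f"{label}: ${cost} ~= {months:.0f} months ~= {months / 12:.1f} years of subscription")
    # GPU only: $1000 ~= 50 months ~= 4.2 years of subscription
    # whole machine: $3000 ~= 150 months ~= 12.5 years of subscription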

lelanthran a day ago | parent [-]

> Just your GPU, not counting the rest of the system, costs 4 years of subscription

With my existing setup for non-coding tasks (the GPU is a 3060 12GB, which I bought before I wanted local LLM inference but now use for that purpose anyway), the GPU alone was a once-off ~$350 cost (https://www.newegg.com/gigabyte-windforce-oc-gv-n3060wf2oc-1...).

It gives me literally unlimited requests, not the pseudo-unlimited I get from ChatGPT, Claude and Gemini.

> and with the sub you get the new models, which your existing hardware will likely not be able to run at all.

I'm not sure about that. Why wouldn't the new models run on a 4-year-old GPU? Wasn't a primary selling point of the newer models that "they use less computation for inference"?

Now, of course there are limitations, but for non-coding usage (of which there is a lot) this cheap setup appears to be fine.

> It's closer to $3k to build a machine that you can reasonably use, which is 12 whole years of subscription. It's not hard to see why no one is doing it.

But there are people doing it. Lots, actually, and not just for research purposes. With costs apparently still falling, each passing month makes it more viable to self-host, not less.

The calculus looks even better when you have a small group (say 3 - 5 developers) needing inference for an agent; then you can get a 5060 Ti with 16GB of VRAM for slightly over $1000. The limited RAM means it won't perform as well, but at that performance the agent will still be capable of writing 90% of boilerplate, making edits, etc.

These companies (Anthropic, OpenAI, etc) are at the bottom of the value chain, because they are selling tokens, not solutions. When you can generate your own tokens continuously 24x7, does it matter if you generate at half the speed?

tick_tock_tick a day ago | parent [-]

> does it matter if you generate at half the speed?

Yes, massively. It's not even linear: 1/2 speed is probably 1/8 or less of the value of "full speed". It's going to be even more pronounced as "full speed" gets faster.

lelanthran 21 hours ago | parent [-]

> Yes, massively. It's not even linear: 1/2 speed is probably 1/8 or less of the value of "full speed". It's going to be even more pronounced as "full speed" gets faster.

I don't think that's true for most use-cases (content generation, including artwork, code/software, reading material, summarising, etc.). Something that takes a day without an LLM might take only 30 minutes with GPT-5 (artwork), or maybe one hour with Claude Code.

Does the user really care that their full-day artwork task is now one hour and not 30 minutes? Or that their full-day coding task is now only two hours, and not one hour?

After all, from day one of the ChatGPT release, literally no one complained that it was too slow (and it was much slower than it is now).

Right now no one is asking for faster token generation; everyone is asking for more accurate solutions, even at the expense of speed.

jayd16 2 days ago | parent | prev [-]

Plus, when you're hosting it yourself, you can be reckless with what you feed it. Pricing in the privacy gain, it seems like self-hosting would be worth the effort/cost.

jayd16 2 days ago | parent | prev | next [-]

Can you explain to me where Anthropic (or its investors) expect to be making money if that's what it actually costs to run this stuff?

lelanthran 2 days ago | parent [-]

> Can you explain to me where Anthropic (or its investors) expect to be making money if that's what it actually costs to run this stuff?

Not the GP (in fact I just replied to GP, disagreeing with them), but I think that economies of scale kick in when you are provisioning M GPUs for N users and both M and N are large.

When you are provisioning for N=1 (a single user), then M=1 is the minimum you need, which makes it very expensive per user. When N=5 and M is still 1, then the cost per user is roughly a fifth of the original single-user cost.
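A toy version of that ratio (the per-GPU cost below is a made-up illustration, not anyone's real number):

    # Toy cost-per-user model: M GPUs shared by N users.
    # The dollar figure is an assumed illustration, not a real provider cost.
    GPU_COST_PER_MONTH = 500.0  # amortised hardware + power, assumed

    def cost_per_user(m_gpus: int, n_users: int) -> float:
        return m_gpus * GPU_COST_PER_MONTH / n_users

    print(cost_per_user(1, 1))           # 500.0 -> one user buying their own GPU
    print(cost_per_user(1, 5))           # 100.0 -> the same GPU shared by a household
    print(cost_per_user(1_000, 50_000))  # 10.0  -> provider-scale sharing

The providers' bet is that large N against a relatively smaller M, plus batching many requests onto the same hardware, pushes the per-user cost below the subscription price.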

quotemstr 2 days ago | parent | prev [-]

Does your "self-hostable package" come with its own electric substation?

jayd16 2 days ago | parent [-]

You're saying that's needed for inference?

fredoliveira 3 days ago | parent | prev | next [-]

> Once the model is out, no one would pay for it, right?

Well, who does the inference at the scale we're talking about here? That's (a key part of) the moat.

petesergeant 2 days ago | parent | prev | next [-]

gpt-oss-120b has cost OpenAI virtually all of the revenue it was getting from me, because I can pay Cerebras and Groq a fraction of what I was paying for o4-mini and get dramatically faster inference, for a model that passes my eval suite. This is to say, I think high-quality "open" models that are _good enough_ are a much bigger threat. Even more so since OpenRouter has essentially commoditized generation.

Each new commercial model needs to be not just better than the previous version; it needs to be significantly better than the SOTA open models at the bread-and-butter generation before I'm willing to pay the developer a premium to use their resources for it.
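That commoditization is mostly mechanical: these providers expose OpenAI-compatible endpoints, so switching is roughly a base-URL and model-name change. A sketch, assuming current OpenRouter conventions (verify the exact endpoint and model id against the provider docs before relying on them):

    # Sketch of why switching providers is cheap: the same OpenAI-style client,
    # just a different base_url and model name. Endpoint and model id are assumed
    # from current OpenRouter conventions; check the provider docs.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",  # was https://api.openai.com/v1
        api_key=os.environ["OPENROUTER_API_KEY"],
    )

    resp = client.chat.completions.create(
        model="openai/gpt-oss-120b",               # was "o4-mini"
        messages=[{"role": "user", "content": "Summarise this ticket in one line."}],
    )
    print(resp.choices[0].message.content)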

paganel 3 days ago | parent | prev [-]

There's the opportunity cost here of those resources (and I'm not talking only about the money) not being spent on power generation that actually benefits the individual consumer.