Greed 6 hours ago

If $40k is the barrier to entry for something impressive, that doesn't really sell the use case of local LLMs very well.

For the same price in API calls, you could fund AI driven development across a small team for quite a long while.

Whether that remains the case once those models are no longer subsidized, TBD. But as of today the comparison isn't even close.

jazzyjackson 5 hours ago | parent | next [-]

It’s what a small business might have paid for an on-prem web server a couple of decades ago, before clouds caught on. I figure if a legal or medical practice saw value in LLMs, it wouldn’t be a big deal to shove $50k into a closet.

Greed 3 hours ago | parent [-]

You would still have to do some pretty outstanding volume before that makes sense over choosing the "Enterprise" plan from OpenAI or Anthropic if data retention is the motivation.

Assuming, of course, that your legal team signs off on their assurance not to train on or store your data with said Enterprise plans.

LunaSea 2 hours ago | parent [-]

At least with the server you know what you are buying.

With Anthropic you're paying for "more tokens than the free plan", which has no concrete meaning.

spacedcowboy 3 hours ago | parent | prev | next [-]

It's not. I've got a single one of those 512GB machines and it's pretty damn impressive for a local model.

Greed 3 hours ago | parent [-]

Assuming you ran the gamut up from what you could fit on 32 or 64GB previously, how noticeable is the difference between models you can run on that vs. the 512GB you have now?

I've been working my way up from a 3090 system and I've been surprised by how underwhelming even the finetunes are for complex coding tasks, once you've worked with Opus. Does it get better? As in, noticeably and not just "hallucinates a few minutes later than usual"?

ttoinou 6 hours ago | parent | prev [-]

With an M3 Max with 64GB of unified RAM you can code with a local LLM, so the bar is much lower.

Greed 3 hours ago | parent [-]

But why? Spending several thousand dollars to run sub-par models when the break-even point could still be years away seems bizarre for any real use case where your goal is productivity over novelty. Anyone who has used Codex or Opus can attest that the difference between those and a locally available model like Qwen or Codestral is night and day.

To be clear, I totally get the idea of running local LLMs for toy reasons. But in a business context the sell on a stack of Mac Pros seems misguided at best.

0x457 2 hours ago | parent | next [-]

I started doing it to hedge myself for inevitable disappearance of cheap inference.

robotresearcher 2 hours ago | parent | prev | next [-]

Sometimes you can't push your working data to third party service, by law, by contract, or by preference.

nurettin an hour ago | parent | prev [-]

I ran the Qwen 3.5 35B A3B Q4 model locally on a Ryzen server with a 64k context window, getting 5-8 tokens a second.
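As a rough sanity check on why a model that size fits on commodity hardware, here's the back-of-the-envelope arithmetic; the ~4.5 bits per weight figure for Q4 quantization (4-bit weights plus scale overhead) is my assumption, not something the commenter stated:

```python
# Back-of-the-envelope RAM estimate for a Q4-quantized model's weights.
# Assumption: Q4 quantization averages roughly 4.5 bits per weight
# once per-block quantization scales are included.

def q4_weight_gib(params_billion: float) -> float:
    """Approximate weight footprint in GiB at ~4.5 bits per weight."""
    bits_per_weight = 4.5
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 2**30

# A 35B-parameter model at Q4 is roughly 18 GiB of weights, which
# leaves headroom in 64GB of RAM for the KV cache of a long context.
print(f"{q4_weight_gib(35):.1f} GiB")
```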

It is the first local model I've tried that could reason properly, similar to Gemini 2.5 or Sonnet 3.5. I gave it some tools to call and asked Claude to order it around (download quotes, print charts, set up a GNOME extension); even Claude was sort of impressed that it could get the job done.
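A setup like that could be sketched as follows, assuming the local model sits behind an OpenAI-compatible /v1/chat/completions endpoint (as llama.cpp's server or LM Studio expose); the model name and the `download_quotes` tool schema here are hypothetical placeholders, not the commenter's actual tools:

```python
import json

def build_request(prompt: str, model: str = "qwen-35b-a3b-q4") -> dict:
    """Build a chat-completions payload exposing one callable tool.

    Any OpenAI-compatible local server accepts this request shape;
    the model name and tool definition are placeholders.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "download_quotes",  # hypothetical tool
                "description": "Fetch historical quotes for a ticker",
                "parameters": {
                    "type": "object",
                    "properties": {"ticker": {"type": "string"}},
                    "required": ["ticker"],
                },
            },
        }],
    }

req = build_request("Download quotes for AAPL and chart them")
print(json.dumps(req, indent=2)[:80])
```

An orchestrating agent would POST this payload to the local server and, when the response contains a `tool_calls` entry, execute the named tool and feed the result back as a `tool` role message.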

Point is, it is really close. It isn't Opus 4.5 yet, but it's very promising given the size. Local is definitely getting there, even without GPUs.

But you're right, I see no reason to spend right now.