icedchai 2 days ago

Outside of YouTube influencers, I doubt many home users are buying a 512G RAM Mac Studio.

FireBeyond 2 days ago | parent | next [-]

I doubt many of them are, either.

When the 2019 Mac Pro came out, it was "amazing" how many still photography YouTubers all got launch day deliveries of the same BTO Mac Pro, with exactly the same spec:

18 core CPU, 384GB memory, Vega II Duo GPU and an 8TB SSD.

Or, more likely, Apple worked with them and made sure each of them had this Mac on launch day, while they waited for the model they actually ordered. Because they sure as hell didn't need an $18,000 computer for Lightroom.

lukeh 2 days ago | parent [-]

Still rocking a 2019 Mac Pro with 192GB RAM for audio work, because I need the slots and I can’t justify the expense of a new one. But I’m sure an M4 Mini is faster.

NSUserDefaults 2 days ago | parent [-]

How crazy do you have to get with # of tracks or plugins before it starts to struggle? I was under the impression that most studios would be fine with an Intel Mac Mini + external storage.

DrStartup 2 days ago | parent | prev | next [-]

I'm neither and have 2. 24/7 async inference against GitHub issues. Free. (once you buy the Macs, that is)

madeofpalk 2 days ago | parent | next [-]

I'm not sure who 'home users' are, but I doubt they're buying two $9,499 computers.

trvz 2 days ago | parent [-]

Peanuts for people who make their living with computers.

jon-wood 2 days ago | parent | next [-]

So, not a home user then. If you make your living with computers in that manner you are by definition a professional, and just happen to have your work hardware at home.

selfhoster11 8 hours ago | parent | prev | next [-]

In the US, yes.

Waterluvian 2 days ago | parent | prev | next [-]

I wonder what the actual lifetime amortized cost will be.

oidar 2 days ago | parent [-]

Every time I'm tempted to get one of these beefy mac studios, I just calculate how much inference I can buy for that amount and it's never a good deal.
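As a rough back-of-envelope version of that calculation (every figure below, from the hardware price to the per-million-token rate and the daily usage, is an illustrative assumption, not a quote):

    # Rough break-even sketch: buy a big Mac Studio vs. pay per token.
    # All numbers are placeholder assumptions.
    hardware_cost = 9_499             # assumed machine price, USD
    power_cost_per_year = 150         # assumed electricity for always-on use, USD
    api_price_per_mtok = 10.0         # assumed blended $/1M tokens for a hosted model
    local_tokens_per_day = 5_000_000  # assumed sustained local throughput you actually use

    api_cost_per_year = local_tokens_per_day * 365 / 1_000_000 * api_price_per_mtok
    years_to_break_even = hardware_cost / (api_cost_per_year - power_cost_per_year)
    print(f"API equivalent: ${api_cost_per_year:,.0f}/yr, break-even in {years_to_break_even:.1f} years")

Whether it ever pays off depends almost entirely on how many tokens you actually push through it, which is why the math rarely works out for light users.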

embedding-shape 2 days ago | parent | next [-]

Every time someone brings that up, it reminds me of frantically trying to finish things as quickly as possible while either my quota slowly goes down with each API request or the pay-as-you-go bill creeps up 0.1% per request.

Nowadays I fire off async jobs that involve 1000s of requests and billions of tokens, yet it costs basically the same as if I hadn't.

Maybe it takes a different type of person than me, but all these "pay-as-you-go"/tokens/credits platforms make me nervous; I either end up not using them or spend my time trying to "optimize". Hardware and infrastructure I can invest in once and run at home is something my head has no problem just rolling with.

noname120 2 days ago | parent [-]

But the downside is that you are stuck with inferior LLMs. None of the best models have open weights: Gemini 3.5, Claude Sonnet/Opus 4.5, ChatGPT 5.2. The best model with open weights performs an order of magnitude worse than those.

embedding-shape 2 days ago | parent [-]

The best weights are the weights you can train yourself for specific use cases. As long as you have the data and the infrastructure to train/fine-tune your own small models, you'll get drastically better results.

And just because you're mostly using local models doesn't mean you can't use API hosted models in specific contexts. Of course, then the same dread sets in, but if you can do 90% of the tokens with local models and 10% with pay-per-usage API hosted models, you get the best of both worlds.
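A minimal sketch of that 90/10 split, assuming a local OpenAI-compatible server (for example a llama.cpp server on localhost:8080) handles the bulk of the tokens and a hosted API is only hit when a request is flagged as needing a frontier model; the endpoints, model names, and routing heuristic are all placeholder assumptions:

    # Route routine prompts to a local OpenAI-compatible server and
    # reserve the hosted, pay-per-token API for the hard cases.
    import os
    import requests

    LOCAL_URL = "http://localhost:8080/v1/chat/completions"    # e.g. llama.cpp server
    HOSTED_URL = "https://api.openai.com/v1/chat/completions"  # pay-per-token fallback

    def complete(prompt: str, needs_frontier: bool = False) -> str:
        if needs_frontier:
            url = HOSTED_URL
            headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}
            model = "gpt-4o"        # placeholder hosted model name
        else:
            url = LOCAL_URL
            headers = {}
            model = "local-model"   # whatever the local server is serving
        resp = requests.post(url, headers=headers, json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }, timeout=120)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

The interesting part is the needs_frontier decision, not the plumbing: as long as most requests take the local path, the per-token bill stays a rounding error.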

asimovDev 2 days ago | parent | prev | next [-]

Anyone buying these is usually more concerned with being able to run stuff on their own terms without handing their data off. Otherwise it's probably always cheaper to rent compute for intense stuff like this.

dontlaugh 2 days ago | parent | prev | next [-]

For now, while everything you can rent is sold at a loss.

stingraycharles 2 days ago | parent | prev | next [-]

Never mind the fact that there are a lot of high-quality (the highest-quality?) models that are not released as open source.

bee_rider 2 days ago | parent | prev [-]

Are the inference providers profitable yet? Might be nice to be ready for the day when we see the real price of their services.

Nextgrid 2 days ago | parent [-]

Isn't it then even better to enjoy cheap inference thanks to techbro philanthropy while it lasts? You can always buy the hardware once the free money runs out.

bee_rider a day ago | parent [-]

Probably depends on what you are interested in. IMO, setting up local programs is more fun anyway. Plus, any project I’d do with LLMs would just be for fun and learning at this point, so I figure it is better to learn skills that will be useful in the long run.

icedchai 2 days ago | parent | prev | next [-]

Heh. I'm jealous. I'm still running a first gen Mac Studio (M1 Max, 64 gigs RAM.) It seemed like a beast only 3 years ago.

servercobra a day ago | parent | prev [-]

Interesting. Answering them? Solving them? Looking for ones to solve?

7e 2 days ago | parent | prev | next [-]

That product can still steal fab slots from cheaper, more prosumer products.

kridsdale1 2 days ago | parent | prev | next [-]

I did. Admittedly it was for video processing at 8K, which uses more than 128GB of RAM, but I am NOT a YouTuber.

mirekrusin 2 days ago | parent | prev [-]

Of course they're not. Everybody is waiting for next generation that will run LLMs faster to start buying.

rbanffy a day ago | parent [-]

Every generation runs LLMs faster than the previous one.