faitswulff 5 hours ago
The article makes no sense. I can't use OpenRouter as a general-purpose computing device. Why are we comparing a whole computer to a single-purpose SaaS?
mpyne 4 hours ago
They're responding to the people doing things like buying the most expensive Mac they can find specifically to run local inference for their AI agents. Some do it to have control over their ability to use AI. Some do it because they think it will be cheaper than paying a SaaS to generate tokens for them. For that second group, it seems it's not actually cheaper after all, at least at current prices. And I don't expect prices to jump drastically, given how much competition there is in model development.
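The "not actually cheaper" claim boils down to a break-even calculation: the hardware only pays for itself once the avoided API spend exceeds its cost. A minimal sketch of that arithmetic, with all dollar figures and token volumes being illustrative assumptions rather than numbers from the article or any provider's price list:

```python
# Hypothetical break-even sketch: every figure here is an illustrative
# assumption, not a quote from the thread or any provider's pricing.
def breakeven_months(hardware_cost, power_cost_per_month,
                     tokens_per_month, api_price_per_mtok):
    """Months until owning local hardware beats paying an API per token."""
    api_cost_per_month = (tokens_per_month / 1e6) * api_price_per_mtok
    monthly_savings = api_cost_per_month - power_cost_per_month
    if monthly_savings <= 0:
        return None  # the API stays cheaper indefinitely at these rates
    return hardware_cost / monthly_savings

# Example: a $5,000 Mac vs. a hosted model at $1.00 per million tokens,
# generating 50M tokens a month, with ~$10/month in electricity.
months = breakeven_months(5000, 10, 50e6, 1.00)
print(round(months, 1))  # → 125.0 (over ten years to break even)
```

At lighter usage or cheaper API rates the savings go negative and there is no break-even at all, which is the commenter's point about current prices; the calculation also ignores the Mac's resale value and its use as a general-purpose computer, which cuts the other way.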
sheepscreek 3 hours ago
No, that's not the point. I think this is to help people who are considering a beefier Mac so they can run their LLMs on it too. Some in particular want a dedicated Mac Mini or Studio for this purpose. The breakdown, even if slightly flawed, offers good insight into the economics of it. Most people might be better off with OpenRouter models and providers supporting Zero Data Retention. In the cloud, that's as good as it gets for privacy: your data is never retained beyond the life of the request.
tuwtuwtuwtuw 5 hours ago
I think it's because there are a lot of people writing articles about the benefits of running local models. It's fair to say there are daily threads on HN singing the praises of local inference. I also see people buying new hardware where the main trigger is the ability to run local models.