| ▲ | pvtmert 2 hours ago |
Does one really need to _buy_ completely new desktop hardware (i.e. a Mac mini) to _run_ a simple request/response program? That's setting aside the fact that you can run LLMs via ollama or similar directly on the device, though I'd guess the token/s speed wouldn't be very good...
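To be concrete about how small the request/response part is, here's a minimal sketch in Python, assuming a local ollama install serving its default HTTP API on localhost:11434 and a model that has already been pulled (the model name and prompt are just examples):

    # Minimal request/response against a local ollama server (default port 11434).
    import json
    import urllib.request

    payload = json.dumps({
        "model": "llama3",  # illustrative; any locally pulled model works
        "prompt": "Reply with one line about Mac minis.",
        "stream": False,    # ask for a single JSON response instead of a stream
    }).encode("utf-8")

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

    # The non-streaming response is a JSON object whose "response" field
    # holds the generated text.
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])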
| ▲ | ErneX 41 minutes ago | parent | next [-] |
You don’t, but for those who would like the agent to interact with Apple-provided services like Reminders and iMessage, it works well for that.
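For what it's worth, that kind of interaction doesn't require anything exotic; a local agent can drive those apps through macOS scripting. A rough sketch, assuming the agent runs as an ordinary macOS process with the Automation permission granted, and with the reminder text purely made up:

    # Sketch: create a reminder from a local agent process on macOS by
    # shelling out to osascript with the Reminders scripting dictionary.
    import subprocess

    script = '''
    tell application "Reminders"
        make new reminder with properties {name:"Buy coffee beans"}
    end tell
    '''

    # osascript runs the AppleScript; macOS prompts once for Automation
    # permission the first time this process targets Reminders.
    subprocess.run(["osascript", "-e", script], check=True)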
| ▲ | titanomachy 2 hours ago | parent | prev | next [-] |
I’m pretty sure people are using them for local inference. Token rates can be acceptable if you max out the specs. If it were just the harness, they’d use a $20 Raspberry Pi instead.