kator 3 hours ago
Some users are moving to local models, I think, because they want to avoid the agent's cost, or because they think it'll be more secure (it's not). The Mac mini has unified memory and can dynamically allocate memory to the GPU by borrowing from the general RAM pool, so you can run large local LLMs without buying a massive (and expensive) GPU.
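As a back-of-the-envelope sketch of why unified memory matters here: the weights of a quantized model have to fit in whatever memory the GPU can see, and on Apple Silicon that's the shared RAM pool. The function name, the 4-bit/fp16 examples, and the ~20% runtime-overhead factor below are my own illustrative assumptions, not figures from the thread.

```python
def model_ram_gb(params_billion: float, bits_per_weight: float,
                 overhead: float = 1.2) -> float:
    """Rough RAM needed to hold model weights, with ~20% extra
    assumed for KV cache and runtime overhead (an estimate)."""
    weight_bytes = params_billion * 1e9 * (bits_per_weight / 8)
    return weight_bytes * overhead / 1e9

# A hypothetical 70B-parameter model at 4-bit quantization:
print(round(model_ram_gb(70, 4)))   # ~42 GB of unified memory
# The same model at fp16:
print(round(model_ram_gb(70, 16)))  # ~168 GB
```

The point of the arithmetic: a 4-bit 70B model lands in the tens of gigabytes, which shared system RAM can plausibly cover, while the fp16 version would demand a discrete-GPU budget few consumer machines have.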
ErneX 30 minutes ago
I think any of the decent open models that would be useful for this claw frenzy require way more RAM than any Mac mini you can possibly configure. The whole point of the Mini is that the agent can interact with all your Apple services: Reminders, iMessage, iCloud. If you don't need any of that, just use whatever you already have, or get a cheap VPS, for example.
duskdozer 29 minutes ago
> they think it'll be more secure (not)

For these types of tasks, or for LLMs in general?