thih9 9 hours ago
How much does it cost to run these? I see mentions of Claude and I assume all of these tools connect to a third-party LLM API. I wish these could be run locally too.
kube-system 4 hours ago
You can run openclaw locally against Ollama if you want. But models distilled or quantized enough to run on consumer hardware can be of considerably poorer quality than the full models.
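For the curious, Ollama exposes a local HTTP API on port 11434, so "running against Ollama" amounts to POSTing to that endpoint instead of a third-party API. A minimal sketch (the model tag `llama3.1:8b` and the prompt are placeholders, and whether openclaw talks to this endpoint directly is an assumption on my part):

```python
import json
import urllib.request

# Ollama's default local chat endpoint -- no API key, no third-party calls.
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build the JSON body Ollama's /api/chat endpoint expects."""
    return {
        "model": model,  # placeholder tag; use any model you've pulled locally
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # request one complete response instead of a stream
    }

def chat(model: str, prompt: str) -> str:
    """POST to the local Ollama server and return the assistant's reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# Building the payload needs no server; calling chat() requires Ollama running.
payload = build_chat_request("llama3.1:8b", "Summarize this repo's README.")
print(payload["model"])
```

Any tool that can speak this (or Ollama's OpenAI-compatible endpoint) can be pointed at local hardware, which is where the quality/latency trade-off above kicks in.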
zozbot234 9 hours ago
You need very high-end hardware to run the largest SOTA open models at reasonable latency for real-time use. The minimum requirements just to run them at all are quite low, but then responses will be much slower, and at that speed your agent won't usefully be able to browse the web or use many external services.
hu3 9 hours ago
$3k Ryzen AI Max PCs with 128GB of unified RAM are said to run this reasonably well. But don't quote me on it.