still-learning 3 hours ago
Why is there so much interest in local AI systems? Am I missing something? Cloud providers have scale and expertise that allow for much higher throughput at lower cost. The small latency gains will be nice, but ChatGPT and Claude already come through blazingly fast via their APIs.
dw_arthur 2 hours ago
LLMs are powerful systems that may eventually be a requirement for participating economically in a large portion of the economy. For this and other reasons, it's important that people are able to control their own LLMs. Look at how much Google has changed over the years in the pursuit of profit. What will ChatGPT and Claude look like when they are pushed further down the profit-maximization path?
zihotki 3 hours ago
1. Local models keep becoming more capable.
2. You can easily fine-tune them.
3. The availability of certain cloud models, and your access to them, is something you can't control.
4. Privacy of your data.
threecheese 3 hours ago
The product being evaluated is a home security camera agent; its user base is HomeAssistant-adjacent. The value here is privacy over latency (23 tok/sec isn't amazing for a vision model).
gozucito 3 hours ago
One word: privacy
Wowfunhappy 3 hours ago
I find it so incredibly freaking cool that the machine sitting next to me can generate code, images, and prose from natural language prompts. It's cool that any computer can do that, of course, but it hits different when it's the one right here in my apartment versus a server off in the ether somewhere. It's the sort of thing I think about in utter amazement as I fall asleep at night. I don't know if that's why other people are interested. I'm probably weird. But that's what drives my interest.