crimsoneer 10 hours ago
Not everything needs to be for everyone. I think this is super cool - I run a local transcription tool on my laptop, and the idea of miniaturising it really appeals to me.
buran77 9 hours ago | parent | next
> Not everything needs to be for everyone

I wouldn't dare suggest that. The RPi was never for everyone, yet it turned out it was for many. It was small but powerful for its size, it was low power, it was extremely flexible, it had great software support, and, last but not least, it was dirt cheap. There was nothing like it on the market. They need to target a "minimum viable audience" with a unique value proposition, otherwise they'll just Rube-Goldberg themselves into irrelevance. This HAT is a convoluted way to change the parameters of an existing compromise and turn it into a different but equally difficult one: worse performance, better efficiency, added cost, and it doesn't differentiate itself from competing Hailo-10H-based products that work with any system, not just the RPi (e.g. the ASUS UGen300 USB AI Accelerator).

> the idea of miniaturising

If you aren't ditching the laptop you aren't miniaturizing, just splitting into discrete specialized components.
noodletheworld 10 hours ago | parent | prev
It is neat, and at 32GB it might be useful. Almost nothing useful runs in 8. This is the problem with this gen of “external AI boards” floating around: 8, 16, even 24 GB is not really enough to run much that's useful, and the workarounds (i.e. offloading to disk) are impractically slow. Forget running a serious foundation model, or any kind of realtime thing. The blunt reality is that the fast, high-memory GPU systems you actually need to self-host are really, really expensive. These devices are more optics and dreams (“it'd be great if…”) than practical hacker toys.
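The memory claim is easy to sanity-check with back-of-envelope arithmetic: weights alone for a dense model take roughly (parameter count × bits per weight ÷ 8) bytes, before KV cache and runtime overhead. A minimal sketch (the function name and the chosen model/quantization sizes are illustrative, not from the thread):

```python
# Back-of-envelope: GB of memory needed just to hold dense model weights,
# ignoring KV cache, activations, and runtime overhead.
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    # params * bits -> total bits, /8 -> bytes, /1e9 -> GB
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for params in (7, 13, 70):
    for bits in (16, 8, 4):
        print(f"{params:>3}B @ {bits:>2}-bit: {weight_gb(params, bits):6.1f} GB")
```

Even at aggressive 4-bit quantization, a 7B model needs ~3.5 GB for weights (tight on an 8 GB board once the OS and KV cache are counted), 13B needs ~6.5 GB, and 70B needs ~35 GB — which is why 32 GB is roughly where larger models start to become feasible.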