bigyabai 4 days ago
You make a good point, but you're still not refuting the original argument. The demand for high-power AI still exists, and the products Apple sells today don't come close to meaningfully replacing it. If you own an iPhone, you're probably still using ChatGPT. Speaking to your PC gaming analogy, there are render farms for graphics - they're just used for CGI and non-realtime use cases. What there isn't a huge demand for is consumer-grade hardware at datacenter prices. Apple found this out the hard way by shipping the Xserve prematurely.
evilduck 3 days ago
> Speaking to your PC gaming analogy, there are render farms for graphics - they're just used for CGI and non-realtime use cases. What there isn't a huge demand for is consumer-grade hardware at datacenter prices.

Right, and that's despite the datacenter hardware being far more powerful and, for most people, cheaper per hour than the TCO of owning your own gaming rig. People still want to own their computer and to take network connectivity and latency out of the equation, even when it's generally a worse value proposition. You don't see any potential parallels here with local vs. hosted AI?

Local models on consumer-grade hardware, far inferior to buildings full of GPUs, can already competently do tool calling. They can already generate tok/sec far beyond reading speed, and that's without the hardware serving hundreds of requests in parallel. Again, it just doesn't seem far-fetched to think that the public will move away from paying for more subscription services when something that can basically run on what they already own is good enough. Hosted frontier models won't go away, since they _are_ better at most things, but can all of these companies sustain themselves as businesses if they can't keep encroaching into new areas to seek rent?

For the average ChatGPT user, local Apple Intelligence and Gemma 3n basically already have the skills and smarts required; they just need more VRAM, access to RAG'd world knowledge, and network access to keep up.
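The tool-calling claim above is easy to make concrete. A minimal sketch, assuming a local OpenAI-compatible server such as Ollama's (the endpoint `http://localhost:11434/v1`, the model name `llama3.1`, and the `get_local_time` tool are all illustrative assumptions, not anything from the thread): the model is offered a JSON tool schema, and any tool calls it emits are dispatched back to plain local Python functions.

```python
import json
from datetime import datetime

# Tool schema offered to the local model (hypothetical example tool).
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_local_time",
        "description": "Return the current local time as an ISO 8601 string.",
        "parameters": {"type": "object", "properties": {}, "required": []},
    },
}]

def get_local_time() -> str:
    return datetime.now().isoformat(timespec="seconds")

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the matching local function."""
    registry = {"get_local_time": get_local_time}
    fn = registry[tool_call["name"]]
    kwargs = json.loads(tool_call.get("arguments") or "{}")
    return fn(**kwargs)

# With a local server running, the round trip would look roughly like:
#   from openai import OpenAI
#   client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
#   resp = client.chat.completions.create(
#       model="llama3.1",  # assumed model name
#       messages=[{"role": "user", "content": "What time is it?"}],
#       tools=TOOLS,
#   )
#   for tc in resp.choices[0].message.tool_calls or []:
#       print(dispatch({"name": tc.function.name,
#                       "arguments": tc.function.arguments}))
```

The point of the sketch is that the orchestration side is trivial; whether the local model reliably emits well-formed tool calls is the hard part, and small open models have only recently gotten competent at it.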
pdimitar 3 days ago
> The demand for high-power AI still exists, the products that Apple sells today do not even come close to meaningfully replacing that demand.

Correct, though to me it seems this comes at the price of narrowing the target audience (i.e. devs and very demanding analysis and production work). For almost everything else, people just open a bookmarked ChatGPT / Gemini link and let it flow, no matter how erroneous the output might be.

The AI companies have been burning a lot of bridges over the last 1.5 to 2 years; they are solidifying the public's idea that they only chase subscription income as hard as they can without providing more value.

Somebody finally had the right idea some months ago: sub-agents. It took them a while, and it was obvious right from the start that just dumping 50 pages on your favorite LLM is never going to produce impressive results. I mean, sometimes it does, but people do a really bad job of quickly detecting when it does not; they are slow to correct course and just burn through tokens and their own patience.

Investors are gonna keep investor-ing; they will of course want the paywall, and for there to be no open models at all. But happily, the market and even general public perception are pushing back.

I am really curious what will come out of all this. One prediction: local LLMs that secretly transmit to the mothership, so the work of the AI startup is partially offloaded to its users. But I am known to be very cynical, so take this with a spoonful of salt.