adornKey 6 hours ago
Once everybody has a decent amount of VRAM they can just run local AIs, and the need to wade through ad-laden search results will fizzle. So of course Google is desperate to grab a new monopoly. People haven't realised yet that local AIs are fast and produce good results on pretty average hardware. If Google doesn't manage to grab a new monopoly, it will be history. But the price spikes don't really need a nefarious plot: there is a serious lack of VRAM deployed out there, and filling that gap will take quite some time. Add a nefarious plot on top of that and the situation will most likely get even worse...
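A rough back-of-envelope sketch of what "pretty average hardware" can mean here (the 4-bit quantization level and the model sizes are illustrative assumptions, not figures from the comment):

    # Back-of-envelope VRAM estimate for running a quantized LLM locally.
    # Model sizes and the 4-bit quantization level are illustrative
    # assumptions, not figures taken from the thread.

    GIB = 1024**3

    def weight_footprint_gib(n_params: float, bits_per_weight: int) -> float:
        """Approximate memory needed just for the weights, in GiB."""
        return n_params * bits_per_weight / 8 / GIB

    for n_params, label in [(7e9, "7B"), (13e9, "13B"), (70e9, "70B")]:
        gib = weight_footprint_gib(n_params, bits_per_weight=4)
        print(f"{label:>3} model @ 4-bit: ~{gib:.1f} GiB of VRAM for weights")

    # Approximate output:
    #  7B model @ 4-bit: ~3.3 GiB of VRAM for weights
    # 13B model @ 4-bit: ~6.1 GiB of VRAM for weights
    # 70B model @ 4-bit: ~32.6 GiB of VRAM for weights

Under those assumptions, a 7B-class model fits on an 8 GB consumer GPU with room left over for the KV cache and activations, which is the sense in which average hardware already suffices.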
mrob 6 hours ago
LLM inference is mostly read-only, so high-bandwidth flash looks like it could provide huge cost savings over VRAM. It's not in commercial products yet, but there are already working prototypes. Previous HN discussion:
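A minimal sketch of that read-mostly access pattern, using numpy's memmap (the file name, dtype, and tensor shape are made up for illustration): the weights are mapped read-only and paged in on demand, so the backing store only needs high read bandwidth, which is the property such flash would supply.

    # Minimal sketch of the read-mostly access pattern in LLM inference.
    # The file name, dtype, and shape are illustrative assumptions.
    import numpy as np

    SHAPE = (4096, 4096)

    # Create a stand-in weight file so the sketch is self-contained
    # (a real deployment would ship pre-trained weights instead).
    np.random.rand(*SHAPE).astype(np.float16).tofile("model_layer0.bin")

    # Map the weights read-only: pages are fetched from the backing store
    # on demand and never written back, so that store only needs high
    # read bandwidth.
    weights = np.memmap("model_layer0.bin", dtype=np.float16,
                        mode="r", shape=SHAPE)

    x = np.random.rand(SHAPE[1]).astype(np.float16)

    # A forward step only reads the weights; the small mutable state
    # (activations, KV cache) lives in ordinary memory.
    y = weights @ x
    print(y.shape)  # (4096,)

The design point is that nothing in the hot loop ever writes to the weight store, so swapping VRAM for cheaper read-optimized flash changes the cost structure without changing the access pattern.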