tengada1 15 hours ago

My concern would be if the relationship between model intelligence and inference cost were altered very significantly. I feel like we got lucky that AI isn't arbitrarily scalable in a single instance

(i.e., if you could run a single LLM on an entire datacenter and it immediately became a super genius, versus running it on the minimum viable hardware, i.e., some form of quantization on a local machine).

Obviously there's a sort of Goldilocks zone / most appropriate substrate for an LLM to run on somewhere between those two extremes (a small cluster of tightly coupled flagship GPUs).

So luckily enough, the economics appear to work out such that it's at least conceptually viable for private members of the public to afford access to the same order of magnitude of LLM intelligence. But we're already seeing some departure from that.

My concern would be if this curve were altered significantly by a new algorithmic approach, beyond or instead of Transformers, such that someone with $200,000 to spare could achieve a completely categorically different quality of work and massively magnify their existing wealth advantage. This would be a threat of the sort being discussed above, namely a pathway to a severe form of modern feudalism.