keiferski a day ago
What's the serious counter-argument to the idea that a) AI will become more ubiquitous and inexpensive, and b) economic/geopolitical success will be tied in some way to AI capability? I do agree with him on that front. The real question is whether the AI industry will end up like airplanes: a massively useful technology that somehow isn't a great business to be in. If that is the case, framing OpenAI as a nation-bound "human right" is certainly one way to ensure its organizational existence if the market becomes too competitive.
roxolotl a day ago
I think the most compelling arguments are: (1) LLMs aren't AI. They are language-processing tools that are highly effective, and it turns out language is a large component of intelligence, but language alone isn't intelligence. (2) Intelligence isn't the solution to, or the bottleneck for, the world's most pressing problems: famines are political, and we already know how to deploy clean energy. That doesn't quite answer your question, but it suggests two things. First, the time horizon to real AI is far longer than sama is currently considering. Second, AI won't be as useful as many believe.
beeflet a day ago
Maybe AI will become more ubiquitous. But I predict LLMs will be capped by the amount of training data available in the wild.