keiferski a day ago

What's the serious counter-argument to the idea that a) AI will become more ubiquitous and inexpensive and b) economic/geopolitical success will be tied in some way to AI ability?

Because I do agree with him on that front. The question is whether the AI industry will end up like airplanes: massively useful technology that somehow isn't a great business to be in. If indeed that is the case, framing OpenAI as a nation-bound "human right" is certainly one way to ensure its organizational existence if the market becomes too competitive.

roxolotl a day ago | parent | next [-]

I think the most compelling arguments are:

LLMs aren’t AI. They are language-processing tools that are highly effective, and it turns out language is a large component of intelligence, but language alone isn’t intelligence.

Intelligence isn’t the solution or bottleneck to solving the world’s most pressing problems. Famines are political. We know how to deploy clean energy.

Now that doesn’t quite answer your question but I think it says two things. First that the time horizon to real AI is still way longer than sama is currently considering. Second that AI won’t be as useful as many believe.

keiferski a day ago | parent [-]

Right, but if you just replace AI with LLM in my comment, I'm not sure it really changes. "Real AI" might not be necessary to the two things I wrote.

I agree that all of the predictions regarding AI are probably overblown if they're just LLMs. But that might not matter if we're just talking about geopolitics.

roxolotl a day ago | parent [-]

Yea that’s fair. And if there’s enough money behind something, even if it’s not great, it can still bend the whole world. I think people take comments like yours, at least I did, as slanted toward actually asking something like “what’s the argument against AI 2027”. Which isn’t fair, and is why the hype can be so damaging to honest discourse.

So I can’t think of a good argument that this isn’t going to change the world, even if the change looks more like the AI-as-a-normal-technology[0] argument, or simply a slopocalypse.

0: https://knightcolumbia.org/content/ai-as-normal-technology

beeflet a day ago | parent | prev [-]

Maybe AI will become more ubiquitous. But I predict LLMs will be capped by the amount of training data present in the wild.

MountDoom a day ago | parent | next [-]

Ubiquity doesn't depend on the AI getting much better as much as it depends on the computational cost going down (i.e., better hardware + software optimizations). When you can put a ChatGPT-class model locally on every desktop or phone, people will use it even if the accuracy or safety isn't quite there.

Just look at how people are using Grok on Twitter, or how they're pasting ChatGPT output to win online arguments, or how they're trusting Google AI snippets. This is only gonna escalate.

That said, this is probably not the future Sam Altman is talking about. His vision for the future must justify the sky-high valuations of OpenAI, and cheap ubiquity of this non-proprietary tech runs counter to that. So his "ubiquity" is some sort of special, qualified ubiquity that is 100% dependent on his company.

beeflet a day ago | parent [-]

>When you can put a ChatGPT-class model locally on every desktop or phone, people will use it even if the accuracy or safety isn't quite there.

Will they though?

>Just look at how people are using Grok on Twitter, or how they're pasting ChatGPT output to win online arguments, or how they're trusting Google AI snippets. This is only gonna escalate.

But will they though?

nradov a day ago | parent | prev | next [-]

That's why the competitive moat for frontier LLMs is access to proprietary training data. OpenAI and their competitors are paying fortunes to license private data sets, and in some cases even hiring human experts to write custom documents on specific topics as additional training data. This is how they hope to stay ahead of open-source alternatives.

kulahan a day ago | parent | prev | next [-]

I think it'll be slightly different: without clear marking of AI-generated content, it'll be effectively impossible to find new content that isn't already sold to you in pristine packages, and even those you just sorta have to trust.

Of course, you can't train LLMs on LLM-generated content.

bilbo0s a day ago | parent | prev [-]

I'm more worried that publicly available LLMs "will be capped by the amount of training data present in the wild". But private LLMs, available only to the wealthy and powerful, will have additional, more pristine and accurate, data sources made available to them for training.

Think about the legal field. The masses tend to use Google, whereas the wealthy and powerful all use LexisNexis. Who do you think has been winning in court?

kulahan a day ago | parent [-]

I don't think the masses are representing themselves in court... and even then, legal text is so poorly designed it's borderline obfuscation.