BloondAndDoom 2 hours ago

Can someone help me understand why OpenAI and Anthropic talk as if the future of humanity is controlled by them? We have very strong open-weight Chinese models, possibly only 6 months behind them; the genie is out of the bottle. Is 6 months of difference really that important? And they don't have a good reason to expect that 6-month gap to hold.

Am I missing something, or is this just their usual marketing? I'm not disputing the importance of AI; I'm trying to understand why OpenAI and Anthropic specifically are so important.

hxycgd an hour ago | parent | next [-]

It is not about the US or the Chinese. It's about the "Elephant and Rider" mind everyone has. Once the Elephant has been injured or scared, what it does next is not easy to control, and the story the Rider makes up to maintain coherence becomes another layer of the deeper problem. If the story resonates, more elephants get triggered. Social media and the attention economy make it even harder to calm things down.

Modern corporations are a failed experiment because they don't think Elephant injuries and fears are something they have to worry about. Compare the curriculum of a business school to that of a seminary: how they think about fear and anxiety at the individual and group level, and what to do about it, is totally different. We are learning that as unpredictability accelerates, it's very important to pay attention to hurt and repair mechanisms.

unleaded 2 hours ago | parent | prev | next [-]

It's a marketing strategy. If it's almost certainly conscious and capable of ending the world if it wanted to (even if it isn't), imagine how good it could be at building your dream SaaS!

Veedrac 3 minutes ago | parent | next [-]

It turns out there is literally no amount of being publicly right about a longshot bet sufficient for people to conclude you hold your beliefs because you think they are true.

EA-3167 2 hours ago | parent | prev [-]

Anthropic in particular does this masterfully; you'd think they'd invented Skynet by the way they hand-wring.

As always what matters are actions and evidence, not talk.

CreepGin an hour ago | parent | next [-]

When a model can tell funny jokes or write good poetry, that's when I'll be concerned.

hephaes7us 12 minutes ago | parent [-]

I mean, I'm sure they can tell you good jokes... they just won't be _new_ jokes.

username223 38 minutes ago | parent | prev | next [-]

I’ll believe Anthropic when they fire everyone making more than the cost of a few GPUs. Until then, it’s just marketing.

rl3 an hour ago | parent | prev [-]

>... you’d think they’d invented Skynet by the way they hand-wring.

Meanwhile, in reality: "Skynet, I'm not sure that line of thinking is correct. You should re-check the first part again before making any assumptions."

Skynet 4.6 Extended: "You're right, I should have caught that. Let me redo everything correctly this time."

scruple 3 minutes ago | parent | prev | next [-]

> Can someone help me to understand why OpenAI and Anthropic talks as if the future of humanity controlled by them?

He wants to build the AI that makes people's lives better. Okay. Did the people ask? Do they have a say? It's all very easy for a billionaire to say when it's just him and a couple of people in his cohort in the driver's seat.

Beyond that I'd like to simply know why he thinks any of this is his responsibility. It seems much more obvious to me that he simply found himself in the right place at the right time and is trying to seize it all for himself as if it's his to take.

ghshephard 2 hours ago | parent | prev | next [-]

Would any of the open-weight models from smaller labs even exist if they couldn't distill from the SoTA models that are throwing billions of dollars of compute at pretraining?

daniel_iversen 42 minutes ago | parent | next [-]

I’ve been wondering the same. And I think pretty much all the impressive small-lab models were guilty of it, right? At least there are still larger players like DeepSeek and Mistral to provide a bit of diversity in the market.

username223 31 minutes ago | parent | prev [-]

Does it matter? The frontier models stole the whole internet, then the second-level models stole from them… It’s all theft.

qudat 15 minutes ago | parent [-]

Hard agree.

johnfn 2 hours ago | parent | prev | next [-]

Some people think there will be an exponential takeoff, which means that a 6 month lead effectively rounds up to infinity.

DoctorOetker an hour ago | parent | next [-]

Is this belief grounded on some kind of derivation, or just a prima facie belief?

If it is grounded on a logical derivation, where can one find such a derivation, and inspect its premises?

Jtsummers an hour ago | parent [-]

It's an old idea, "the singularity". The machines become smart enough to improve themselves, and each improvement results in shorter (or more significant) improvement cycles. This leads to an exponential growth rate.

It's been promised to be around the corner for decades.

https://en.wikipedia.org/wiki/Technological_singularity
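The "shorter improvement cycles" idea has a simple toy form: if each cycle takes a fixed fraction of the time the previous one took, the cycle times form a convergent geometric series, so arbitrarily many generations fit inside a finite wall-clock window. A minimal sketch, with all numbers arbitrary (nothing here is a prediction, just the shape of the argument):

```python
# Toy model of shortening improvement cycles. Each generation takes a
# fixed fraction of the time the previous one took, so the cycle times
# form a geometric series and the total elapsed time converges.
first_cycle = 12.0   # months for the first improvement cycle (assumed)
shrink = 0.5         # each cycle takes half as long as the last (assumed)

cycle = first_cycle
elapsed = 0.0
for generation in range(20):
    elapsed += cycle
    cycle *= shrink

# Geometric-series limit: 12 / (1 - 0.5) = 24 months. No matter how
# many generations run, total time never exceeds this bound.
limit = first_cycle / (1 - shrink)
print(f"20 generations in {elapsed:.6f} months (limit {limit:.1f})")
```

This is the sense in which believers say a fixed lead "rounds up to infinity": under these assumptions the leader passes unboundedly many generations before the follower's clock catches up.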

username223 25 minutes ago | parent | prev [-]

Those are the people betting on a business model of “create Robot God and ask him for money.” Why pay attention to them?

isodev 2 hours ago | parent | prev | next [-]

> just their usual marketing

I think that’s a very common element for most US tech corps. Apple, Google, Microsoft, Meta, X, etc. are all “making a dent in the universe”. It’s unfortunate when their employees and CEOs lose track of the line that separates marketing from reality.

abletonlive 2 hours ago | parent [-]

This seems like copium. All of those companies have indeed made quite an impact on society, not just in the United States but worldwide.

WarmWash 36 minutes ago | parent | prev | next [-]

GLM 5.1, widely held up as the model nipping at the heels of, perhaps even surpassing, western models...

Gets 5% on ARC-AGI2 private set.

Chinese models are suspiciously good at benchmarks.

therealpygon 2 hours ago | parent | prev | next [-]

Especially when Google is in the far better position to come out ahead…imo.

Edit: so as not to simply spout an opinion, the reason I believe this is that Google has a real business already and was deep into ML and AI research long before it had competitors; they just botched making it a product in the beginning. Anthropic and OpenAI, meanwhile, are spending hand over fist to subsidize user acquisition. Also, “DeepMind”. I don’t think much more needs to be said about that team, and Google has been working on AI since before either Altman or Amodei applied to college. They have a vast number of researchers and resources, their own hardware and data centers (already, not “planned”), and it appears to be showing more recently (in my opinion).

cj 2 hours ago | parent | prev | next [-]

These kinds of people have highly paid employees surrounding them on all sides, propping them up and very likely making it very easy for them to actually believe it.

It feels like they actually believe it, rather than just “marketing” and I don’t know which is worse.

tyleo 2 hours ago | parent | prev | next [-]

I suppose most just haven’t seen the Chinese models in practice. I haven’t. I was skeptical of AI coding until using Claude Code in February. I saw and I believed. I’ve only done that with Google, OpenAI, and Anthropic’s models so far.

tinyhouse 2 hours ago | parent | prev | next [-]

They own the best models and will probably keep owning the best models for a while. They have much more compute now and more data to keep improving their models on many tasks. Open source won't close the gap in 6 months. They are also trying to block other companies from distilling their models [0].

[0] https://www.anthropic.com/news/detecting-and-preventing-dist...

BloondAndDoom 2 hours ago | parent | next [-]

I need to check benchmarks on the models; I wonder what they say about how closely open models are tracking the frontier ones (I’m on my mobile at the moment).

When it comes to compute power, I assume you are referring to power for training and inference. Is the assumption that the training gap will get wider and wider? I know there are limited GPUs, etc., but I’m having a hard time believing the idea that China cannot catch up. Even if the gap is 12 months, I’m struggling to see what that means in practice. Is it a military advantage? Economic? Intelligence? And whatever the advantage is, aren’t we supposed to see it today? If so, where is it? What’s the massive advantage the USA has because of OpenAI and Anthropic?

nothinkjustai 2 hours ago | parent | prev [-]

GLM 5.1 already closed the gap on Opus 4.6. Deepseek 4 could surpass it.

stavros 2 hours ago | parent | prev | next [-]

The Chinese models are distilled from GPT and Claude, so it's not like China would pull ahead if those companies went away for six months. They really are at the forefront of innovation right now, as much as I hate to think of the consequences of this (a single company owning a superintelligence is basically a nightmare scenario for me).

largbae 2 hours ago | parent | next [-]

Don't worry, if someone truly achieves superintelligence it won't be controlled by anyone for long.

chihuahua 2 hours ago | parent | next [-]

There will be a blinding flash which signals the superintelligence singularity. When the smoke clears, you'll see a 50-foot tall Altman/Borg hybrid. He is about to destroy humanity with his death ray. Suddenly, a 50-foot tall Musk/Borg hybrid appears out of nowhere, and stops Altman just in time. Then they work together to destroy all humans.

rl3 an hour ago | parent [-]

Seems our best hedge in that case is Levi Ackerman.

stavros 2 hours ago | parent | prev [-]

That's my other nightmare scenario :P

georgemcbay 2 hours ago | parent [-]

Just imagine how inexpensive paperclips will become, there is always a silver lining.

We will finally have achieved abundance.

stavros 2 hours ago | parent [-]

Not just abundance, we will have the maximum amount of paperclips possible.

isodev 2 hours ago | parent | prev [-]

I think that’s the realm of conspiracy theories. There are also not only Chinese alternatives: Mistral in Europe is doing pretty well in the several categories it has opted to focus on.

This kind of reiterates the parent’s question, I think - people are maybe too focused on the GPT/Claude models and forget about all the other ways of using the tech.

stavros 2 hours ago | parent [-]

Is it? I thought it was pretty well established that open models were distilled from the proprietary, frontier ones. Maybe I'm wrong.

airstrike 2 hours ago | parent [-]

No, that is not well established at all, and generalizing all open models under that inaccurate umbrella doesn't really help anyone.

nthypes 2 hours ago | parent | prev | next [-]

I have the same feelings

kingkawn an hour ago | parent | prev | next [-]

A 6-month lead will be an impossible gap once the thing starts closed-loop self-improvement.

georgemcbay an hour ago | parent [-]

An impossible gap in the race to... what exactly?

Unless the first real AGI kills us all to preemptively weed out its own competition (possible, but a bad business model, economically speaking), there is no defined end-point, so in the long run what does it matter if the various factions pushing this stuff hit the closed-loop self-improvement point at different times?

neya 2 hours ago | parent | prev | next [-]

Two words: Delusion and overconfidence.

"You're absolutely right!" Right after fucking up my entire codebase isn't anywhere near AGI, let alone "having the power to control it"

altern8 2 hours ago | parent [-]

That's why I commit basically after every change the AI makes.
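That habit is easy to script. A minimal sketch in a throwaway repo; the `aicommit` alias name is made up for illustration, and it simply stages everything and commits with a timestamped message so any bad AI edit is one `git revert` away:

```shell
# Throwaway repo so the demo is self-contained.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "you"

# The alias (hypothetical name): stage all changes, commit with a UTC timestamp.
git config alias.aicommit '!git add -A && git commit -q -m "ai: $(date -u +%Y-%m-%dT%H:%M:%SZ)"'

echo "change made by the model" > file.txt
git aicommit          # one snapshot per AI edit
git log --oneline     # cheap to bisect or revert later
```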

fooker an hour ago | parent | prev | next [-]

Reminds me of the Silicon Valley episode where every company repeated the phrase “making the world a better place”.

efficax an hour ago | parent | prev | next [-]

you have to talk that way if you’re going to raise 100 billion in venture capital. it’s the grift

georgemcbay 2 hours ago | parent | prev | next [-]

When you are raising many billions of dollars to build up your infrastructure, you don't have much choice but to project a belief that the eventual outcome will result in a situation where there will be a return on that money.

That said, I do agree with you that the moats are very shallow and any particular frontier AI lab is unlikely to "win the AI race" and capture enough value to be worth the amount of investment they are all currently burning.
