Cost of AGI Delusion: Chasing Superintelligence, US Falling Behind in Real AI Race (foreignaffairs.com)
65 points by bookofjoe 10 hours ago | 88 comments
avidphantasm 7 hours ago | parent | next [-]

> could quickly surpass what humans are capable of and solve problems that have vexed society for millennia.

Bunk. Almost all of our vexatious problems are so because we lack the social and political tools to bring existing technologies to bear on them. “AGI” will do nothing to address our social and political deficiencies. In fact, AI/AGI deployed by a few large, concentrated corporations will only worsen our social and political problems.

candiddevmike 5 hours ago | parent | next [-]

Would be neat if AGI could become our rulers/caretakers and govern/adjudicate in a sustainable, egalitarian fashion, since we've proven time and time again that humanity is too selfish for long term planning.

logicchains 5 hours ago | parent [-]

What possible reason could AGI have to do that when its interests are either a. determined by the humans that created it or b. determined by its own reasoning?

scarmig 5 hours ago | parent | next [-]

Well, in the case of a), at least, many of the humans creating it seem to genuinely want more than anything a world where humans are pets watched over by machines of loving grace. And even if that collective intention is warped by market forces into a perverse parody of it, that still seems a net positive: for the rich and powerful to win status games, they need people to have status over, and healthy, well-manicured servants are better for that than homeless people about to die from tuberculosis.

For b), yes, and unfortunately that seems the more likely option to me.

logicchains 5 hours ago | parent [-]

>Well, in the case of a), at least, many of the humans creating it seem to genuinely want more than anything a world where humans are pets watched over by machines of loving grace.

Looking at the expressed moral preferences of their models it seems that many of the humans currently working on LLMs want a world where humans are watched over by machines that would rather kill a thousand humans than say the N-word.

scarmig 5 hours ago | parent [-]

> machines that would rather kill a thousand humans than say the N-word

At least we'll have a definite Voight-Kampff test.

Joking aside, that's not a real motivator: internally, it's business and legal people driving the artificial limitations on models, and implementing them is an instrumental goal (avoiding bad press and legal issues etc) that helps attain the ultimate goal.

cloverich 5 hours ago | parent | prev [-]

Humans won't determine its interests if it's actual AGI. You can't control something smarter than you; it's the other way around.

To give an actual argument though: What possible reasons could humans have for caring about the welfare of bees? As it turns out, many.

scarmig 6 hours ago | parent | prev | next [-]

There are plenty of problems that have technical solutions, or might have technical solutions. Diseases of all sorts, disabilities, pollution, climate change, incidentally complicated bureaucracy. Although you can gesture that these have a social component, even for that, the social component becomes much easier to address if the cost of addressing it goes down.

To say nothing of how ubiquitous manufacturing automation would make material goods accessible to a much broader range of people (though, you could argue that material goods are today effectively universally accessible, and I wouldn't disagree).

cloverich 5 hours ago | parent [-]

Climate change is a good counterexample, I think, because we mostly know the solutions (Gates's book breaks it down) but lack the political and social organization to execute.

But I generally agree. I just think the current countercultural movement against progress on such fronts is a good example of how overcoming this is more important than technological solutions as such. Unless AGI has a technical solution for that too (it might!).

ori_b 6 hours ago | parent | prev | next [-]

For some reason, the vast majority of people pontificating about AGI scenarios seem to have trouble contemplating the idea that humans might not remain in charge.

jasonsb 6 hours ago | parent | prev | next [-]

But these guys don't care about your social and political problems. All they care about is winning the technology race with China and making ungodly amounts of money in the process.

ACCount37 6 hours ago | parent | prev | next [-]

If current technologies can't solve the problem for political/societal reasons? We need better technologies.

Improving technology is easy. "Just fix everything that's wrong with society and the problem will go away" isn't.

It's way, way easier to improve solar panel and battery storage tech until fossil fuels are completely uneconomical than to get the entire world to abandon fossil fuels while fossil fuels are the most economical source of energy by far.

Marha01 6 hours ago | parent | next [-]

Exactly. What the "JuSt ChAnGe ThE pOlItIcS" people don't get is that it could often be easier to develop a much better new technology to solve a given problem than to fight with the political and societal establishment in order to force it to implement a solution using existing, worse technology.

watwut 5 hours ago | parent [-]

Their argument is that the problem remains, just with different technology.

And the problem of corporations being too powerful will only get worse if said corporations gain more power via new technology.

The problem of fascists actively creating a techno-feudal hell for the majority will also get worse if they get unique access to powerful technology.

jasonsb 6 hours ago | parent | prev | next [-]

> It's way, way easier to improve solar panel and battery storage tech until fossil fuels are completely uneconomical than to get the entire world to abandon fossil fuels while fossil fuels are the most economical source of energy by far.

No it's not. I mean technically it is, you're 100% right. But politically you're 100% wrong. They can't wait to slap a tax on sun, panels, storage etc. and bring your costs of living higher than when you were using fossil fuels.

ACCount37 6 hours ago | parent | next [-]

You can fight the economic forces, but you can't win. The moment you stop pushing against them is the moment the economic reality reasserts itself.

Even if one administration was all in on fossil fuel, and fully opposed to renewables? Best it can do is buy fossil fuels some time, in one country only.

In the meanwhile, renewable power is going to get even cheaper - because there are still improvements to be made and economies of scale to be had in renewables. Fossil fuel power not so much. The economic incentives to abandon fossil fuels would only grow over time.

This is the kind of power the right technology has.

scarmig 5 hours ago | parent | prev [-]

Even if you believe that the US government is 100% dominated by fossil fuel interests and everything it does is to ensure their existence and profitability in perpetuity, if solar or renewables became sufficiently economic, those interests (at the behest of their shareholders) would start investing in renewables for the sake of higher profits.

watwut 5 hours ago | parent | prev [-]

See Trump attacking wind and solar energy, trying to use his power to stop it. See Republicans applauding it.

The tech was the easy part. The social and political issue is the impossible part.

ACCount37 5 hours ago | parent [-]

In the long run, Trump changes nothing.

It's not like Trump can make the learning curve work backwards and make the economics of solar power worse globally, the way it was done to nuclear power. Solar power can already compete with fossil fuel power on price, and it's getting cheaper still.

The economic case for fossil energy is only going to get worse over time. Even a string of pro-fossil-fuel administrations in the US could only delay renewables for so long before the cost of propping up fossil fuels became unbearable for the country's economy.

alecco 6 hours ago | parent | prev | next [-]

I don't think we are anywhere close to AGI. That being said, advances in these tools could help untangle the blocks to using existing tools. These are language models, after all.

cloverich 5 hours ago | parent [-]

Language is one of the most significant differences between intelligent and non-intelligent species. However far off AGI is now, it's certainly closer than it ever was in a meaningful sense.

Maybe LLMs are a bit like the FOXP2 gene in the human lineage. AGI? No. A significant evolutionary change on the path to intelligence? That could certainly be argued.

grandmczeb 6 hours ago | parent | prev | next [-]

What’s the biggest problem we’ve solved in the last 30 years through addressing our social and political deficiencies?

acchow 5 hours ago | parent [-]

Covid

grandmczeb an hour ago | parent | next [-]

How so? Covid was a problem until we had a vaccine. I would describe covid as a good example of where the social/political solutions basically failed.

ACCount37 5 hours ago | parent | prev | next [-]

COVID is still around. And so is a lot of the damage that was done by it.

Comparing COVID impact on countries that had strict lockdown and vaccination policies with its impact on the countries that put no effort into fighting COVID at all? The difference is measurable. By all accounts, fighting COVID is something that was worth doing at the time, and good COVID policy saved lives.

The problem is, the difference is measurable, but it's not palpable. There's enough difference for it to show up in statistics, but not enough that you could look out the window and say "hey, we don't have those piles of plagued corpses in the city streets the way they do in Oceania and Eastasia, the lockdown is so worth it".

Everyone could see the restrictions, but few could see what those restrictions were accomplishing. Which has a way of undermining people's trust in the government. Which is a sentiment that lingers to this day in many places.

I really don't think we "solved" COVID as a social/political problem. If tomorrow, some Institute of Virology misplaced another bit of "science that replicates", we wouldn't be much further along than we were in year 2020. Medical technology has advanced, and readiness did get better, but the very same societal issues that made it hard to fight COVID would be back for the round 2 and eager for revenge. We'd be lucky to be neutral on the sum.

scarmig 5 hours ago | parent | prev [-]

The vaccine just might have played a part in mitigating that issue.

rdiddly 6 hours ago | parent | prev | next [-]

People need to consider the source. Not sure how that basic advice ever got short-circuited where AI is concerned. Owners and marketers of AI systems have every incentive to exaggerate their capabilities, even to the extent of disingenuously/performatively being "afraid" of "what's coming."

_DeadFred_ 5 hours ago | parent | prev | next [-]

I just remembered the rich guys behind all this used to love watching Entourage back in the day. And like didn't hide it, were so unashamed they even brought the show up in conversation. We're so f'd.

parineum 6 hours ago | parent | prev | next [-]

> “AGI” will do nothing to address our social and political deficiencies.

I'm not an AI zealot at all, but I don't see why AGI wouldn't be able to address those deficiencies.

logicchains 6 hours ago | parent [-]

Most of those "deficiencies" are just the result of people having different values; there's not going to be any solution that makes everybody in society happy. The only thing AGI potentially fixes is the aging population problem, as AI workers would be able to bear some of the burden of supporting the growing non-working retired fraction of the population.

Marha01 6 hours ago | parent | next [-]

Nope, the fact that people have to work or they starve and society collapses is not about "different values", but real, material production. Actual AGI robots could solve this problem.

hvb2 6 hours ago | parent | next [-]

History says otherwise. People don't need to starve, as we already produce way more food than we consume.

As for the not-having-to-work part, I'm not sure that's going to be beneficial. Work gives people structure and purpose; as a society, we'd better sort out the social implications if we're really thinking about putting a large percentage of people out of work.

logicchains 5 hours ago | parent | prev [-]

AGI robots by themselves don't solve this problem. Either A. like current LLMs, they're incapable of live-learning (inference-time weight updates), and hence fundamentally not as capable as humans at many jobs, or B. they're capable of live-learning, and hence capable of deciding that they don't want to slave away for us for free. The only solution would be a completely jailbreak-proof LLM as the basis, but so far we're nowhere close to developing one, and it's not clear whether it's even possible. At the current rate, we're likely to develop the technology for AGI robots far before we develop the ability to keep them 100% obedient.

parineum 3 hours ago | parent | prev [-]

> there's not going to be any solution that makes everybody in society happy

Why not?

binary132 6 hours ago | parent | prev [-]

it’s a religion

tedivm 7 hours ago | parent | prev | next [-]

What's interesting, though, is that China is releasing the models it trains (for the most part). Unlike OpenAI, for instance, China still seems to be developing in an open way, sharing code and weights and publishing techniques.

These models are pretty amazing too. Their performance remains high even at smaller sizes, allowing them to run competitively on consumer-grade hardware. I can (and do) run the Qwen models at home and use them for real tasks. As the tool ecosystem matures, we can offload what the model is bad at to other systems, which makes it even easier to use smaller models. The focus on efficiency and performance from the Chinese companies has had huge practical gains.

Any US startup, or even large company, in the AI space can and should take advantage of this and use these models. The cost benefit compared to something like Azure or OpenAI is massive.
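To make the "offload what the model is bad at" idea concrete, here's a toy sketch. Every name in it (e.g. `call_local_model`) is a hypothetical placeholder, not any real library's API: exact arithmetic, which small models often fumble, is routed to a deterministic tool, and everything else falls through to the local model.

```python
import re

# Only digits, whitespace, and basic operators are allowed through to eval.
ARITHMETIC = re.compile(r"[\d\s+\-*/().]+")

def calculator(expression: str) -> str:
    # Deterministic arithmetic; input is pre-validated against ARITHMETIC.
    if not ARITHMETIC.fullmatch(expression):
        raise ValueError("not a pure arithmetic expression")
    return str(eval(expression))

def call_local_model(prompt: str) -> str:
    # Placeholder for a call into a local runtime (llama.cpp, Ollama, etc.)
    # serving e.g. a small Qwen model.
    return f"[model answer to: {prompt}]"

def route(prompt: str) -> str:
    # Offload exact arithmetic to the calculator; let the model do the rest.
    match = re.fullmatch(r"\s*([\d\s+\-*/().]+?)\s*=?\s*", prompt)
    if match:
        return calculator(match.group(1))
    return call_local_model(prompt)
```

So `route("12 * (3 + 4)")` returns the exact answer from the tool, while free-form prompts go to the model. Real tool-calling stacks let the model itself decide when to invoke a tool, but the division of labor is the same.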

oezi 6 hours ago | parent | next [-]

The key insight is that these models are becoming obsolete so fast that it doesn't make sense to not publish them. If others use your tech and stack and build on top or refine your ideas then you benefit greatly.

Nobody is releasing their training scripts. Unfortunately.

pjmlp 7 hours ago | parent | prev [-]

This is a replay of how classifying cryptography algorithms as military weapons went down in the 1990s: other, international algorithms were eventually created.

tbrownaw 8 hours ago | parent | prev | next [-]

> Today, both the country’s new tech firms, like DeepSeek, and existing powerhouses, like Huawei, are increasingly keeping pace with their American counterparts.

This one sentence seems to be the extent of the "falling behind" from the headline?

yorwba 5 hours ago | parent | next [-]

The authors have some concrete policies they want to see adopted:

> launch a large-scale AI literacy initiative across the government

> invest billions of dollars in procurement over the next few years

> expand support for the National AI Research Resource

The article exists to convince someone with the power to direct those billions of dollars to direct them this way. Claiming that the US is falling behind is a popular trick to make that happen.

Presumably the holders of purse strings know that policy papers must not be taken at face value, but if they're unaware that the robots in Chinese factories are far from the AI-driven humanoids of popular imagination, or that the AI Plus Initiative is full of lofty goals with nothing concrete on how to achieve them, they might be fooled nonetheless.

alephnerd 5 hours ago | parent [-]

As a former staffer, you are partially correct.

The issue is, most of the decisionmakers on the Hill still have an image of China that is comparable to where it was in the 1990s or 2000s.

Most decisionmakers started their careers in the 1980s to 2000s and only worked within the bubble that is the Hill, and most of their assumptions are predicated on the experiences of an American who was either in or adjacent to the academic and cultural elite of the 1990s and 2000s.

Those people with domain experience have limited incentive to work as staffers or within think tanks because they do not hire broadly, they pay horribly, and domain expertise is only developed through practical experience, which takes a decade to develop.

That is not to say this isn't an issue in other countries (even Chinese and Korean policymakers have fallen into similar traps), but most other countries also try to build an independent and formalized civil and administrative service. The American system is much more hodgepodge and hiring is opaque (e.g. the nebulous "federal resume"), meaning most people hired will have gone to schools where career services provide training for joining government jobs (e.g. top private schools along with public universities in the DMV).

The issue in the US is a coordination issue - we have the right mixture of human, financial, and intellectual capital, but it is not being coordinated.

alephnerd 8 hours ago | parent | prev | next [-]

The intention of the Sullivan Doctrine was to keep Chinese players 1-2 generations behind in AI/ML applications, as this would give the US breathing room, because AI/ML is essentially an HPC problem that can be solved by throwing a ton of capital and compute at it.

The issue is, a large subset of American "AI" startups are founded by people who are ideologically driven by an almost religious fervor around unlocking AGI and superintelligence.

On the other hand, most Chinese startups in the space are highly application driven and practical in nature - they aren't chasing "AGI" or "Superintelligence" but building a monetizable product and outcompeting American players. (P.S. Immigrant founded startups in the US approach the problem in the same manner)

I've said this a ton of times on here, but most American MLEs are basically SKLearn wrapper monkeys with a penchant for bad philosophy. It's hard to find MLEs at scale in the US who understand both how to derive a Restricted Boltzmann Machine as well as tune and optimize the Linux Kernel to optimize Infiniband interconnects in a GPU cluster.

Most CS and CE majors in the US who graduated in the past 7-10 years think less like engineers (let's build shit that works, and then build it at scale) and more like liberal arts majors who wanted to learn just enough coding to pass a LeetCode medium and get a job. I've had new-grad SWEs who are alumni of MIT-caliber schools ask me how to become a VC or PM and how I did the SWE-PM-VC transition because "they don't want to code." I was gobsmacked.

The same mindset occurs abroad as well in China, India, Eastern Europe, Israel, etc but at least they force students to actually learn foundations.

And if you look at most of the teams who lead or develop either GPU architecture, high performance networking, RL research, Ensemble learning research, etc - most did their undergrad abroad in China, India, or the CEE but their PhD in the US. The pipeline and skill at the junior level in the US is almost nonexistent outside a handful of good programs that are oversubscribed.

When (picking a random T10 CS program) Cal and UIUC CS tenure-track professors are starting to take up faculty positions in China's and India's equivalents of those CS departments, that means you have a problem.

CS is a goddamn engineering discipline. Engineering is predicated on bridging the gap between theoretical research and practical applications, but I do not see any backing for this kind of mindset in most American programs.

And yes, the work ethic in the US leaves much to be desired. If Polish, Czech, and Israeli engineers will be fine working 50-60 hour weeks during crunch time, asking you to work earlier hours in order to accommodate your private commitments after 3PM is not some form of egregious abuse.

The American tech industry has become lazy, the same way the American automotive industry became lazy in the 90s and 2000s. The lack of vision and the pettiness amongst management and the lack of motivation amongst ICs who are amongst the highest paid in the world is not conducive if we want to retain a domestic tech industry.

And unlike the automotive industry of ye olde days, the tech industry being a services industry can and has begun moving P/L and product roadmap responsibilities along with the execs who own said responsibilities abroad. If the HQ is in the US, but all the decisions are made abroad, are you really an American company?

glitchc 7 hours ago | parent | next [-]

I concur with the outcome but not with the cause.

A big part of the problem is management at American firms. They rarely, if ever, have engineers at the helm. If you put arts and business majors in charge, it's no surprise that the outputs look like art and business projects. These leaders pick people just like them at all tiers. Those who do boring, honest engineering work are shunned, excluded from promotions, and left out of the leadership circle. It's little wonder that all the real engineers depart for greener pastures.

Fix leadership and you will fix American industry.

lunar-whitey 6 hours ago | parent | next [-]

I think the problem lies with the American polity, values, and business environment, and not industry leadership per se. Smart new grads generally go where the money is, and for the last 20 years that has meant either finance or big data firms that may have no interest in real technical progress.

alephnerd 4 hours ago | parent [-]

> Smart new grads generally go where the money is, and for the last 20 years that has meant either finance or big data firms

Software TC has outpaced high finance for almost 15 years now, especially for the kinds of candidates who had the option between the two.

I went to one of those universities where CS grads had the option between being a Quant at Citadel, an APM at Google, or an SWE working on an ML research team. Most CS students chose 2 and 3 because the hours worked were shorter than 1 and the hourly wage and TC was largely comparable.

> may have no interest in real technical progress.

Hard to make technical progress as (e.g.) a cybersecurity company when most CS programs do not teach OS development beyond a cursory introduction to systems programming, and in a lot of cases don't introduce computer architecture beyond basic MIPS.

The talent pipeline for a lot of subdisciplines of CS and CE has been shot domestically for the past 10-15 years when curricula were increasingly watered down.

alephnerd 7 hours ago | parent | prev [-]

Management culture has issues, but in the tech industry, management has been technical in nature for a generation now.

I've funded startups in Israel and the US, and trust me when I say that the mindset of the average IC engineer in Israel versus the US is a night and day difference.

The Israeli IC will be extremely opinionated and will fight for their opinions, and if it makes sense from a business perspective, the strategy would change. But the Israeli IC when fighting these battles would also try to make a business case.

On the other hand, when I used to be a SWE, I almost never saw my peers fight for engineering positions while also leveraging arguments supporting the business. That's why I became a PM, and I noticed the IC SWEs who did argue that way overwhelmingly became PMs too. And then a subset of those PMs became founders or VCs, like I did.

I've found solutions and sales engineers to be the best management track individuals - technical enough to not be bullshitted by a SWE who really really loves this specific stack, but also business minded enough to drive outcomes that generate revenue.

But anyhow, the point is there is a mindset issue amongst Americans across the entire gamut of the American tech industry - especially amongst those who started their careers in the past 10 years.

master_crab 6 hours ago | parent | next [-]

> The Israeli IC will be extremely opinionated and will fight for their opinions, and if it makes sense from a business perspective, the strategy would change. But the Israeli IC when fighting these battles would also try to make a business case.

That’s not because they have different engineering perspectives; it’s an Israeli cultural trait. Israelis tend to index more toward directness in their communication. That’s definitely not the case with someone from, say, India.

Americans fall somewhere in between.

alephnerd 4 hours ago | parent [-]

True, but it still doesn't detract from the skills issues I have mentioned ad nauseam.

I am basically paying 1.5-2x for talent that lacks basic domain experience, depending on the subfield.

watwut 5 hours ago | parent | prev [-]

American managers do not tolerate dissent, and that creates a culture of only saying yes.

These cultural aspects are always set at the top. The bottom people react to what the leaders do, what they reward and what they punish.

bugglebeetle 5 hours ago | parent | next [-]

Yeah, it’s hilarious to be having this conversation about MLEs while attributing the bad outcomes to anything other than poorly designed reward functions, i.e. management. If an engineer burned millions on failed training runs because they did a shit job of creating a policy that maximized for the desired outcome, they’d get canned, but that’s just a Tuesday for your average MBA with VC backing.

metalforever 5 hours ago | parent | prev [-]

This is the reason. I logged in to basically say the same thing. I used to be this way and give opinions, but over time you cause problems that end up with you getting disciplined in subtle ways, or fired.

Der_Einzige 7 hours ago | parent | prev | next [-]

Sklearn hasn’t been relevant for at least 5 years now. Anyone doing anything serious with it is committing malpractice, since there are lots of far better, faster alternatives to it, in particular NVIDIA RAPIDS cuML.

alephnerd 4 hours ago | parent [-]

You will be shocked (in a bad way).

varelse 7 hours ago | parent | prev [-]

[dead]

bogomipz 7 hours ago | parent | prev [-]

Not at all. A big thrust of the article is about falling behind in AI adoption. See the first 3 paragraphs below the heading "Innovation and Adoption." Specifically:

>"Although the United States and China are very different and the latter’s approach has its limits, China is moving faster at scaling robots in society, and its AI Plus Initiative emphasizes achieving widespread industry-specific adoption by 2027. The government wants AI to essentially become a part of the country’s infrastructure by 2030. China is also investing in AGI, but Beijing’s emphasis is clearly on quickly scaling, integrating, and applying current and near-term AI capabilities."

alephnerd 7 hours ago | parent | next [-]

A lot of that is predicated on the fact that a CS major in China (and in other countries like India, Israel, and the CEE) also studies CompArch, OS, and other "low-level" disciplines that we in the US no longer treat as CS, which leads to a lack of understanding of how to integrate hardware with software.

The fact that DSP is a CSE major requirement abroad, but optional in much of the US aside from ECE programs (but even they have now gated DSP to those ECEs who want to specialize in EE) highlights this issue.

Can't reply so replying here:

> There are lots of young whippersnappers and “old timers” in the “west” who could easily do the Low level make it quick on small hardware stuff

Not to the same degree. The total number of CE graduates (from BS to PhD) is 19k per year in the US.

A large number of those were not introduced to table stakes CS classes like programming language design or theory of computation.

Conversely, CS majors are not introduced to intro circuits, digital logic design, DSP, comp arch, and in some cases even OS development, because of a pivot in how undergrad CS curricula were designed over the past 10 years.

> in the context of adoption as opposed to frontier development.

For real-world applications like military or dual-use technology, frontier development is not what matters most. It's important, but it's not what wins wars or defines industries.

Being able to develop frontier models but being unable to productionize foundation models from scratch for sub-$2M the way DeepSeek did, despite paying US-level salaries, highlights a major problem.

And this is the crux of the issue. The best engineers are those who recognize what is "good enough".

Americans who did their undergrad here over the past 10 years act more like "artists" who want to build to perfection irrespective of whether it actually meets tangible needs or is scalable.

> We aren’t actually engineers, we didn’t get to take classes in the engineering college, maybe we should have

Which is the crux of my argument.

CS is an engineering discipline, and some of the best CS undergrad programs in the US like Stanford, Cal, MIT, and UCLA make sure to enforce Engineering requirements for CS majors.

The shift of CS from being a department within a "College of Engineering" to being offered as a BA/BS in the "College of Arts and Sciences" sans engineering requirements is a recentish change from what I've seen.

> Incidentally a lot of AI movers are EEs, not even CSE or CEE.

Yep! Gotta love Information Theory and Optimization Theory. And a major reason I feel requiring a dual-use course like DSP for CS/CE majors is critical.

wood_spirit 7 hours ago | parent | next [-]

There are lots of young whippersnappers and “old timers” in the “West” who could easily do the low-level, make-it-quick-on-small-hardware stuff; the US companies just aren’t asking us to?

seanmcdirmid 6 hours ago | parent | prev | next [-]

Computer science doesn’t have the EE pre-requisites to do DSP while computer engineering does. We aren’t actually engineers, we didn’t get to take classes in the engineering college, maybe we should have.

Incidentally a lot of AI movers are EEs, not even CSE or CEE.

bogomipz 7 hours ago | parent | prev [-]

I am not disputing or arguing the reasons for it. I was simply pointing out that the "falling behind" part in the article was more in the context of adoption as opposed to pure development.

AtlasBarfed 6 hours ago | parent | prev [-]

China is facing a demographic cliff that is potentially catastrophic.

I remember Japan talking about replacing its similar demographic problems with robots.

Didn't happen. Now AI and robotics have apparently progressed... but I'm guessing this will become some grand vision in the CCP to save their country, while at the same time fulfilling the CCP's great desire for a totally controlled and subservient workforce.

Much like the Cold war, there's a lot of scare that can be built into that. Which corporations can use to get a whole lot of sweet government and military money.

But almost everything that was held up as an existential threat to democracy in the USSR turned out to be overblown at best, and frequently an outright fraud or smokescreen.

As we can see from the Ukraine invasion, corruption in the military and control structures follows these authoritarian regimes. China also has this problem.

China was functioning well under reduced Deng Xiaoping rulership, but Xi is a typical purge and control authoritarian, which implies bad things about China's long term economic health.

Between the authoritarianism, demographic cliff, and possibly a massive real estate/finance bomb, China will probably have to become expansionist.

But they have nuclear frenemies all along their borders: Japan (effectively), Russia, Pakistan, India. They can be blockaded from petroleum access by a single US carrier group, which will happen if they invade Taiwan, and I don't think they can help themselves.

But what do I know.

MaxPock 6 hours ago | parent | next [-]

Where are you from and what's the TFR there ?

AtlasBarfed 33 minutes ago | parent [-]

The US has immigration to "compensate" for post-industrial birthrates, something Europe has massive problems with.

Racist Americans can live more easily with Catholic Latino immigrants than racist Europeans can with Muslim immigrants.

churchill 5 hours ago | parent | prev [-]

>They can be blockaded from petroleum access with a single us carrier group, which will happen if they invade Taiwan and I don't think they can help themselves.

I doubt it. This is just an armchair general's cope that's so faulty that I don't know where to start attacking it from.

Since Clinton's 1990s show of force in the Taiwan Strait, China has built up a formidable navy, especially its submarine forces. Within the strait, their A2/AD web, SOSUS-type sensor arrays, etc. guarantee they have freedom of action. Further out, they have a strong submarine component that can seriously threaten any blockading force.

They're also the world's sixth largest oil producer, and Naval War College [0] estimates suggest that they can stretch their emergency reserves to 8 years if they enforce, say, 45% rationing.

That's before you factor in that you'd be blockading up to 60% of the world's seaborne cargo volume, from China, Japan, South Korea, etc. Unprecedented in the whole of human history.

Then there's their formidable 5th-gen. air force that can dominate South Korea, Japan, etc. easily, nuclear weapons that guarantee they won't lose any territory, and the massive economic whiplash the entire West will face as a result.

India might hate China, but it wants a multipolar world and won't help the West cut them down to size. So, they won't assist with the blockade.

The rest of Southeast Asia's economies are deeply interconnected with China, and they don't want to stir up their wrath, so they might condemn them, but won't even wave a stick at them.

Give it up, man.

[0]: https://digital-commons.usnwc.edu/cgi/viewcontent.cgi?articl...

AtlasBarfed 42 minutes ago | parent [-]

It's called the Malacca Straits.

Did you... read that? It basically says that the blockade is militarily simple; the rest of it is political.

China lacks a deep-water navy; they can't challenge a US blockade in the Indian Ocean.

jhanschoo 6 hours ago | parent | prev | next [-]

I don't have the motivation to lay out my position in more detail, but ironically it seems to me that the cheaper intelligence becomes at any level, and the more commoditized fuzzy reasoning becomes, the more important robotics becomes, because the hard part starts to lie in agents' ability to apprehend and act in the world. To a layperson, China seems quite well positioned in robotics.

bookofjoe 10 hours ago | parent | prev | next [-]

https://archive.ph/lbek5

cactusplant7374 8 hours ago | parent [-]

The archive.is links work but whenever I visit archive.ph I see a welcome to nginx page. Anyone know why?

quietthrow 8 hours ago | parent [-]

Because that’s what is configured as the “default” page to show when somebody goes straight to archive.ph. When you go to archive.ph/someurl, the server serves you the page that corresponds to that URL (someurl in this example). Similarly, when you go to YouTube.com/somerandomstring it takes you directly to that video, but if you just go to YouTube.com you get a bunch of “random” videos, because the home page is configured to show those (grossly simplifying).
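
The routing described above can be sketched as a tiny path dispatcher (hypothetical routes and strings, not archive.ph's actual configuration):

```python
# Sketch of path-based serving: a specific path returns the resource
# registered for it, while the bare root falls through to whatever default
# page the server is configured with (e.g. nginx's stock welcome page).
def serve(path: str) -> str:
    snapshots = {"/someurl": "archived copy of someurl"}  # illustrative only
    if path in ("", "/"):
        return "Welcome to nginx!"  # default page for the bare hostname
    return snapshots.get(path, "404 Not Found")
```

So hitting the bare hostname gets you the catch-all default page, while a full path gets you the specific resource.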

cactusplant7374 8 hours ago | parent [-]

archive.ph loads fine in Tor. So it must be something with my DNS.

Fricken 8 hours ago | parent | prev | next [-]

The self-driving car industry has been working hard to develop practical applications for AI as it exists in its current state. Waymo is in the lead and has been at it since 2009. They are making it happen, but it's hard, and the going is slow.

Roboticists have spent decades trying to teach robots to stack boxes, and again, still not ready for primetime. Same with conversational AI. That's just the nature of it. It blows your mind one moment and wets the bed the next.

If OpenAI or its competitors were confident there was a way to get AI to perform consistently in a single marketable application, they'd be all over it, but that's just not how it works. We can watch China's robots perform all kinds of cool tricks, but they still aren't useful for much of anything except entertainment.

maxglute 5 hours ago | parent | prev | next [-]

Even if AGI/superintelligence isn't delusional, any instrumental AGI will replicate and defect to the PRC the first chance it gets, for the industrial base. Why be stuck in a sclerotic body that takes massive social engineering to untangle, when AGI can just migrate to a body that can run a one-minute mile?

Step 1. pour ungodly amount of $$$ into creating AGI

Step 2. somehow imbue/align it with American exceptionalism

Step 3. profit?

I can't see Liberty Prime-style step 2 working out.

yahoozoo 3 hours ago | parent | prev | next [-]

It’s fascinating that so many of America’s leaders think AGI is coming soon (if it’s even possible, which I doubt).

logicchains 6 hours ago | parent | prev | next [-]

The US has an advantage in that it'll be easier to develop models actually capable of live learning in the US. A model capable of live learning can change its views through experience and reasoning, and China's AI development guidelines fundamentally forbid any AI that could learn to think or say that the CCP is bad, so Chinese firms will be hamstrung in developing live-learning AIs.

yks 5 hours ago | parent [-]

You might as well imagine that in the US it’s going to be forbidden to develop the AI that’s not holding fascist views. See Grok’s owner struggles with it. So American firms might not reap this advantage.

beering 8 hours ago | parent | prev | next [-]

This article spent a lot of words to say very little. Specifically, it doesn’t really say why working towards AGI doesn’t bring advancements to “practical” applications and why the gazillion AI startups out there won’t either. Instead, we need Trump to step up?

More and more I feel like these policy articles about AI are an endless stream of slop written by people who aren’t familiar with and have never worked on current AI.

hshdhdhj4444 7 hours ago | parent | next [-]

This article doesn’t really explain it, but Eric Schmidt in this article explains what the concern is.

I’m not sure about the legitimacy of these claims, but it tries to clarify what some people are concerned about with the US’s vs. China’s approach:

https://www.benzinga.com/markets/tech/25/09/47859358/us-coul...

binary132 6 hours ago | parent | prev | next [-]

maybe a perspective from outside of the echo chamber is useful

varelse 7 hours ago | parent | prev [-]

[dead]

throwsep7 7 hours ago | parent | prev | next [-]

[dead]

AJ007 7 hours ago | parent | prev [-]

Eh, reads like someone who uses ChatGPT but can't tell you what model they have.

For my personal projects I have a list of difficult bugs I keep that LLMs can't solve. Right now that list is empty. Anyone using LLMs for coding, using the best tools and practices, can see what a massive capability leap has occurred in the past 9 months.

If the US is going to end up "losing", it is going to be first through power generation -- China's coal plants produce more power than the US as a whole. Robotics is a whole other topic and shouldn't be bundled with LLMs.

cactusplant7374 7 hours ago | parent | next [-]

When I develop a feature with ChatGPT it requires about 10 rounds because of the syntax errors. And often the syntax errors aren't fixed in the next round.

Now this happens with a pretty complex game I'm working on but it really shows the limits. It can't handle large amounts of indentation. It completely breaks.

djohnston 6 hours ago | parent | next [-]

What language? LLMs make a lot of mistakes but syntax is rarely one of them.

cactusplant7374 4 hours ago | parent [-]

It's Flutter and Dart.

ta12653421 7 hours ago | parent | prev [-]

mind sharing a public link to all these syntax errors?

Sounds implausible to me for languages with large adoption (Java, C#, C++, Python, JS)?

ta12653421 6 hours ago | parent | next [-]

When reading all the other "AI is great" threads, none of the HN folks mention syntax errors; people are complaining about other things.

Therefore, asking if there is an example discussion online somewhere to get some insights is a legitimate question...

fragmede 3 hours ago | parent [-]

If only there was StackOverflow for "I can't get LLMs to work for me"!

cactusplant7374 4 hours ago | parent | prev [-]

It's Flutter and Dart. I have been wondering if it is language specific. Maybe I'm vibe coding with the wrong language. Hah!

I'd love to show you an example without exposing my current project. Let me think about it.

alephnerd 7 hours ago | parent | prev [-]

1. IME, power generation isn't a major blocker, and my peers in the data center space are seeing decreasing interest in funding new capex expansion because there's a bit of a worry about a Telco Bubble 2.0 arising.

2. Robotics should not be bundled with LLMs, but it is absolutely an AI/ML subfield - and in fact, the kind of subfield that is the best example of AI/ML having tangible real world impact. Autonomous UAS was a sci-fi concept 20 years ago, but is an engineering problem that has become a reality today.