superconduct123 a day ago

Why are the biggest AI predictions always made by people who aren't deep in the tech side of it? Or actually trying to use the models day-to-day...

AlphaAndOmega0 a day ago | parent | next [-]

Daniel Kokotajlo released the (excellent) 2021 forecast. He was then hired by OpenAI, and not at liberty to speak freely, until he quit in 2024. He's part of the team making this forecast.

The others include:

Eli Lifland, a superforecaster who is ranked first on RAND’s Forecasting initiative. You can read more about him and his forecasting team here. He cofounded and advises AI Digest and co-created TextAttack, an adversarial attack framework for language models.

Jonas Vollmer, a VC at Macroscopic Ventures, which has done its own, more practical form of successful AI forecasting: they made an early stage investment in Anthropic, now worth $60 billion.

Thomas Larsen, the former executive director of the Center for AI Policy, a group which advises policymakers on both sides of the aisle.

Romeo Dean, a leader of Harvard’s AI Safety Student Team and budding expert in AI hardware.

And finally, Scott Alexander himself.

kridsdale3 a day ago | parent | next [-]

TBH, this kind of reads like the pedigrees of the former members of the OpenAI board. When the thing blew up, and people started to apply real scrutiny, it turned out that about half of them had no real experience in pretty much anything at all, except founding Foundations and instituting Institutes.

A lot of people (like the Effective Altruism cult) seem to have made a career out of selling their Sci-Fi content as policy advice.

MrScruff 16 hours ago | parent | next [-]

I kind of agree - since the Bostrom book there has been a cottage industry of people with non-technical backgrounds writing papers about singularity thought experiments, and it does seem to sit on a spectrum with hard sci-fi writing. A lot of these people are clearly intelligent, and it's not even that I think everything they say is wrong (I made similar assumptions long ago, before I'd even heard of Ray Kurzweil and the Singularity, although at the time I would have guessed 2050). It's just that they seem to believe their thought process and Bayesian logic is more rigorous than it actually is.

flappyeagle a day ago | parent | prev [-]

c'mon man, you don't believe that, let's have a little less disingenuousness on the internet

arduanika a day ago | parent [-]

How would you know what he believes?

There's hype and there's people calling bullshit. If you work from the assumption that the hype people are genuine, but the people calling bullshit can't be for real, that's how you get a bubble.

flappyeagle 3 hours ago | parent [-]

Because they are not the same in any way. It's not a bunch of junior academics; it literally includes someone who worked at OpenAI.

pixodaros 9 hours ago | parent | prev | next [-]

Scott Alexander, for what it's worth, is a psychiatrist, race science enthusiast, and blogger whose closest connection to software development is Bay Area house parties and a failed startup called MetaMed (2012-2015) https://rationalwiki.org/wiki/MetaMed

7 hours ago | parent | prev | next [-]
[deleted]
nice_byte 19 hours ago | parent | prev | next [-]

this sounds like a bunch of people who make a living _talking_ about the technology, which lends them close to 0 credibility.

mickelsen 18 hours ago | parent [-]

[dead]

superconduct123 a day ago | parent | prev | next [-]

I mean either researchers creating new models or people building products using the current models

Not all these soft roles

a day ago | parent | prev [-]
[deleted]
torginus a day ago | parent | prev | next [-]

Because these people understand human psychology and how to play on fears (of doom, or missing out) and insecurities of people, and write compelling narratives while sounding smart.

They are great at selling stories - they sold the story of the crypto utopia, now switching their focus to AI.

This seems to be another appeal to enforce AI regulation in the name of 'AI safetyism' - a case that was made 2 years ago, and whose predicted threats haven't really panned out.

For example, an oft-repeated argument is the dangerous ability of AI to design chemical and biological weapons. I wish some expert would weigh in on this, but I believe the ability to theorycraft pathogens that are effective in the real world is absolutely marginal - you need actual lab work and lots of physical experiments to confirm your theories.

Likewise, the danger of AI systems exfiltrating themselves to the multi-million dollar AI datacenter GPU clusters everyone supposedly just has lying around is ... not super realistic.

The ability of AIs to hack computer systems is much less theoretical - however, as AIs get better at black-hat hacking, they'll get better at white-hat hacking as well, as there's literally no difference between the two other than intent.

And herein lies a crucial limitation of alignment and safetyism - sometimes there's no way to tell apart harmful and harmless actions other than whether the person undertaking them means well.

ZeroTalent a day ago | parent | prev | next [-]

People who are skilled fiction writers might lack technical expertise. In my opinion, this is simply an interesting piece of science fiction.

rglover a day ago | parent | prev | next [-]

Aside from the other points about understanding human psychology here, there's also a deep well they're trying to fill inside themselves: that of being someone who can't create things without shepherding others, and who sees AI as the "great equalizer" that will finally let them taste the positive emotions associated with creation.

The funny part, to me, is that it won't. They'll continue to toil and move on to the next huck just as fast as they jumped on this one.

And I say this from observation. Nearly all of the people I've seen pushing AI hyper-sentience are smug about it and, coincidentally, have never built anything on their own (besides a company or organization of others).

Every single one of the rational "we're on the right path but not quite there" takes has come from seasoned engineers who at least have some hands-on experience with the underlying tech.

FeepingCreature 16 hours ago | parent | prev | next [-]

I use the models daily and agree with Scott.

Tenoke a day ago | parent | prev | next [-]

..The first person listed is ex-OpenAI.

bpodgursky a day ago | parent | prev | next [-]

Because you can't be a full-time blogger and also a full-time engineer. Both take all your time, even ignoring the time it takes to build the skills. There is simply a tradeoff in what you do with your life.

There are engineers with AI predictions, but you aren't reading them, because building an audience like Scott Alexander takes decades.

m11a 3 hours ago | parent [-]

If so, then it seems the solution is for HN to upvote the random qualified engineer with AI predictions?

ohgr a day ago | parent | prev [-]

On the path to self-worth, people measure their value by what they say, not what they know. If what they say is horse dung, it is irrelevant to their ego that someone dumber than they are is listening.

This bullshit article is written for that audience.

Say bullshit enough times and people will invest.

HeatrayEnjoyer 20 hours ago | parent [-]

So what's the product they're promoting?

moralestapia 19 hours ago | parent [-]

Their ego.