BoorishBears 5 days ago

I don't get how post GPT-5's launch we're still getting articles where the punchline is "what if these things replace a BUNCH of humans".

stephc_int13 5 days ago | parent | next [-]

Salvatore is right that we have not seen the full story yet: LLMs are stalling/plateauing, but active research is already underway to find different architectures and models.

And I think the effort here can be compared in scale to the Manhattan or Apollo projects. But there is also the potential for a huge backlash to the hype that was built up and created what is arguably a bubble, so this is a race against the clock.

I also think he is wrong about the market's reaction. Markets are inherently good integrators and bad predictors; we should not expect to learn anything about the future by looking at stock movements.

gizmo686 5 days ago | parent | next [-]

Manhattan and Apollo were both massive engineering efforts, but fundamentally we understood the science behind them. As long as we could solve some fairly clearly stated engineering problems and spend enough money to actually build the solutions, those projects would work.

A priori, it was not obvious that those clearly stated problems had solutions within our grasp (see fusion), but at least we knew what the big picture looked like.

With AI, we don't have that, and never really had it. We've just been gradually making incremental improvements to AI itself, and exponential improvements in the amount of raw compute we can throw at it. We know that we are reaching fundamental limits on transistor density, so compute power will plateau unless we find a different paradigm for improvement; and those paradigms are all currently in the same position as fusion in terms of engineering.

zdragnar 5 days ago | parent | prev | next [-]

LLMs are just the latest in a very long line of disparate attempts at making AI, and arguably the most successful.

That doesn't mean the approach isn't an evolutionary dead end, like every other so far, in the search for AGI. In fact, I suspect that is the most likely case.

copperx 5 days ago | parent | prev | next [-]

Current GenAI is nothing but a proof of concept. The seed is there. What AI can do at the moment is irrelevant. This is like the discovery of DNA. It changed absolutely everything in biology.

The fact that something simple like the Transformer architecture can do so much will spark so many ideas (and investment!) that it's hard to imagine that AGI will not happen eventually.

BoorishBears 5 days ago | parent | prev [-]

> Salvatore is right that we have not seen the full story yet: LLMs are stalling/plateauing, but active research is already underway to find different architectures and models.

They will need to be so different that any talk implying current LLMs eventually replaced humans will be like saying trees eventually replaced horses because the first cars were wooden.

> And I think the effort here can be compared in scale to the Manhattan or Apollo projects, but there is also the potential for a huge backlash to the hype that was built up and created what is arguably a bubble, so this is a race against the clock.

It's not useful to blindly compare scale. We're not approaching AI like the Manhattan or Apollo projects, we're approaching this like we did crypto, and ads, and other tech.

That's not to say nothing useful will come out of it, I think very amazing things will come out of it and already have... but none of them will resemble mass replacement of skilled workers.

We're already so focused on productization and typical tech distractions that this is nothing like those efforts.

(In fact thinking a bit more, I'd say this is like the Space Shuttle. We didn't try to make the best spacecraft for scientific exploration and hope later on it'd be profitable in other ways... instead we immediately saddled it with serving what the Air Force/DoD wanted and ended up doing everything worse.)

> I also think he is wrong about the market's reaction. Markets are inherently good integrators and bad predictors; we should not expect to learn anything about the future by looking at stock movements.

I agree, so he's wrong about the other half of the punchline too.

noduerme 5 days ago | parent [-]

>> mass replacement of skilled workers

unless you consider people who write clickbait blogs to be skilled workers, in which case the damage is already done.

energy123 5 days ago | parent | prev | next [-]

I have to tap the sign whenever someone talks about "GPT-5"

> AI is exceptional for coding! [high-compute scaffold around multiple instances / undisclosed IOI model / AlphaEvolve]

> AI is awesome for coding! [Gpt-5 Pro]

> AI is somewhat awesome for coding! ["gpt-5" with verbosity "high" and effort "high"]

> AI is pretty good at coding! [ChatGPT 5 Thinking through a Pro subscription with Juice of 128]

> AI is mediocre at coding! [ChatGPT 5 Thinking through a Plus subscription with a Juice of 64]

> AI sucks at coding! [ChatGPT 5 auto routing]

mrbungie 5 days ago | parent | prev | next [-]

People just want to feel special pointing out a possibility, so in case it happens, they can then point towards their "insight".

ares623 5 days ago | parent | next [-]

I kind of want to put up a wall of fame/shame of these people to be honest.

Whether they turn out right or wrong, they undoubtedly cheered on the prospect of millions of people suffering just so they can sound good at the family dinner.

I wouldn’t want to work for or with these people.

Davidzheng 5 days ago | parent [-]

Sorry, but prediction and cheering on are different. If there's a tsunami coming, not speaking about it doesn't help the cause.

nurettin 5 days ago | parent | prev [-]

Or they are experts in one field and think that they have valuable insight into other fields they are not experts in.

K0balt 5 days ago | parent | prev | next [-]

LLMs are limited because we want them to do jobs that are not clearly defined / have difficult to measure progress or success metrics / are not fully solved problems (open ended) / have poor grounding in an external reality. Robotics does not suffer from those maladies. There are other hurdles, but none are intractable.

I think we might see AI being much, much more effective with embodiment.

jazzyjackson 5 days ago | parent | next [-]

do you know how undefined and difficult to measure it is to load silverware into a dishwasher?

K0balt 4 days ago | parent [-]

As someone who actually has built robots to solve similar challenges, I’ve got a pretty good idea of that specific problem. Not too far from putting sticks in a cup, which is doable with a lot of situational variance.

Will it do as good a job as a competent adult? Probably not. Will it do it as well as the average 6-year-old kid? Yeah, probably.

But given enough properly loaded dishwashers to work from, I think you might be surprised how effective VLA/VLB models can be. We just need a few hundred thousand man hours of dishwasher loading for training data.

chrisco255 5 days ago | parent | prev [-]

What? Robotics will have far more ambiguity and nuance to deal with than language models, and robots will have to analyze realtime audio and video to do so. Jobs are not as clearly defined as you imagine in the real world. For example, explain to me precisely what a plumber does, and how you would train a robot to do it. How do you train it to navigate any type of building's internal plumbing structure and safely repair or install it?

K0balt 4 days ago | parent [-]

I don’t think robot plumbers are coming anytime soon lol. Robot warehouse workers, factory robots, cleaning robots, delivery robots, security robots, general services robots, sure.

Jobs where you can give someone 0-20 hours of training and expect them to do 80% as well as someone who has been doing it for 5 years are the kinds of jobs that robots will be able to do, perhaps with certain technical skills bolted on.

Plumbing requires the effective understanding and application of engineering knowledge, and I don't think unsupervised transformer models are going to do that well.

Trades like plumbing that take humans 10-20 years to truly master aren’t the low hanging fruit.

A robot that can pick up a few boxes of roofing at a time and carry them up the ladder is what we need.

bravesoul2 5 days ago | parent | prev | next [-]

What does that have to do with it? One company (desperate to keep runway), one product, one release.

nojito 5 days ago | parent | prev | next [-]

The amount of innovation in the last 6-8 months has been insane.

BoorishBears 5 days ago | parent [-]

Innovation in terms of helping devs do cool things has been insane.

There've been next to no advancements relative to what's needed to redefine our economic systems by replacing the majority of skilled workers.

-

Productionizing test-time compute covers 80% of what we've gotten in the last 6-8 months. Advancements in distillation and quantization cover the remaining 20%... neither unlocks some path to mass unemployment.

What we're doing is like 10x'ing your vertical leap when your goal is to land on the moon: 10x is very impressive and you're going to dominate some stuff in ways no one ever thought possible.

But you can 100x it and it's still not getting you to the moon.

juped 5 days ago | parent | prev | next [-]

I think GPT-5's backlash was the beginning of the end of the hype bubble, but there's a lot of air to let out of it, as with any hype bubble. We'll see it for quite some time yet.

markmoscov 5 days ago | parent | prev [-]

what if they replace internet comments?

As a large language model developed by OpenAI I am unable to fulfill that request.

asciimov 5 days ago | parent [-]

Not sure when you last went on Reddit, but I wouldn't be surprised if around 20% of the posts and comments there are LLM generated.