Topgamer7 2 days ago

Whenever someone brings up "AI", I tell them AI is not real AI. Machine learning is a more apt buzzword.

And real AI is probably like fusion: it's always 10 years away.

CSSer 2 days ago | parent | next [-]

The best part of this is that I watched Sam Altman, in response to a question about energy consumption a couple of years ago, say he really thinks fusion is only a short time away. That was the moment I knew he's a quack.

ctkhn a day ago | parent | next [-]

Not to be anti-YC on their forum, but the VC business model is all about splashing cash on a wide variety of junk that will mostly be worthless, hyping it to the max, and hoping one or two turn out to be like Amazon or Facebook. He's not an engineer; he's like Steve Jobs without the good parts.

jacobolus a day ago | parent | prev | next [-]

Altman recently said, in response to a question about the prospect of half of entry-level white-collar jobs being replaced by "AI" and college graduates being put out of work by it:

> “I mean in 2035, that, like, graduating college student, if they still go to college at all, could very well be, like, leaving on a mission to explore the solar system on a spaceship in some completely new, exciting, super well-paid, super interesting job, and feeling so bad for you and I that, like, we had to do this kind of, like, really boring old kind of work and everything is just better."

Which should be reassuring to anyone having trouble finding an entry-level job as an illustrator or copywriter or programmer or whatever.

rightbyte a day ago | parent [-]

So STNG in 10 years?

edit: Oh. Solar system. Nvm. Totally reasonable.

SAI_Peregrinus a day ago | parent | prev | next [-]

Fusion is 8 light-minutes away. The connection gets blocked often, so methods of buffering power for those periods are critical, but they're getting better, and using remote fusion power at large scale has become a lot more practical. The power-buffering problem looks easier to solve than the local-fusion problem, so more development goes into improving remote fusion power than local.

rohit89 a day ago | parent | prev | next [-]

Sam is an investor in a fusion startup. In any case, how long it takes us to get to working fusion depends on the amount of funding it receives. I'm hopeful that increased energy needs will spur more investment in it.

timeon a day ago | parent | prev | next [-]

He had to use a distraction, because he knows he's playing a part in increasing emissions.

2OEH8eoCRo0 a day ago | parent | prev [-]

Fusion is known science while AGI is still very much an enigma.

CharlesW 2 days ago | parent | prev | next [-]

> Whenever someone brings up "AI", I tell them AI is not real AI.

You and also everyone since the beginning of AI. https://quoteinvestigator.com/2024/06/20/not-ai/

zamadatix 2 days ago | parent [-]

People saying that usually mean it as "AI is here and going to change everything overnight now," yet if you take it literally, it's "we're actually over 50 years into AI, and things will likely continue to advance slowly over decades."

The common thread between "AI is anything that doesn't work yet" and "what we have is still not AI" is that this technology could probably have used a less distracting marketing name, one that talks about what it delivers rather than what it's supposed to be delivering.

adastra22 2 days ago | parent | prev | next [-]

“Machine learning” as a descriptive phrase has stopped being relevant. It implies the discovery of information in a training set. The pre-training of an LLM is most definitely machine learning, but what people are excited about and interested in is the use of this learned data in generative AI. “Machine learning” doesn’t capture that aspect.

simpleladle a day ago | parent | next [-]

But the things we try to make LLMs do after pre-training are primarily achieved via reinforcement learning. Isn't reinforcement learning machine learning? Correct me if I'm misconstruing what you're trying to say here.

adastra22 a day ago | parent [-]

You are still talking about training. Generative applications have always been fundamentally different from classification problems, and have now (in the form of transformers and diffusion models) taken on entirely new architectures.

If “machine learning” is taken to be so broad as to include any artificial neural network, all of which are trained with backpropagation these days, then it is useless as a term.

The term “machine learning” was coined in the era of specialized classification agents that would learn how to segment inputs in some way. Think email spam detection, or identifying cat pictures. These algorithms are still an essential part of both the pre-training and RLHF fine-tuning of LLMs. But the generative architectures are new and essential to the current interest in and hype surrounding AI.

hnuser123456 2 days ago | parent | prev [-]

It's a valid term that is worth introducing to the layperson IMO. Let them know how the magic works, and how it doesn't.

adastra22 2 days ago | parent | next [-]

Machine learning is only part of how an LLM agent works though. An essential part, but only a part.

sdenton4 a day ago | parent [-]

I see a fair amount of bullshit in the LLM space though, where even cursory consideration would connect the methods back to well-known principles in ML (and even statistics!) for measuring model quality and progress. There's a lot of "woo, it's new! we don't know how to measure it exactly but we think it's groundbreaking!", which is simply wrong.

From where I sit, the generative models provide more flexibility but tend to underperform on any particular task relative to a targeted machine learning effort, once you actually do the work on comparative evaluation.

adastra22 a day ago | parent [-]

I think we have a vocabulary problem here, because I am having a hard time understanding what you are trying to say.

You appear to be comparing apples to oranges. A generation task is not a categorization task. Machine learning solves categorization problems; generative AI uses models trained by machine-learning methods, but in a very different architecture, to solve generative problems. Completely different and incomparable application domains.

ainch a day ago | parent | next [-]

I think you're overstating the distinction between ML and generation; plenty of ML methods involve generative models. Even basic linear regression with a squared loss can be framed as a generative model derived by assuming Gaussian noise. Probabilistic PCA, HMMs, GMMs, etc. Generation has been a core part of ML for over 20 years.
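
To make that concrete, here's a minimal sketch (the data and numbers are made up): fitting least-squares regression is exactly maximum-likelihood estimation under the generative model y = w*x + eps with Gaussian eps, and once fitted you can sample new data from it:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 200)
    y = 3.0 * x + rng.normal(0, 0.5, 200)  # the "true" generative process

    # Least-squares fit == MLE of w under y ~ N(w*x, sigma^2)
    w = (x @ y) / (x @ x)
    sigma = (y - w * x).std()

    # Because the fitted model is generative, it can produce new data
    y_sampled = w * x + rng.normal(0, sigma, 200)

Same estimator either way; the "generative" framing just takes the noise model seriously.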

sdenton4 a day ago | parent | prev [-]

And yet, people very often find themselves using generative models for categorization and information retrieval tasks...

IshKebab a day ago | parent | prev [-]

How does "it's called machine learning not AI" help anyone know how it works? It's just a fancier sounding name.

hnuser123456 a day ago | parent [-]

Because if they're curious, they can look up machine learning (or ask an "AI" about it) rather than just "AI", and learn more about the capabilities, difficulties, and mechanics of how it works, pick up some of the history, and have grounded expectations for what the next 10 years of development might look like.

IshKebab a day ago | parent [-]

They can google AI too... Do you think googling "how does AI work" won't work?

bcrosby95 2 days ago | parent | prev | next [-]

AI is an overloaded term.

I took an AI class in 2001. We learned all sorts of algorithms classified as AI, including various ML techniques, among them perceptrons.

timidiceball a day ago | parent | next [-]

That was an impressive takeaway from the first machine learning course I took: that many things previously under the umbrella of artificial intelligence have since been demystified and demoted to implementations we now just take for granted. Some examples were real-world map route planning for transport, locating faces in images, and Bayesian spam filters.

porphyra 2 days ago | parent | prev [-]

Back in the day, alpha-beta search was AI, hehe.
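
For the youngsters, a toy sketch from memory (the tree and leaf values are made up), the kind of thing that filled AI textbooks back then:

    # Alpha-beta pruned minimax over a hand-rolled game tree.
    # Leaves are static evaluations; inner nodes are lists of children.
    def alphabeta(node, alpha, beta, maximizing):
        if isinstance(node, (int, float)):  # leaf: return its evaluation
            return node
        if maximizing:
            value = float("-inf")
            for child in node:
                value = max(value, alphabeta(child, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:  # opponent would never allow this line
                    break
            return value
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

    tree = [[3, 5], [6, [9, 1]], [1, 2]]
    print(alphabeta(tree, float("-inf"), float("inf"), True))  # prints 6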

pixelpoet a day ago | parent [-]

As a young child in Indonesia we had an exceptionally fancy washing machine with all sorts of broken English superlatives on it, including "fuzzy logic artificial intelligence" and I used to watch it doing the turbo spin or whatever, wondering what it was thinking. My poor mom thought I was retarded.

porphyra a day ago | parent [-]

My rice cooker also has fuzzy logic. I guess they just use floats instead of bools.
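
Pretty much. A toy sketch of the idea (the membership functions and the rule are invented): truth is a degree in [0, 1] instead of a bool, and fuzzy AND is just min():

    def clamp01(v):
        return min(max(v, 0.0), 1.0)

    def too_hot(temp_c):  # degree to which the pot is "too hot"
        return clamp01((temp_c - 95) / 10)

    def nearly_done(moisture):  # degree to which the rice is "nearly done"
        return clamp01((0.3 - moisture) / 0.2)

    # Rule: IF too hot AND nearly done THEN cut heater power.
    # Fuzzy AND = min(); the rule fires partially, so power tapers smoothly.
    def heater_power(temp_c, moisture):
        return 1.0 - min(too_hot(temp_c), nearly_done(moisture))

    print(heater_power(100, 0.15))  # 0.5: partially true, not on/off

No relay snapping between on and off; everything blends, which is presumably where the mystique came from.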

brandonb 2 days ago | parent | prev | next [-]

Andrew Ng has a nice quote: “Instead of doing AI, we ended up spending our lives doing curve fitting.”

Ten years ago you'd be ashamed to call anything "AI", and you'd say "machine learning" if you wanted to be taken seriously, but neural networks have really brought the term back, and for good reason, given the results.

wilg 2 days ago | parent | prev | next [-]

Arguing about the definitions of words is rarely useful.

Spare_account 2 days ago | parent | next [-]

How can we discuss <any given topic> if we are talking about different things?

IanCal a day ago | parent | next [-]

Well, that's rather the point: arguing about exceptionally heavily used terminology isn't useful, because there's already a largely shared understanding. Stepping away from that is a huge effort, unlikely to work, and at best all you've done is change what people mean when they use a word.

bcrosby95 a day ago | parent | prev [-]

The point is to establish definitions rather than argue about them. You might save yourself from two pointless arguments.

Root_Denied a day ago | parent | prev [-]

Except AI already had a clear definition well before it started being used as a way to inflate valuations and push marketing narratives.

If nothing else, it's been a sci-fi topic for more than a century. There are connotations, cultural baggage, and expectations from the general population about what AI is and what it's capable of, most of which isn't possible or applicable to the current crop of "AI" tools.

You can't just change the meaning of a word overnight and toss all that history away, which is why it comes across as an intentionally dishonest choice in the name of profits.

layer8 a day ago | parent | next [-]

Maybe do some reading here: https://en.wikipedia.org/wiki/History_of_artificial_intellig...

Root_Denied a day ago | parent [-]

And you should do some reading into the edit history of that page. Wikipedia isn't immune from concerted efforts to astroturf and push marketing narratives.

More to the point, the history of AI up through about 2010 talks about attempts to get it working using different approaches to the problem space, followed by a shift in the definitions of what AI is in the 2005-2015 range (narrow AI vs. AGI). There's plenty of talk about the various methods and lines of research that were being attempted, but very little about publicly pushing to call commercially available deliverables AI.

Once we got to the point where large amounts of VC money were being pumped into these companies, there was an incentive to redefine AI in favor of what was within the capabilities and scope of machine learning and LLMs, regardless of whether that fit the historical definition of AI.

wilg a day ago | parent | prev [-]

I do not care what anyone thinks the definition is, nor should you.

layer8 a day ago | parent | prev | next [-]

AI is whatever is SOTA in the field; it always has been.

lo_zamoyski 2 days ago | parent | prev | next [-]

AI is in the eye of the beholder.
