karmasimida 2 days ago

No denial at this point, AI could produce something novel, and they will be doing more of this moving forward.

XCSme 2 days ago | parent | next [-]

Not sure if AI can have clever or new ideas; it still seems to me that it combines existing knowledge and executes algorithms.

I am not necessarily saying humans do something different either, but I have yet to see a novel solution from an AI that is not simply an extrapolation of current knowledge.

qnleigh 2 days ago | parent | next [-]

Speaking as a researcher, the line between new ideas and existing knowledge is very blurry and maybe doesn't even exist. The vast majority of research papers get new results by combining existing ideas in novel ways. This process can lead to genuinely new ideas, because the results of a good project teach you unexpected things.

My biggest hesitation with AI research at the moment is that they may not be as good at this last step as humans. They may make novel observations, but will they internalize these results as deeply as a human researcher would? But this is just a theoretical argument; in practice, I see no signs of progress slowing down.

coderenegade 2 days ago | parent [-]

This is my take as well. A human who learns, say, a Towers of Hanoi algorithm will be able to apply it next time without having to figure it out all over again. An LLM would probably get there eventually, but would have to do it all over again from scratch the next time. This makes it difficult to combine lessons in new ways. Any new advancement relying on that foundational skill means, essentially, climbing the whole mountain from the ground.
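For reference, the algorithm in question is small enough to state in a few lines. A sketch of the standard recursive solution (pegs named "A", "B", "C" here for illustration):

```python
def hanoi(n, src, aux, dst, moves):
    """Recursively move n disks from src to dst using aux as spare."""
    if n == 0:
        return
    hanoi(n - 1, src, dst, aux, moves)  # clear the n-1 smaller disks out of the way
    moves.append((src, dst))            # move the largest remaining disk
    hanoi(n - 1, aux, src, dst, moves)  # restack the smaller disks on top

moves = []
hanoi(3, "A", "B", "C", moves)
print(len(moves))  # 2**3 - 1 = 7 moves, the known optimum
```

Once a human has internalized this decomposition, reapplying it is cheap; the point above is that an LLM re-derives it on every fresh context.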

I suppose the other side of it is that if you add what the model has figured out to the training set, it will always know it.

dotancohen 2 days ago | parent | prev | next [-]

We call that Standing On The Shoulders Of Giants and revere Isaac Newton as clever, even though he himself stated that he was standing on the shoulders of giants.

nkozyra 2 days ago | parent | prev | next [-]

Clever/novel ideas are very often subtle deviations from known, existing work.

Sometimes just having the time/compute to explore the available space with known knowledge is enough to produce something unique.

salomonk_mur 2 days ago | parent | prev | next [-]

There is no such thing. All new ideas are derived from previous experiences and concepts.

Madmallard 2 days ago | parent [-]

The difference people are neglecting to point out is the experiences we have versus the experiences the AI has.

We have at least 5 senses, our thoughts, feelings, hormonal fluctuations, sleep and continuous analog exposure to all of these things 24/7. It's vastly different from how inputs are fed into an LLM.

On top of that we have millions of years of evolution toward processing this vast array of analog inputs.

XCSme 2 days ago | parent [-]

So, just connect LLMs to lava lamps?

Jokes aside, imagine you give LLMs access to real-time, world-wide satellite imagery and just tell them to discover new patterns/phenomena and correlations in the world.

glalonde 2 days ago | parent | prev | next [-]

"extrapolation" literally implies outside the extents of current knowledge.

XCSme 2 days ago | parent [-]

Yes, but not necessarily new knowledge.

It means extending/expanding something, but the information is based on the current data.

In computer games, extrapolation means finding the future position of an object from its current position, velocity, and the desired time. We do get some "new" position, but the system's entropy/information is the same.

Or if we have a line, we can extend it infinitely and get new points, but that information was already there in the y = m * x + b formula.
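Both examples above are the same operation in code. A minimal sketch (the numbers are arbitrary): the "new" outputs are fully determined by the existing state, so no information is added.

```python
def extrapolate(pos, vel, dt):
    """Predict a future position from current position and velocity.
    The result is entirely determined by the inputs."""
    return pos + vel * dt

def line(m, b, x):
    """Any point on the line is already implied by (m, b)."""
    return m * x + b

print(extrapolate(10.0, 2.5, 4.0))  # 20.0
print(line(2.0, 1.0, 100.0))        # 201.0 -- a "new" point, zero new information
```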

aoeusnth1 2 days ago | parent | prev [-]

How would you know if it wasn't an extrapolation of current knowledge? Can you point me to somethings humans have done which isn't an extrapolation?

XCSme 2 days ago | parent [-]

That was my point: "I am not necessarily saying humans do something different".

leptons 2 days ago | parent | prev | next [-]

[flagged]

snypher 2 days ago | parent [-]

Your analogy falls apart if we consider the number wasn't on the clock face.

MattGaiser 2 days ago | parent [-]

I am deeply baffled by AI denial at this point.

wtallis 2 days ago | parent | next [-]

Complete denial that AI/LLMs can produce novel, good things is an indefensible stance at this point. But the large volume of AI slop is still an unsolved problem, and the claim that "AI will still mostly deliver slop" seems to be almost certainly correct in the near-term.

We've had a few decades to address email spam, and still haven't managed to disincentivize it enough to stop it being the main challenge for email as a communication medium. I don't think there's much hope that we'll be able to disincentivize the widespread, large-scale creation of AI slop even after more expensive models with higher-quality output are available.

bigstrat2003 2 days ago | parent | prev [-]

It's quite simple: it has yet to show it can actually be useful, and all the claims that it can have (so far) turned out to be self-delusion, if not deliberate lies. When the industry is run by grifters, you shouldn't really be surprised when people stop believing them.

Philpax 2 days ago | parent | next [-]

You are posting in a thread about it finding a novel solution to an unsolved mathematics problem.

2 days ago | parent | prev [-]
[deleted]
slashdave 2 days ago | parent | prev | next [-]

I mean, I can run a pseudo random number generator, and produce something novel too.
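To make the quip concrete: a linear congruential generator (parameters here are the common Numerical Recipes constants) produces output that is "novel" in the trivial sense that the particular sequence was never written down anywhere, while being entirely deterministic:

```python
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Generate n values from a linear congruential generator.
    Deterministic: the same seed always yields the same sequence."""
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

print(lcg(42, 3))
```

The sequence is new text, but it carries no insight; the commenter's point is that "new" and "novel" are not the same claim.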

staticassertion 2 days ago | parent | prev [-]

Is this novel? It's new. But we already know AI can generate new things, any statistical reassembly of any content will generate new things.

It's not to downplay this, but it's unclear what "novel" means here or what you think the implications are.