XCSme 9 hours ago

Not sure if AI can have clever or new ideas; it still seems that it combines existing knowledge and executes algorithms.

I am not necessarily saying humans do something different either, but I have yet to see a novel solution from an AI that is not simply an extrapolation of current knowledge.

qnleigh 8 hours ago | parent | next [-]

Speaking as a researcher, the line between new ideas and existing knowledge is very blurry and maybe doesn't even exist. The vast majority of research papers get new results by combining existing ideas in novel ways. This process can lead to genuinely new ideas, because the results of a good project teach you unexpected things.

My biggest hesitation with AI research at the moment is that they may not be as good at this last step as humans. They may make novel observations, but will they internalize these results as deeply as a human researcher would? But this is just a theoretical argument; in practice, I see no signs of progress slowing down.

coderenegade 5 hours ago | parent [-]

This is my take as well. A human who learns, say, a Towers of Hanoi algorithm, will be able to apply it and use it next time without having to figure it out all over again. An LLM would probably get there eventually, but would have to do it all over again from scratch the next time. This makes it difficult to combine lessons in new ways. Any new advancement relying on that foundational skill requires, essentially, climbing the whole mountain from the ground up.
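For reference, the algorithm in question is small enough to state in a few lines; a minimal recursive sketch in Python (the point being that a human retains this pattern after deriving it once, while a base LLM re-derives it each session):

```python
def hanoi(n, src, dst, aux):
    """Return the list of moves that transfers n disks from src to dst,
    using aux as the spare peg."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest disk, then restack.
    return (hanoi(n - 1, src, aux, dst)
            + [(src, dst)]
            + hanoi(n - 1, aux, dst, src))

moves = hanoi(3, "A", "C", "B")
print(len(moves))  # 7, i.e. 2**3 - 1 moves
```

Once internalized, the whole solution is just "move n-1 aside, move the big one, restack" — exactly the kind of compressed lesson the comment argues LLMs don't carry forward between sessions.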

I suppose the other side of it is that if you add what the model has figured out to the training set, it will always know it.

dotancohen 8 hours ago | parent | prev | next [-]

We call that Standing On The Shoulders Of Giants and revere Isaac Newton as clever, even though he himself stated that he was standing on the shoulders of giants.

nkozyra 8 hours ago | parent | prev | next [-]

Clever/novel ideas are very often subtle deviations from known, existing work.

Sometimes just having the time/compute to explore the available space with known knowledge is enough to produce something unique.

salomonk_mur 8 hours ago | parent | prev | next [-]

There is no such thing. All new ideas are derived from previous experiences and concepts.

Madmallard 6 hours ago | parent [-]

The difference people neglect to point out is the experiences we have versus the experiences the AI has.

We have at least 5 senses, our thoughts, feelings, hormonal fluctuations, sleep and continuous analog exposure to all of these things 24/7. It's vastly different from how inputs are fed into an LLM.

On top of that we have millions of years of evolution toward processing this vast array of analog inputs.

XCSme 2 hours ago | parent [-]

So, just connect LLMs to lava lamps?

Jokes aside, imagine you give LLMs access to real-time, world-wide satellite imagery and just tell them to discover new patterns/phenomena and correlations in the world.

glalonde 7 hours ago | parent | prev | next [-]

"extrapolation" literally implies outside the extents of current knowledge.

XCSme 2 hours ago | parent [-]

Yes, but not necessarily new knowledge.

It means extending/expanding something, but the information is based on the current data.

In computer games, extrapolation is finding the future position of an object based on the current position, velocity and desired time. We do have some "new" position, but the system entropy/information is the same.

Or if we have a line, we can expand infinitely and get new points, but this information was already there in the y = m * x + b line formula.
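Both examples boil down to the same thing, a toy sketch (hypothetical function names) of which is:

```python
def extrapolate_position(pos, vel, dt):
    """Predict a future 1-D position from current position and velocity.
    The output is fully determined by the inputs: no new information
    enters the system, it is only unfolded from what was already there."""
    return pos + vel * dt

def line_point(m, b, x):
    """Any point on y = m * x + b: infinitely many 'new' points,
    all encoded by just the two parameters (m, b)."""
    return m * x + b

print(extrapolate_position(10.0, 2.0, 3.0))  # 16.0
print(line_point(2.0, 1.0, 5.0))             # 11.0
```

Infinitely many outputs, but the description length stays constant — which is the sense in which extrapolation adds no information.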

aoeusnth1 6 hours ago | parent | prev [-]

How would you know if it wasn't an extrapolation of current knowledge? Can you point me to something humans have done that isn't an extrapolation?

XCSme 2 hours ago | parent [-]

That was my point: "I am not necessarily saying humans do something different".