thw_9a83c 18 hours ago

From many perspectives, the creativity of AI is hugely overrated. If AI were truly capable of creating original, innovative content, then asking the same question over and over again would produce an endless stream of genuinely distinct outputs. But this is not the case; quite often it's shockingly the opposite. Just give an AI image generator the same prompt repeatedly and observe how little the output actually varies. The same goes for LLMs and coding questions (where it isn't necessarily a disadvantage per se, but it proves the point).
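A quick way to test this yourself is to sample the same prompt several times and measure how similar the answers are. A minimal sketch, assuming the OpenAI Python client with an API key in the environment; the model name, prompt, and similarity measure are just illustrative:

    from itertools import combinations
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    prompt = "Invent a completely new board game and describe it in two sentences."

    # Sample the same prompt several times with default settings.
    answers = []
    for _ in range(5):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        answers.append(resp.choices[0].message.content)

    # Crude diversity check: pairwise Jaccard overlap of word sets.
    # High overlap across samples suggests the outputs are not very distinct.
    def jaccard(a, b):
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb)

    for (i, a), (j, b) in combinations(enumerate(answers), 2):
        print(f"answers {i} vs {j}: overlap {jaccard(a, b):.2f}")

The overlap score is only a rough lexical proxy for "sameness", but it turns the claim into something measurable rather than anecdotal.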

Antibabelic 16 hours ago | parent | next [-]

I'm not one to usually defend AI, but if I understand you correctly, humans also fail your criterion for being capable of creating original, innovative content. If you ask people the same question over and over again, I imagine the variability in the responses you'll get will be quite limited. Tell me if I'm misunderstanding your point.

markild 16 hours ago | parent | next [-]

While I do think that's true, I'd say a more apt analogy is that each human "model" will produce fairly similar results for a given prompt, but it helps having 8 billion different models running.

I'd also argue that we tend to have a larger context. What did you have for dinner? Did you see anything new yesterday? Are you tired of getting asked the same question over and over again?

thw_9a83c 15 hours ago | parent [-]

> each human "model" will produce fairly similar results for a given prompt, but it helps having 8 billion different models running

Yes, that was my point. We don't have 8 billion AI models. Furthermore, existing models are also trained on heavily overlapping data. The collective creativity and inventiveness of humans far exceeds what AI can currently do for us.

throwbway37383 16 hours ago | parent | prev [-]

You say you don't usually defend LLMs, and then give a defense of LLMs based on a giant misreading of what is absolutely standard human behaviour.

In my local library recently, they had two boards in the lobby as you entered: one with all the drawings created by one class of ~7-year-olds based on some book they'd read, and a second with the same idea but from the next class up, on some other book. Both classes had apparently been asked to do a drawing that illustrated something they liked or thought about the book.

It was absolutely hilarious, and wild, and there were some genuinely exquisite ones. Some had writing, some didn't. Some had crazy, absolutely nonsensical twists and turns in the writing; others had more crazy art stuff going on. There were a few tropes that repeated in some of the lazier ones, but even those weren't all the same thing, the way LLM output consistently is, with few exceptions, if any.

And then a good number of the kids' drawings were shockingly inventive; you'd be scratching your head going, geez, how did they come up with that? My partner and I stayed for 10 minutes, kept noticing some new detail in yet another of them, and kept being amazed.

So the reality is the upside-down version of what you're saying.

I recognise that this is just an anecdote on the internet, but surely you know this to be true; variants on the experiment are run in classrooms around the world every day. So may I insist that the work produced by children, at least, does not fit your odd view of human beings.

Antibabelic 14 hours ago | parent [-]

LLMs and image-generation models will also give wildly variable output when you give them an open-ended prompt and increase the temperature. However, we usually want high coherence and relevance, both from human and synthetic responses.
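To make "temperature" concrete: sampling temperature rescales the model's scores before they are turned into probabilities. A toy sketch in plain Python, with made-up numbers rather than any particular model's real scores:

    import math

    def softmax_with_temperature(logits, temperature):
        # Convert raw scores (logits) into sampling probabilities.
        # Low temperature sharpens the distribution (near-deterministic output);
        # high temperature flattens it (more varied, less coherent output).
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    # Illustrative logits for four candidate next tokens.
    logits = [4.0, 3.0, 1.0, 0.5]

    for t in (0.2, 1.0, 2.0):
        probs = softmax_with_temperature(logits, t)
        print("T =", t, [round(p, 2) for p in probs])

At T = 0.2 nearly all the probability mass sits on the top token, so repeated runs look almost identical; at T = 2.0 the distribution is much flatter, which is where the "crazy variable output" comes from.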

whiplash451 17 hours ago | parent | prev | next [-]

It's even worse than that. If you ask recent AIs the same question over and over again, you might get different answers (with some degree of diversity).

But none of them is novel to humankind. It's novel to you, but not to our species.

AI is nailing us to the manifold that we created in the first place.

dpe82 17 hours ago | parent [-]

Is that really a problem though? Almost nobody does anything "novel to humankind" - besides the odd research professor here and there, we're all just remixing existing stuff in new-to-us ways.

Antibabelic 16 hours ago | parent | next [-]

There's a deliberateness to human creativity that goes beyond simply "remixing existing stuff", even if it's a significant part of it. Think about how you'd write a piece of software. The process behind writing a book or making a painting isn't fundamentally dissimilar. There's a reason why people use the word "derivative" pejoratively.

whiplash451 16 hours ago | parent | prev [-]

The "odd research professor here and there" invented vaccine and quantum mechanics and discovered radioactivity.

None of them would have achieved that with the help of a machine telling them "you're absolutely right!" whenever they'd be asking deep questions to it.

TeMPOraL 15 hours ago | parent [-]

Where "invented" really means "had the right set of skills, knowledge and experience, and was paying attention at the exact right moment when all the pieces of the puzzle were collected together on the table".

Scientific and technological progress is inherently incremental. It takes a lot of hard work, dedication and specialization to spot the pieces ready to be connected - but the final act of putting them together is relatively simple, and most importantly, it requires all the pieces of the puzzle to be there.

Which is why, historically, ~all scientific discoveries have been made by multiple researchers (or teams) independently, at roughly the same time - until all prerequisites are met, the next step is ~impossible, but the moment they are, it becomes almost obvious to those in the know.

Antibabelic 14 hours ago | parent [-]

> Which is why, historically, ~all scientific discoveries have been made by multiple researchers (or teams) independently, at roughly the same time

This is quite a big claim. All of them? I know there are many discoveries that fit the pattern you're pointing out, but I wouldn't go as far as to say all, or even the majority of them do.

ako 17 hours ago | parent | prev [-]

So basically you’re saying that LLMs are rather deterministic?