simonh 4 days ago

It’s often been assumed that accuracy and ‘correctness’ would be easy to implement on computers because they operate on logic, in some sense. It’s originality and creativity that would be hard, or impossible because it’s not logical. Science Fiction has been full of such assumptions. Yet here we are, the actual problem is inventing new heavy enough training sticks to beat our AIs out of constantly making stuff up and lying about it.

I suppose we shouldn’t be surprised in hindsight. We trained them on human communicative behaviour after all. Maybe using Reddit as a source wasn’t the smartest move. Reddit in, Reddit out.

smallmancontrov 4 days ago | parent | next [-]

Pre-training gets you GPT-3, not InstructGPT/ChatGPT. During fine-tuning OpenAI (and everyone else) specifically chose to "beat in" a heavy bias-to-action because a model that just answers everything with "it depends" and "needs more info" is even more useless than a model that turns every prompt into a creative writing exercise. Striking a balance is simply a hard problem -- and one that many humans have not mastered for themselves.

root_axis 4 days ago | parent | prev | next [-]

> It’s often been assumed that accuracy and ‘correctness’ would be easy to implement on computers because they operate on logic, in some sense. It’s originality and creativity that would be hard

More fundamental than the training data is the fact that the generative outputs are statistical, not logical. This is why they can produce a sequence of logical steps but still come to incorrect or contradictory conclusions. It is also why they handle creativity more easily: the acceptable boundaries of creative output are less rigid. A photorealistic video of someone sawing a cloud in half can still be entertaining art despite the logical inconsistencies in the idea.
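A minimal sketch of that point, using toy probabilities that are purely illustrative (not taken from any real model): a decoder samples the next token from a distribution, so a logically wrong answer is merely improbable, never forbidden.

```python
import random

# Hypothetical next-token distribution for the prompt "2 + 2 =".
# The numbers are made up for illustration only.
next_token_probs = {"4": 0.90, "5": 0.06, "22": 0.04}

def sample_next_token(probs, rng):
    """Sample one token from a probability distribution (what sampling-based decoding does)."""
    r = rng.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fallback for floating-point rounding at the boundary

rng = random.Random(0)
samples = [sample_next_token(next_token_probs, rng) for _ in range(1000)]

# Most draws are "4", but over enough draws some wrong answers appear:
# nothing in the mechanism checks arithmetic, only probability mass.
print(samples.count("4"), samples.count("5"), samples.count("22"))
```

Real decoders add temperature, top-k, and similar filters on top of this, but the core step is the same weighted draw.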

HankStallone 4 days ago | parent | prev | next [-]

The worst news I've seen about AI was a study that said the major ones get 40% of their references from Reddit (I don't know how they determined that). That explains the cloying way it tries to be friendly and supportive, too.

sandbags 4 days ago | parent [-]

I saw someone reference this today, and the question I had was whether this counted the trillions of words accrued from books and other sources. i.e., is it 40% of all the training data, or 40% of the references they can find a direct attribution link for?

dragonwriter 4 days ago | parent | prev [-]

> It’s often been assumed that accuracy and ‘correctness’ would be easy to implement on computers because they operate on logic, in some sense. It’s originality and creativity that would be hard, or impossible because it’s not logical.

It is easy, comparatively. Accuracy and correctness are what computers have been doing for decades, except when people have deliberately compromised them for performance or other priorities (or used underlying tools where someone else had done that, perhaps unwittingly.)

> Yet here we are, the actual problem is inventing new heavy enough training sticks to beat our AIs out of constantly making stuff up and lying about it.

LLMs and related AI technologies are very much an instance of an extreme, deliberate compromise of accuracy, correctness, and controllability in exchange for useful performance in areas where we have no idea how to analytically model the expected behavior, but do have lots of more or less accurate examples.
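A toy illustration of that trade, with entirely hypothetical data: when no analytic model of the task exists, we approximate it from labeled examples, and the result is only ever statistically right. A 1-nearest-neighbor sketch makes the shape of the compromise visible.

```python
# Hypothetical task: label points as "in" or "out" of a region we have
# no formula for. All we have are labeled examples, so we generalize
# from them and accept that some answers will be wrong.
examples = [((0.0, 0.0), "in"), ((1.0, 1.0), "in"),
            ((5.0, 5.0), "out"), ((6.0, 4.0), "out")]

def predict(point):
    """1-nearest-neighbor: copy the label of the closest known example."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(examples, key=lambda ex: dist2(ex[0], point))[1]

print(predict((0.5, 0.5)))   # near the "in" examples
print(predict((5.5, 4.5)))   # near the "out" examples
```

An LLM is this idea at vastly larger scale: no specification of "correct text" exists, so behavior is induced from examples, with correctness traded away wherever the examples and the interpolation between them fall short.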