jondwillis 2 days ago

Workshopping this tortured metaphor:

AI, at the limit, is a vampiric technology: it sucks the differentiated economic value out of the very people whose work trains it. What happens when there are no more hosts left to donate training-blood? This, to me, is a big problem, because a model will tend to drift from reality without fresh training-blood.

The owners of the tech need to reinvest in the hosts.

hephaes7us 2 days ago | parent | next [-]

Realistically, at a certain point training would likely involve direct interaction with reality (via sensors and actuators) rather than relying on secondhand knowledge available in textual form.

kfarr 2 days ago | parent [-]

Yeah, I feel like the real aha moment is still coming: a GPT-like thing that has been trained on reality itself, not its shadow.

chongli 2 days ago | parent | next [-]

Yes, and reality is the hard part. Moravec's Paradox [1] continues to ring true. A billion years of evolution went into training us to cope with the complexity of reality; our language is a blink of an eye compared to that.

[1] https://en.wikipedia.org/wiki/Moravec's_paradox

baq 2 days ago | parent | prev | next [-]

Reality cannot be perceived. A crisp shadow is all you can hope for.

The problem, for me, is what the point of the economy is in the limit where robots are better, faster, and cheaper than any human at any job. If the robots don't decide we're worth keeping around, we might end up worse off than horses.

agos a day ago | parent [-]

but that crisp shadow is exactly what we call perception

qsera 2 days ago | parent | prev [-]

Look, I think that's the whole difficulty. In reality, doing the wrong thing results in pain and the right thing in relief or pleasure, and a living thing learns from that.

But machines can experience neither pain nor pleasure.
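
The closest machine analogue we have is a scalar reward signal standing in for pain and pleasure. Here's a minimal tabular Q-learning sketch along those lines; the two-action environment and its rewards are entirely made up for illustration, and whether a number going negative counts as "experiencing pain" is exactly the question:

```python
# Toy sketch: a scalar reward stands in for "pain" (negative) and
# "pleasure" (positive). The environment below is hypothetical.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
ACTIONS = ["left", "right"]

q = defaultdict(float)                   # q[(state, action)] -> expected return

def step(state, action):
    """Hypothetical world: 'right' hurts (-1), 'left' is rewarded (+1)."""
    reward = 1.0 if action == "left" else -1.0
    return state, reward                 # single-state toy world

def act(state):
    if random.random() < EPSILON:        # occasionally explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

state = 0
for _ in range(1000):
    action = act(state)
    next_state, reward = step(state, action)
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    # negative reward ("pain") pushes this action's value down,
    # positive reward ("pleasure") pushes it up
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
    state = next_state

print(dict(q))  # the learner ends up strongly preferring "left"
```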

visarga 2 days ago | parent | prev | next [-]

> What happens when there are no more hosts to donate more training-blood?

LLMs have over 1B users and exchange over 1T tokens with us per day. We put them through every conceivable task, supply what they need to complete those tasks, and push back when the model veers off. We test LLM ideas against reality (experiment following hypothesis) and use that information to iterate. These logs are gold for training on how to apply AI in the real world.
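
To make that concrete, here's a minimal sketch of what mining those logs could look like: scan conversations for user pushback and pair the original request with the post-correction answer. The Turn structure, the field names, and the pushback heuristic are all hypothetical, not any real pipeline:

```python
# Hypothetical sketch of mining interaction logs for training pairs.
# The Turn structure, markers, and heuristic are illustrative only.
from dataclasses import dataclass

@dataclass
class Turn:
    role: str   # "user" or "assistant"
    text: str

PUSHBACK_MARKERS = ("that's wrong", "no,", "actually", "didn't work")

def is_pushback(turn: Turn) -> bool:
    """Crude heuristic: the user correcting or rejecting the last answer."""
    return turn.role == "user" and any(m in turn.text.lower() for m in PUSHBACK_MARKERS)

def mine_training_pairs(conversation: list[Turn]) -> list[tuple[str, str]]:
    """Pair the original request with the post-correction answer,
    i.e. the cases where reality pushed back and the model recovered."""
    pairs = []
    for i, turn in enumerate(conversation):
        if is_pushback(turn) and i + 1 < len(conversation):
            revised = conversation[i + 1]
            if revised.role == "assistant" and i >= 2:
                pairs.append((conversation[i - 2].text, revised.text))
    return pairs

convo = [
    Turn("user", "How do I sort a dict by value in Python?"),
    Turn("assistant", "Use dict.sort()."),
    Turn("user", "No, dicts have no .sort() method."),
    Turn("assistant", "Right: use sorted(d.items(), key=lambda kv: kv[1])."),
]
print(mine_training_pairs(convo))
```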

scotty79 2 days ago | parent | prev [-]

There's only so much you can learn from humans. AI didn't get superhuman at Go by financing more good human Go players; it got there by playing against itself, eventually discarding human game knowledge entirely.
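
The core loop is simple enough to sketch. Below is a toy self-play learner for tic-tac-toe that updates a tabular value function from final game outcomes; it is nothing like the real AlphaGo Zero pipeline (which pairs MCTS with a neural network), but it shows the essential point: the agent improves without a single human game in the data.

```python
# Toy self-play learner for tic-tac-toe: tabular values updated from
# final game outcomes, trained on zero human games.
import random
from collections import defaultdict

Q = defaultdict(float)      # Q[(board, move)] -> value for the player to move
ALPHA, EPSILON = 0.3, 0.2   # learning rate, exploration rate

def winner(b):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != "." and b[i] == b[j] == b[k]:
            return b[i]
    return "draw" if "." not in b else None

def choose(board, moves):
    if random.random() < EPSILON:        # explore
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])

for _ in range(50_000):                  # self-play training games
    board, player, history = "." * 9, "X", []
    while True:
        moves = [i for i, c in enumerate(board) if c == "."]
        move = choose(board, moves)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        result = winner(board)
        if result:
            # credit every move with the final outcome for its player
            for b, m, p in history:
                r = 0.0 if result == "draw" else (1.0 if result == p else -1.0)
                Q[(b, m)] += ALPHA * (r - Q[(b, m)])
            break
        player = "O" if player == "X" else "X"
```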