pessimizer 18 hours ago

There's a ground truth to human cognition: we have to feed ourselves and survive. We have to interact with others, reap the results of those interactions, and adjust for the next time. That requires validation layers. If you don't see them, it's because they're so intrinsic to you that they're invisible.

You're just indulging in a sort of idle, cynical judgement of people. Even lying well takes a careful, truthful evaluation of the lie's possible effects and of the likelihood and consequences of being caught. If you yourself claim to have observed a lie, and can verify that it was a lie, then you understand a truth; you're confounding truthfulness with honesty.

So that's the (obvious) distinction. A distributed algorithm that predicts likely strings of words does none of that, and it has no concerns or consequences. It doesn't exist at all after your query has run (unless calculation itself is existence - maybe we're all reductively just calculators, right?). You have to save the context and feed it back into an algorithm that hasn't changed an iota since the last time you ran it. There's no capacity to evaluate anything.
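To make that concrete, here's a minimal Python sketch of what stateless inference looks like; FrozenModel and its generate method are hypothetical stand-ins for illustration, not any particular library's API. The only continuity between turns is the transcript you replay into the model; the model's own parameters never change between calls.

    class FrozenModel:
        """Stand-in for an LLM: its parameters are fixed before serving,
        and nothing during inference ever writes to them."""
        def generate(self, transcript):
            # A real model would predict next tokens from the transcript;
            # the point is that this call only reads state, never updates it.
            return f"(reply conditioned on {len(transcript)} prior turns)"

    history = []  # the only "memory" lives here, outside the model

    def ask(model, user_message):
        history.append({"role": "user", "content": user_message})
        reply = model.generate(history)  # identical weights on every call
        history.append({"role": "assistant", "content": reply})
        return reply

    model = FrozenModel()
    print(ask(model, "Hello"))
    print(ask(model, "Do you remember me?"))  # only because we replayed history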

You'll know we're getting closer to the fantasy abstract AI of your imagination when a system gets more out of the second time it trains on the same book than it did the first time.