therein 5 days ago

Neither are even close to AGI. Here is something they can't do and won't be able to do for a very long time:

If you're running inference in English and ask it a question, it will never be able to pull from the knowledge it has ingested in another language. Humans are able to do this without relying on a neurotic inner voice spinning in circles doing manual translations.

This should be enough to arrive at the conclusion that there are no real insights in the model. It has no model of the world.

Jensson 4 days ago | parent [-]

> If you're running inference in English and ask it a question, it will never be able to pull from the knowledge it has ingested in another language. Humans are able to do this without relying on a neurotic inner voice spinning in circles doing manual translations.

This is not true. This is one of the biggest strengths of LLMs: they are very language agnostic, since they can parse input down to more general concepts. There are many things they are bad at, but using knowledge from other languages is not one of them.

therein 4 days ago | parent [-]

It is true, and LLMs do no such thing. You are getting that impression not because they are language agnostic across training and inference, but because multi-language text is thrown at them during training. Try asking one about nanomaterials in the Chewa language.