threethirtytwo, a day ago:
I didn’t assume AGI. I assumed extreme performance from a general AI that matches and exceeds average human intelligence when placed in an F-16, or in an equivalent “cockpit” specified for conducting math proofs. That’s not AGI at all.

I don’t think you understand that LLMs will never hit AGI even when they exceed human intelligence in all applicable domains. The main reason is that they don’t feel emotions. Even if the definition of AGI doesn’t currently encompass emotions, people like you will move the goalposts and shift the definition until it does. So as AI improves, the threshold will keep being raised to ensure it never counts as AGI, because it’s an existential and identity crisis for many people to admit that an AI is better than them on all counts.
mkl, 15 hours ago (parent):
> I didn’t assume AGI.

You literally said:

>> What happens when we put an artificial general intelligence in an F-16? That’s what happened here with this proof.

You’re claiming I said a lot of things I didn’t; everything you seem to be stating about me in this comment is false.