ivraatiems 3 days ago

I think I am coming to agree with the opinions of the author, at least as far as LLMs not being the key to AGI on their own. The sheer impressiveness of what they can do, and the fact they can do it with natural language, made it feel like we were incredibly close for a time. As we adjust to the technology, it starts to feel like we're further away again.

But I still see all the same debates around AGI - how do we define it? what components would it require? could we get there by scaling or do we have to do more? and so on.

I don't see anyone addressing the most truly fundamental question: Why would we want AGI? What need can it fulfill that humans, as generally intelligent creatures, do not already fulfill? And is that moral, or not? Is creating something like this moral?

We are so far down the "asking if we could but not if we should" railroad that it's dazzling to me, and I think we ought to pull back.

jamilton 3 days ago

The dream, as I see it, is that AGI could (1) automate research and engineering, becoming self-improving and advancing technology faster and further than would happen without it, improving quality of life; and (2) take on a significant share of the labor people currently do, especially physical labor via robotics. The second would be significant enough in scale to reduce the amount of labor people need to do on average without lowering quality of life. The political and economic details of that are typically handwaved.

The morality of it depends on the details.

android521 3 days ago

Because if people could do it, they would do it. And if your country decides you should not do it, you could be left behind. That possibility prevents any country that is capable of it from abstaining, unless it is willing to go to war with other countries to enforce compliance (and even then they would still do it secretly). So "should" is an irrelevant question.