jmward01 4 hours ago
The more intelligent something is, the harder it is to control. Are we at AGI yet? No. Are we getting closer? Yes. Every inch closer means we have less control. We need to start thinking about these things less like bounded function calls and more like intelligences we collaborate with. How would you set up an office to get things done? Who would you hire? Would you hire the person spouting crazy Musk tweets as reality? It seems odd to say, but are we getting close to the point where we need to interview an AI before deciding to use it?
|
bigfishrunning 4 hours ago
| Are we at AGI yet? No. Are we getting closer? Also no. |

birdsongs 44 minutes ago
Neither of you knows the answer to this in any scientific or statistical sense, and I wish people would stop being so confident about it. If I'm wrong, please give any kind of citation. You can start by defining what human intelligence and sentience are.
jmward01 35 minutes ago
My argument is that we are getting closer, not that we know exactly what AGI will be. Pinning down the definition is clearly part of the problem, right? If we had some boolean test for AGI, I suspect we would already be there; figuring it out is a big part of getting there. I think my points still stand. We aren't there yet, but it is hard to deny that these things are growing in complexity and capability. On a spectrum from rock to human-level intelligence, they are moving closer to human and further from rock every day.
|
|