fragmede 5 hours ago:

Why not?
apsurd 5 hours ago (parent):

The most concise answer right now is that AI has no "will." LLMs are objectively smarter than any one person, so by some definition we've already created super-intelligence. The problem is they just sit there. They already have all the answers, if you think about it: whenever we ask something, we get the answer; it's amazing, and we can even say they synthesize new information. We can grant all of those claims. But what do they do with that super-intelligence? Nothing. They can't. They don't have will. Or interest. Curiosity? Biological imperative? Who knows.

So we build loops and introspection and set them free. Does giving an AI a goal make it conscious? That seems plainly silly if you ask me.

(I'm trying hard not to turn this into philosophy. I really like the philosophy aspect, but this is my 30-second answer to the question.)