apsurd 4 hours ago

The most concise answer as of now is because AI has no "will".

LLMs are, by some definitions, smarter than any one person, so in some sense we've already created super-intelligence. The problem is that they just sit there. They have all the answers already, if you think about it. Whenever we ask something, we get an answer; it's amazing, and we can even say it synthesizes new information. We can grant all of those claims.

But what does it do with that super-intelligence? Nothing. It can't; it doesn't have will. Or interest. Or curiosity. Or a biological imperative. Who knows.

So we create loops and introspection and set them free. Does giving an AI a goal make it conscious? That seems plainly silly to me.
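(The "loops and introspection" pattern above can be sketched in a few lines. This is a hypothetical illustration, not any particular product: `fake_model` is a stand-in for a real LLM API call, and the point is that the loop, not the model, is what supplies the "will" — the model only ever answers the prompt it's handed.)

```python
def fake_model(prompt: str) -> str:
    # Stand-in (hypothetical) for an LLM API call; deterministic here
    # so the sketch is self-contained and runnable.
    return f"step after: {prompt.splitlines()[-1]}"

def agent_loop(goal: str, steps: int = 3) -> list[str]:
    """Repeatedly feed the goal plus prior outputs back to the model."""
    history = [goal]
    for _ in range(steps):
        reply = fake_model("\n".join(history))
        history.append(reply)  # "introspection": output becomes input
    return history[1:]

print(agent_loop("make money"))
```

The external scheduler decides when to run, what the goal is, and what to do with the output; the model is a pure prompt-to-text function inside it.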

(I'm trying really hard not to turn this into philosophy. I really like the philosophical aspect, but this is my 30-second answer to the question.)

fragmede 2 hours ago | parent [-]

The singularity won't happen because sticking a cron job in front of an LLM and telling it to do something (make money) is "silly"?

I am no philosopher but https://poc.bcachefs.org/ seems conscious.

apsurd 2 hours ago | parent [-]

It's not.

It's no more conscious than running that cron job to send you today's weather. That's as far as I understand what that link is: the agent is posting blog updates and such because it was told to. It has no will. LLM generative output is incredible; it's also not conscious.