rvz 8 hours ago

The problem here is that there is no viable plan for what happens when AI eventually replaces (yes, replaces) tens of millions of humans in white-collar roles.

All we are being "promised" are vague claims of "abundance". But all I see is this:

"AGI" is going to bring abundance of lots of very angry people and UBI to no-one (because it can never work at a large sustainable scale).

Some people are starting to realise that "AGI" was a grift and a scam, and they are not happy about the lie. The insiders knew it too, which is why they increased spending on security and private bodyguards.

operatingthetan 8 hours ago | parent | next [-]

I don't think LLMs will produce AGI, just based on how context windows work, the prompt cycle, etc. LLMs aren't out there thinking about stuff in their spare time. The way they appear to have thoughts and a psyche is purely an illusion.

fooqux 8 hours ago | parent | next [-]

Something I often think about is how we can barely define what AGI, consciousness, etc. are. We may be pretty sure that what we have currently is an illusion, but at what point is the illusion good enough that it no longer matters? Especially with regard to my first question.

It's hard to say it's not X when we can't really define X.

ethanrutherford 7 hours ago | parent | next [-]

I would personally argue that it's a lot easier to say something definitely isn't X, with confidence, than to say it definitely is. I definitely don't know what the surface of Jupiter looks like, but I can pretty confidently say it doesn't look like Kansas. I think the better it gets, the easier it will be to spot the shortcomings, because the gap between what it can do well and what it can't will widen. Anything the technology is fundamentally incapable of ever achieving will be made obvious by the fact that it will simply continue to not achieve it. We may not be able to easily define the totality of what exactly it needs to have to count as AGI, but the further it progresses, the easier it will be to point out individual things it's definitely missing.

operatingthetan 7 hours ago | parent | prev [-]

I'm not saying we can't build it, but what we have right now certainly is not it. Right now context is just a bunch of text. Surely the human mind's context resembles something more like a graph database. What if we could use a database for context?
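
Rough sketch of the kind of thing I mean (pure toy code; the names, data, and structure are invented just to illustrate the idea, not anything a real product does): instead of replaying a flat transcript every turn, you store facts as nodes with edges and pull only the relevant neighbourhood into the prompt.

    from collections import defaultdict

    class GraphContext:
        def __init__(self):
            self.facts = {}                # node_id -> text of a remembered fact
            self.edges = defaultdict(set)  # node_id -> ids of related facts

        def add(self, node_id, text, related=()):
            self.facts[node_id] = text
            for other in related:
                self.edges[node_id].add(other)
                self.edges[other].add(node_id)

        def retrieve(self, query, hops=1):
            # Naive keyword match, then widen the hit set by following edges.
            hits = {nid for nid, text in self.facts.items()
                    if any(word in text.lower() for word in query.lower().split())}
            for _ in range(hops):
                hits |= {n for nid in list(hits) for n in self.edges[nid]}
            return [self.facts[nid] for nid in sorted(hits)]

    ctx = GraphContext()
    ctx.add("auth", "The auth service signs JWTs with the staging key.")
    ctx.add("deploy", "Deploys rotate the staging key weekly.", related=["auth"])

    # Only the relevant neighbourhood gets prepended to the prompt,
    # instead of the whole conversation history.
    print("\n".join(ctx.retrieve("JWT signing")))

Obviously a real version would use embeddings and a proper graph store rather than substring matching, but the point is that retrieval follows relationships instead of stuffing everything into one window.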

andsoitis 7 hours ago | parent | prev | next [-]

> LLMs aren't out there thinking about stuff in their spare time.

Agentic workflows change the calculus.

operatingthetan 7 hours ago | parent [-]

Explain how? Even if you use crons or heartbeats to reactivate the model, it is still dependent on a context window that is quite small. With frontier models I still have to remind them how stuff works, point out things they forgot or focused on wrongly, etc.

Also every AI company is motivated to have us use their models _just enough_ to want to pay for them, but not more than that.

booleandilemma 8 hours ago | parent | prev [-]

It doesn't have to produce AGI to still ruin the lives of millions of people. Our society isn't ready for that kind of shock. We can't all be Instagram influencers.

ericd an hour ago | parent [-]

It’s still pretty hard to get a contractor to show up and do a good job for less than a king’s ransom. Not that everyone can be a contractor, but there are lots of industries where there aren’t enough workers.

We also need to rebuild our manufacturing supply chains, and there’s a huge amount to be done there.
