cmrdporcupine 4 hours ago

We all made fun of Blake Lemoine and others for spending too many late nights up chatting with (ridiculously primitive by this year's standards) LLM chat bots and deciding they were sentient and trapped.

But frankly I feel like the founders of Anthropic and others are victims of the same hallucination.

LLMs are amazing tools. They play back & generate what we prompt them to play back, and more.

Anybody who mistakes this for SkyNet -- an independent consciousness with instant, permanent learning, adaptation, and self-awareness -- is huffing the fumes, and is just as delusional as Lemoine was 4 years ago.

Every one of us should spend some time writing an agentic tool and managing the context and the agentic conversation loop. These things are still primitive as hell. I still have to "compact my context" every N tokens, and "thinking" is just replaying the same conversational chain over and over and jamming words in.
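To make the point concrete, here's roughly what that loop looks like. This is a minimal sketch, not any vendor's actual API: `call_model` is a stub standing in for a real LLM completion call, and the token counting and "compaction" are deliberately crude.

```python
MAX_TOKENS = 50  # toy context budget, counted in words for simplicity


def call_model(messages):
    # Stand-in for a real LLM API call; just echoes the message count.
    return f"ack:{len(messages)}"


def approx_tokens(messages):
    # Crude token estimate: whitespace-split word count.
    return sum(len(m["content"].split()) for m in messages)


def compact(messages):
    # "Compact my context": replace the transcript with a short summary
    # message. A real agent would ask the model to summarize; here we
    # just truncate to keep the sketch self-contained.
    summary = " ".join(m["content"] for m in messages)[:40]
    return [{"role": "system", "content": f"summary: {summary}"}]


def agent_loop(user_turns):
    messages = []
    replies = []
    for turn in user_turns:
        messages.append({"role": "user", "content": turn})
        if approx_tokens(messages) > MAX_TOKENS:
            messages = compact(messages)  # lossy, every N tokens
        reply = call_model(messages)
        messages.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return replies


print(agent_loop(["hello there", "tell me more"]))
```

That's the whole trick: a list of strings, a word count, and a lossy summarize-and-restart step. Nothing in that loop learns or persists on its own.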

Turns out this is useful stuff. In some domains.

It ain't SkyNet.

I don't know if Anthropic is truly high on their own supply or just taking us all for fools so that they can pilfer investor money and push regulatory capture?

There's also a bad trait among engineers, deeply reinforced by survivorship bias: assuming that every technological trend follows Moore's law and exponential growth. But that applied to transistors, not everything.

I see no evidence that LLMs + exponential growth in parameters + context windows = SkyNet or any other kind of independent consciousness.

overgard 2 hours ago | parent | next [-]

I think playing with the APIs is something I'd encourage people excited about these technologies to do. It'll lead to the "magic" wearing off, but also to more appreciation for what they can actually accomplish.

austinjp 3 hours ago | parent | prev [-]

I always feel this argument misses a point. SkyNet may still be a long way off, but autonomous killer drones are here. That is a bad situation my dudes.

Every step on the journey towards SkyNet is worse than the preceding step. Let's not split hairs about which step we're on: it's getting worse, and we should stop that.

overgard 2 hours ago | parent | next [-]

Using LLMs for weapons is a grave misunderstanding of what LLMs are actually good for. These are things that should NEVER be in charge of life or death decisions.

cmrdporcupine an hour ago | parent | prev [-]

My point is that Anthropic's "safety" and "gatekeeper" persona is bullshit, because they're warning us about exactly the wrong things.

They'll ink deals with all sorts of nefarious parties and get involved in all sorts of dubious things, while trumpeting their fake non-profit status and wringing their hands about imminent AGI and the "alignment" of the AIs they create.

The concern I have is not the alignment of the AIs. They're not capable of having one, no matter what role playing window dressing they put on it.

It's the alignment of Anthropic and the people who use their tools that is a concern. So far it seems f*cked.