operatingthetan 14 hours ago

I think you may be going too far, in that your critiques assume the tech is further along than it actually is. There are three fundamental problems for mass AI adoption/AGI:

1. Lack of memory/continuity

2. Lack of agency

3. Lack of self-awareness

Based on my understanding of the basic 'loop' of an LLM, solutions for these may be decades off or not possible. Which leads me to the fourth problem:

4. Lack of compute

To get anywhere near AGI we need massive context windows. The whole thing is a mess.

neonstatic 13 hours ago | parent | next [-]

I think people really confuse their imagination and expectations with reality. There's so much talk about AGI and mass layoffs. Then there is my experience.

I was talking to Claude and ChatGPT, trying to fix an issue with a simple function in Rust that returns a boolean depending on the day of week and time of day. The logic looked OK to me, but tests were failing. Notably, my tests derived from real-world data were succeeding, while the brute-force/comprehensive tests written by Claude were failing. I wanted those "just to be sure". Both Claude and ChatGPT were spinning their wheels, introducing fixes, then undoing prior fixes, and so on. They also updated the tests. We went from one failure to another, while they confidently reassured me that "this is the fix", that they had found the "crucial bug", etc.

Turned out my logic was correct from the beginning. My tests were correct. Claude's tests were broken. I realized this by writing my own brute-force test: just a simple loop with asserts and printlns to see what was failing. I did what the machine was supposed to do for me. In less than 5 minutes I fine-tuned the test to actually check what it was supposed to be checking, and voila. The "fast" thinking-machine episode took me 2 hours and only produced frustration. Sorry, I should learn to speak the language - AI reduced my development velocity :)
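The thread doesn't show the actual function, but the brute-force approach described above can be sketched like this. Assumptions: a hypothetical `is_open(weekday, hour)` with made-up business-hours rules standing in for the real day-of-week/time-of-day logic, checked against an independently written expected value for every combination:

```rust
// Hypothetical stand-in for the function under test: "open" on
// weekdays (0 = Monday .. 6 = Sunday) between 09:00 and 17:00.
// The real function's rules are not shown in the thread.
fn is_open(weekday: u8, hour: u8) -> bool {
    weekday < 5 && (9..17).contains(&hour)
}

fn main() {
    // Brute force: iterate every (weekday, hour) combination and
    // compare against an independently written oracle, printing each
    // failing case rather than trusting one opaque assertion.
    for weekday in 0u8..7 {
        for hour in 0u8..24 {
            let expected = weekday <= 4 && hour >= 9 && hour < 17;
            let got = is_open(weekday, hour);
            if got != expected {
                println!("FAIL: weekday={weekday} hour={hour} got={got} expected={expected}");
            }
            assert_eq!(got, expected, "weekday={weekday} hour={hour}");
        }
    }
    println!("all {} cases checked", 7 * 24);
}
```

The point of the oracle being written separately (rather than calling the function itself) is that a bug in either side shows up as a concrete failing `(weekday, hour)` pair you can inspect directly.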

The only poverty I see coming is from the collapse of quality after these dumb machines are used to replace people who actually know what they are doing.

operatingthetan 10 hours ago | parent [-]

And if the current models really are so great, why do we need a massive hype train every time the version number goes up by 0.1?

SpicyLemonZest 14 hours ago | parent | prev | next [-]

All three of these problems are thoroughly solved by widely available tools.

operatingthetan 14 hours ago | parent [-]

They are? Is your LLM ready to run your organization without further input from you or anyone? Do you realize that "memory" requires eating into your hilariously small context window?

Have you not had a discussion with Opus where it insists it is correct about something it is objectively wrong about for several turns?

14 hours ago | parent | next [-]
[deleted]
SpicyLemonZest 14 hours ago | parent | prev [-]

That seems like an unreasonably high standard. I like to think that I have memory, agency, and self awareness, but I'm not ready to run my organization without further input from anyone.

> Do you realize that "memory" requires eating your hilariously small context window?

I do! LLMs are structured differently than humans, so the component we call "context" corresponds to what humans call "short-term memory"; practical long-term memory for an LLM looks much more like what a human would call "let me write this down". But you can load it into context on demand when it's needed for some problem or another, and commercially available systems do.

operatingthetan 14 hours ago | parent [-]

>memory, agency, and self awareness

The LLM only currently has the illusion of these things. Hence the bubble.

I know that you (or any human being) don't merely have the illusion of these things.

This is not like the car replacing the horse for transportation. The LLM as-is cannot fundamentally replace the person. They require the agency of a human to take turns at all, and even more so to enact change in the world.

Your LLM does not actively engage in the world because it does not experience anything. It only responds to queries. We can do a lot with that, but it's not intelligence. It can't say, "Oh hey SpicyLemonZest, I was thinking and had an idea the other day", because it has nothing between each query.

sumeno 14 hours ago | parent | prev [-]

[flagged]

operatingthetan 14 hours ago | parent [-]

A personal attack is not necessary. You don't seem to understand my perspective at all, please read some of my other comments.