jedberg 8 hours ago

> You realize that stamina is a core bottleneck to work

There has been a lot of research showing that grit is far more strongly correlated with success than intelligence is. This is an interesting way to show something similar.

AIs have endless grit (or at least as endless as your budget). They may outperform us simply because they never get tired and never give up.

Full quote for context:

Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch it struggle with something for a long time just to come out victorious 30 minutes later. You realize that stamina is a core bottleneck to work and that with LLMs in hand it has been dramatically increased.

dust42 14 minutes ago | parent | next

> AIs have endless grit (or at least as endless as your budget).

That is the one thing he doesn't address: the money it costs to run the AI. If you let the agents loose, they easily burn north of 100M tokens per hour. At $25 per 1M tokens, that gets expensive quickly. At some point, when we are all drug^W AI dependent, the VCs will start to cash in on their investments.
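A quick back-of-the-envelope calculation using the figures above (both the token burn rate and the price are the comment's assumptions, not quoted vendor pricing):

```python
# Back-of-the-envelope agent cost, using the figures from the comment above.
tokens_per_hour = 100_000_000   # assumed: 100M tokens/hour for a busy agent swarm
price_per_million = 25.0        # assumed: $25 per 1M tokens

cost_per_hour = tokens_per_hour / 1_000_000 * price_per_million
print(f"${cost_per_hour:,.0f}/hour")  # → $2,500/hour
```

At those rates, a single always-on swarm runs to roughly $60k per day, which is why the budget is the real limit on "endless" grit.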

Loeffelmann an hour ago | parent | prev

If you ever work with LLMs you know that they quite frequently give up.

Sometimes it's a

    // TODO: implement logic
or a

"this feature would require extensive logic and changes to the existing codebase".

Sometimes they just declare their work done, ignoring failing tests and builds.

You can nudge them to keep going, but when they behave like this I often feel they are at the limit of what they can achieve.

jedberg 6 minutes ago | parent | next

> If you ever work with LLMs you know that they quite frequently give up.

If you try to single-shot something, perhaps. But with multiple shots, or an agent swarm where one agent tells another to try again, it'll keep going until it has a working solution.

energy123 35 minutes ago | parent | prev

Using LLMs to clean those up is part of the workflow you're responsible for (... for now). If you're hoping for ideal results from a single inference, forget it.