Principles for Building One-Shot AI Agents (edgebit.io)
91 points by robszumski 5 days ago | 30 comments
TZubiri 2 days ago | parent | next [-]

> What is a “one-shot” AI Agent? A one-shot AI agent enables automated execution of a complex task without a human in the loop.

Not at all what one-shot means in the field. Zero-shot, one-shot and many-shot refer to how many examples are given at inference time to perform a task.

Zero-shot: "convert these files from csv to json"

One-shot: "convert from csv to json, like "id,name,age\n1,john,20" to {"id": "1", "name": "john", "age": "20"}"
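
To make the distinction concrete, here is a minimal sketch in Python; the prompt strings are illustrative only, not from any real system:

    # Zero-shot: the task alone, with no worked example.
    zero_shot = "Convert these files from CSV to JSON."

    # One-shot: the task plus a single inference-time example
    # that anchors the expected output format.
    one_shot = (
        "Convert from CSV to JSON, like\n"
        '"id,name,age\\n1,john,20" to\n'
        '{"id": "1", "name": "john", "age": "20"}'
    )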

devmor 2 days ago | parent | next [-]

Given the misunderstanding of terminology and the account of how they struggled with a long-solved ML problem, I believe this article was likely written by someone without much formal experience in AI.

This is probably a case where some educational training could have saved the engineer(s) involved a lot of frustration.

zavec 2 days ago | parent [-]

As a casual ML non-practitioner, what was the long-solved ML problem they ran up against?

devmor 2 days ago | parent [-]

Both “Principle 1” and “Principle 2” in the article are essentially LLM-focused restatements of basic principles in ML that have been known since before I (and probably you, if you’re of working age) were born.

robszumski 2 days ago | parent | prev [-]

Fair criticism. I was going for the colloquial usage of "you get one shot" but yeah I did read that Google paper the other day referring to these as zero-shot.

tough 2 days ago | parent [-]

fully-autonomous makes more sense in the agentic vocab imho

in the end it's fine if the agent self-corrects across many shots too

sebastiennight 2 days ago | parent | prev | next [-]

> A different type of hard failure is when we detect that we’ll never reach our overall goal. This requires a goal that can be programmatically verified outside of the LLM.

This is the largest issue: using LLMs as a black box means that, for most goals, we can't rely on them to always "converge to a solution", because they might get stuck in a loop trying to figure out whether they're stuck in a loop.

So then we're back to hardcoding a deterministic cap on how many iterations count as being "stuck". I'm curious how the authors solve this.
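
For illustration, such a cap might look like the sketch below; run_agent_step and goal_reached are hypothetical stand-ins for one agent iteration and the programmatic goal check the article calls for:

    MAX_ITERATIONS = 20  # deterministic cap: past this, we declare the agent "stuck"

    def run_until_converged(task):
        state = task
        for _ in range(MAX_ITERATIONS):
            state = run_agent_step(state)  # hypothetical: one LLM/tool-call turn
            if goal_reached(state):        # verified outside the LLM, e.g. tests pass
                return state
        raise RuntimeError(f"failed to converge after {MAX_ITERATIONS} iterations")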

NoTeslaThrow 2 days ago | parent | next [-]

Surely the major issue is thinking you've converged when you haven't. If you're unsure whether you've converged, you can just bail after n iterations and say "failed to converge".

bhl 2 days ago | parent | prev | next [-]

Just give your tool call loop to a stronger model to check if it’s a loop.

This is what I’ve done working with a smaller model: if it fails validation once, I route it to a stronger model just for that tool call.
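
A minimal sketch of that routing pattern, with call_model and validate as hypothetical placeholders:

    def tool_call_with_escalation(prompt):
        result = call_model(prompt, model="small")  # hypothetical: cheap model first
        if validate(result):                        # e.g. a schema or output check
            return result
        # One validation failure: retry just this tool call on a stronger model.
        return call_model(prompt, model="strong")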

behnamoh 2 days ago | parent [-]

> if it fails validation once, I route it to a stronger model just for that tool call.

the problem the GP was referring to is that even the large model might fail to notice it's struggling to solve a task and will keep trying more or less the same approaches until the loop is exhausted.
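
One deterministic way to catch that from outside the model is to fingerprint each attempted action and stop when the agent repeats itself; a rough sketch (all names hypothetical):

    import hashlib

    seen_actions = set()

    def is_new_action(tool_name: str, args: str) -> bool:
        """Return False if this exact tool call was already attempted."""
        fp = hashlib.sha256(f"{tool_name}:{args}".encode()).hexdigest()
        if fp in seen_actions:
            return False  # same approach again: treat the loop as stuck
        seen_actions.add(fp)
        return True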

sebastiennight 2 days ago | parent [-]

Exactly. You'd still be in a non-deterministic loop, just a more expensive one.

namaria 2 days ago | parent [-]

Ashby pointed out the law of requisite variety in 1958: a regulator can only control a system if it has at least as much variety as the system it controls. It should have preempted expert systems and it should preempt the current agents fad. An automatic control system of general application would tend toward infinite complexity.

randysalami 2 days ago | parent | prev | next [-]

I think we need quantum systems to ever break out of that issue.

EDIT: not in the sense of creating an agent that can do anything, but creating an agent that more reliably represents and respects its reality, making it easier for us to reason about it and work with it seriously.

sebastiennight 2 days ago | parent | next [-]

Could you share the logic behind that statement?

Because here I'm getting "YouTuber thumbnail vibes" at the idea of solving non-deterministic programming by selecting the one halting outcome out of a multiverse of possibilities

dullcrisp 2 days ago | parent | next [-]

ELI40 “YouTuber thumbnail vibes?”

sebastiennight 2 days ago | parent | next [-]

Over the last ~5 years, YouTube's algorithm has created an entire cottage industry of click-maximizing content creators who take any interesting scientific discovery or concept, turn it into the most hyped-up claim they can, and make that the title of their video with a "shocked-face" thumbnail.

E.g. imagine an arxiv paper from French engineer sebastiennight:

     Using quantum chips to mitigate halting issues on LLM loops

It would result, the same day, in a YT video like this:

     Thumbnail: (SHOCKED FACE of Youtuber clasping their head next to a Terminator robot being crushed by a flaming Willow chip)
     Title: French Genius SHOCKS the AI industry with Google chip hack!
pmichaud 2 days ago | parent | prev [-]

I think he means just try shit until something works better.

randysalami 2 days ago | parent | prev [-]

That would be some Dr. Strange stuff. I’m just saying a quantum AI agent would be more grounded in deciding when to stop, based on the physical nature of its computation, versus the engineering hacks we need for current classical systems, which become inherently inaccurate representations of reality. I could be wrong.

daxfohl 2 days ago | parent [-]

Quantum computation is no different from classical, except that the bit registers have the ability to superpose and entangle, which allows certain specific algorithms, like integer factorization, to run faster. But conceptually it's still just digital code and an instruction pointer. There's nothing more "physical" about it than classical computing.

daxfohl 2 days ago | parent [-]

And it's definitely not "try every possibility in parallel", as is sometimes portrayed by people who don't know better. While quantum computing makes it possible to superpose multiple possibilities, the way quantum mechanics works, you can only measure one outcome (and you have to decide ahead of time what to measure; you can't ask the quantum system to "give me the branch with the highest value"). That's why only a few specific algorithms benefit from quantum computing at all. Integer factorization (or more generally, anything that uses Fourier transforms) is the biggest, where the speedup is exponential; most others get only a quadratic speedup.

And even if you could simulate and measure multiple things in parallel, that still wouldn't let you solve the halting problem, which would require simulating and measuring infinite things in parallel.

Another way of saying it: everything that can be done on a quantum computer can also be done on a classical computer. It's just that some specific algorithms can be done much faster on a quantum computer, and in the case of integer factorization, a quantum computer could factor numbers larger than would ever be practical on a classical computer. But that's really it. There's nothing magical about them.

randysalami 12 hours ago | parent [-]

“Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy” (Richard Feynman). Quantum systems are physical systems; classical systems, by their very nature, can only emulate them. When it comes to agents like we were discussing before, a classical agent will always be limited by the abstractions needed to get it to understand the real world. A quantum agent would actually “get” the world. The difference is fidelity, and classical systems will only ever be an approximation.

devmor 2 days ago | parent | prev [-]

I don’t believe quantum computers can solve the halting problem, so I don’t think that would actually help.

This issue will likely always require a monitor “outside” of the agent.

randysalami 2 days ago | parent [-]

I think you’re right that they can’t “solve” the halting problem, but they’d be more capable of dealing with it than classical AI agents and more physically grounded. Outside monitoring would be required, but I’d imagine less so than for classical systems, and in physically different ways; and to be fair, humans require monitoring too on whether they should halt or not, haha.

devmor 2 days ago | parent [-]

Can you explain why you think this? I’m curious.

Humans don’t encounter an infinite loop problem because we are not “process-bound” - we may spend too long on a task but ultimately we are constantly evaluating whether or not we should continue (even if our priorities may not be the same as whoever assigned us a task). The monitoring is built-in, by nature of our cross-task processing.

randysalami 12 hours ago | parent [-]

100%. We have built-in faculties for stopping and halting. My point wasn’t that humans physically need a monitor to determine when to stop or else suffer an infinite loop; sleep, eating, and death are perfectly effective at that. I was making a bit of a joke about the efficacy of agents being subjective around halting. A classical or quantum agent might go on forever trying to solve its goal, getting stuck and needing an outside monitor to reset or redefine it. Contrast that with a human agent: given a goal, they might never even try to solve it in the first place! Without outside monitors, systems of human agents may not start when needed or halt when optimal, yet we’ve kept it going for thousands of years!

robszumski 2 days ago | parent | prev | next [-]

Author of the post, love to see this here.

Curious what folks are seeing in terms of consistency of the agents they are building or working with – it's definitely challenging.

lerp-io 2 days ago | parent | prev [-]

You can’t one-shot anything; you have to iterate many, many times.

canadiantim 2 days ago | parent [-]

You one-shot it, then you iterate.

Sounds tautological, but you want to get as far as possible with the one-shot before iterating, because the one-shot is when the results have the most integrity.

tough 2 days ago | parent [-]

the closer your shotting instances are to each other, the tighter the feedback loops