Voloskaya · 2 days ago:

"Agent" implies having agency. Calling the GPT-3 API and asking it to do some classification, or whatever else your use case was, would not be considered agentic. Not only were there no tools back then to let an LLM carry out a plan of its own; even if you had built your own, GPT-3 still sucked way too much to trust it with even basic tasks. I have been working on LLMs since 2017, both training some of the biggest and then building products around them, and I consider that I have no experience with agents.
noosphr · 2 days ago:

All LLMs still suck too much to be trusted with basic tasks without a human in the loop. The only people who don't realize this are the ones whose paycheck depends on them not understanding it.
Voloskaya · 2 days ago:

I don't necessarily disagree; my point is more that today you can realistically let an agent take several steps and use several tools, following a plan of its own, before doing a manual review (e.g. Claude Code followed by a PR review). After all, an intern has agency, even if I'm going to double-check everything they do. GPT-3, while impressive at the time, was too bad to even allow that: it would break after one or two steps, so letting it do anything by itself would have been a waste of time, and the human in the loop would always have had to redo everything. Its planning ability was too poor and hallucinations way too frequent for it to be useful in those scenarios.
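Concretely, the kind of multi-step tool loop described above looks roughly like this (a toy sketch, not any particular product's implementation; `next_action` is a hypothetical stand-in for whatever model call picks the next step):

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Action:
        name: str       # which tool to call, or "finish"
        argument: str   # the tool's input

    # Stub tools; a real agent wires these to a shell, editor, browser, etc.
    TOOLS: dict[str, Callable[[str], str]] = {
        "search": lambda q: f"(stub search results for {q!r})",
        "read_file": lambda p: f"(stub contents of {p})",
    }

    def agent_loop(task: str, next_action: Callable[[str], Action],
                   max_steps: int = 10) -> str:
        """Plan-act-observe loop; the result still goes to manual review."""
        transcript = [f"Task: {task}"]
        for _ in range(max_steps):
            action = next_action("\n".join(transcript))  # model picks the next step
            if action.name == "finish":
                break
            observation = TOOLS[action.name](action.argument)
            transcript.append(f"{action.name}({action.argument}) -> {observation}")
        # The human in the loop comes *after* the run, like a PR review.
        return "\n".join(transcript)

The point of the sketch is where the review sits: the model is trusted to chain several tool calls unattended, and the human checks the finished transcript or diff rather than every step.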
nunodonato · 2 days ago:

In defense of the previous commenter, I also started with GPT-3. I had tool calling and reasoning before ChatGPT even came out. So yeah, there was a lot that could be done before the models started integrating it.
Voloskaya · a day ago:

> I had tool calling and reasoning before ChatGPT even came out.

Do you know of any write-up (by you or someone else) on this topic? Admittedly I never spent much time on this since I was working on pre-training, but I did try to do a few smart things with GPT-3 and it pretty much failed at everything, in large part because it wasn't even instruction-tuned, so it was very much still an autocomplete model. So I would be curious to learn more about how people got it to succeed at agentic behaviors.
nunodonato · a day ago:

I used to vlog my experiments. Not really a scientific write-up on the topic, mostly just ramblings while experimenting with cool stuff.
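For background, the usual way to get tool use out of a raw completion model in that era was ReAct-style prompting: few-shot examples teach the autocomplete model a Thought/Action/Observation format, generation is cut at a stop sequence before the model can invent an observation, the tool runs for real, and its output is pasted back into the prompt. This is not necessarily what the commenter did, just the best-known technique. A minimal sketch, assuming a generic complete(prompt, stop) function rather than any specific API:

    import re

    # Few-shot prompt that teaches the Thought/Action/Observation format.
    FEW_SHOT = """\
    Question: What is 17 * 24?
    Thought: I should use the calculator.
    Action: calc[17 * 24]
    Observation: 408
    Thought: I now know the answer.
    Answer: 408

    """

    TOOLS = {
        # Toy calculator tool; eval is restricted to bare arithmetic here.
        "calc": lambda expr: str(eval(expr, {"__builtins__": {}})),
    }

    def react(question: str, complete, max_turns: int = 5) -> str:
        """ReAct loop over a raw completion model.

        `complete(prompt, stop)` is an assumed stand-in for any completion
        API that returns text and honors a stop sequence.
        """
        prompt = FEW_SHOT + f"Question: {question}\n"
        for _ in range(max_turns):
            # Stop before the model hallucinates its own Observation line.
            text = complete(prompt, stop=["Observation:"])
            prompt += text
            match = re.search(r"Action: (\w+)\[(.*?)\]", text)
            if match is None:
                # No tool call means the model wrote a final Answer.
                return text.split("Answer:")[-1].strip()
            name, arg = match.groups()
            prompt += f"Observation: {TOOLS[name](arg)}\n"
        return "(no answer within the step budget)"

Because the base model is pure autocomplete, everything hinges on the few-shot examples and the stop sequence; there is no instruction tuning or built-in function calling to lean on, which is also why such loops tended to derail after a step or two.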