| ▲ | jononor 8 days ago |
| "LLM" as well, because coding agents are already more than just an LLM. There is very useful context management around it, and tool calling, and ability to run tests/programs, etc. Though they are LLM-based systems, they are not LLMs. |
|
| ▲ | smnrchrds 8 days ago | parent | next [-] |
| Indeed. If the LLM calls a chess engine tool behind the scenes, it would be able to play excellent chess as well. |
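A rough sketch of what such a tool could look like, assuming python-chess and a local Stockfish binary are installed; the function name and wrapper are illustrative, not any particular agent's API.

```python
# Sketch of a "best_move" tool an agent could expose to the model,
# backed by a real chess engine (Stockfish via python-chess).
# Assumes `pip install chess` and a stockfish binary on PATH.
import chess
import chess.engine


def best_move(fen: str, think_time: float = 0.5) -> str:
    """Return the engine's move for a position given in FEN notation."""
    board = chess.Board(fen)
    with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
        result = engine.play(board, chess.engine.Limit(time=think_time))
        return result.move.uci()


if __name__ == "__main__":
    # Starting position: the engine, not the LLM, does the chess playing.
    print(best_move(chess.STARTING_FEN))
```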
| |
| ▲ | cavisne 8 days ago | parent [-] |
The author would still be wrong in the tool-calling scenario. There are already perfect (or at least superhuman) chess engines. There is no perfect "coding engine". LLMs + tools being able to reliably work on large codebases would be a new thing.
| ▲ | yosefk 8 days ago | parent [-] |
Correct - as long as the tools the LLM uses are non-ML-based algorithms that exist today, and it operates on a large code base with no programmers in the loop, I would be wrong. If the LLM uses a chess engine, it does nothing on top of the engine; similarly, if an LLM uses another system and adds no value on top, I would not be wrong. If the LLM uses something based on a novel ML approach, I would not be wrong either - that would be my "ML breakthrough" scenario. If the LLM uses classical algorithms or an ML algorithm known today, adds value on top of them, and operates autonomously on a large code base - no programmer needed on the team - then I am wrong.
|
|
|
| ▲ | interstice 8 days ago | parent | prev [-] |
This rapidly gets philosophical. If I use tools, am I not handling the codebase? Are we classing the LLM as a tool or a user in this scenario?