xkriva11 2 hours ago
Smalltalk offers several excellent features for LLM agents:

- Very small methods that function as standalone compilation units, enabling extremely fast compilation.
- Built-in, fast, and effective code browsing capabilities (e.g., listing senders, implementors, and instance variable users...). This makes it easy for the agent to extract only the required context from the system (see the sketch after this list).
- Powerful runtime reflectivity and easily accessible debugging capabilities.
- A simple grammar with a more natural, language-like feel compared to Lisp.
- Natural sandboxing.
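For concreteness, a minimal sketch of the kind of browsing queries the list refers to, assuming a stock Pharo image and its SystemNavigation reflection API (the particular selectors queried here, such as #printOn:, are purely illustrative):

    "All methods that send the selector #printOn: (senders)."
    SystemNavigation default allSendersOf: #printOn:.

    "All methods that implement #printOn: (implementors)."
    SystemNavigation default allImplementorsOf: #printOn:.

    "Source of a single method; each method is its own small compilation unit."
    (OrderedCollection >> #add:) sourceCode.

    "Every class below Collection in the hierarchy."
    Collection allSubclasses.

Each expression answers ordinary Smalltalk objects (collections of methods, source strings), so an agent can pull in exactly the slice of the system it needs rather than whole files.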
cess11 an hour ago | parent
If someone wants to try it out, both Glamorous Toolkit and plain Pharo have tooling that allows integration of both local and remote LLM services. Some links to start off with:

https://github.com/feenkcom/gt4llm
https://omarabedelkader.github.io/ChatPharo/

Edit: I suppose the next step would be to teach an LLM about "moldable exceptions" (https://arxiv.org/pdf/2409.00465, PDF) and have it create its own debuggers.