mark_l_watson 13 hours ago
If I have time I want to try this today because it matches my LLM-based work style, especially when I am using local models: I have command line tools that help me generate large one-shot prompts that I just paste into an Ollama REPL, then I check back in a while. It looks like Axe works the same way: fire off a request and later look at the results.
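That fire-and-forget workflow can be sketched in a few lines of shell. This is a minimal sketch, not anyone's actual tooling: the file names and the `llama3` model name are assumptions, and a stand-in is used when `ollama` is not installed so the sketch runs anywhere.

```shell
# Assemble a large one-shot prompt from local files (names hypothetical),
# send it to a local model in one go, and collect the answer later.
printf 'draft notes to summarize\n' > notes.txt
{ printf 'Summarize the following:\n'; cat notes.txt; } > prompt.txt

if command -v ollama >/dev/null 2>&1; then
  ollama run llama3 < prompt.txt > answer.md &   # fire off the request
else
  cat prompt.txt > answer.md &                   # offline stand-in for the model
fi
wait   # in practice: go do something else, then read answer.md
```

The point of the pattern is that nothing is interactive: the prompt is fully built up front, so the model call can run unattended in the background.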
jrswab 12 hours ago | parent
Exactly! I also made it so you can chain them together, so each agent only gets what it needs to complete its one specific job.
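The chaining idea can be sketched as a plain pipeline, assuming each stage passes only its own output forward. This is a generic illustration, not Axe's actual mechanism; the prompts, file names, and `llama3` model are all hypothetical, and a stand-in replaces `ollama` when it is absent.

```shell
# Each stage is one narrow job; the next stage sees only the prior
# stage's output file, never the full history (all names hypothetical).
model() {
  if command -v ollama >/dev/null 2>&1; then
    ollama run llama3          # real local model call
  else
    cat                        # offline stand-in for the model
  fi
}

printf 'raw bug report text\n' > report.txt
{ printf 'Extract the reproduction steps only:\n'; cat report.txt; } | model > steps.txt
{ printf 'Write a one-line summary of these steps:\n'; cat steps.txt; } | model > summary.txt
```

Scoping each agent's input this way keeps prompts small and makes every stage independently retryable.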