Orchestrion 12 hours ago
The Unix-style framing resonates a lot. One thing I’ve noticed when experimenting with agent pipelines is that the “single-purpose agent” model tends to make both cost control and reasoning easier. Each agent only gets the context it actually needs, which keeps prompts small and behavior easier to predict.

Where it gets interesting is when the pipeline starts producing artifacts instead of just text — reports, logs, generated files, etc. At that point the workflow starts looking less like a chat session and more like a series of composable steps producing intermediate outputs. That’s where the Unix analogy feels particularly strong: small tools, small contexts, and explicit data flowing between steps.

Curious if you’ve experimented with workflows where agents produce artifacts (files, reports, etc.) rather than just returning text.
jrswab 12 hours ago | parent
> Curious if you’ve experimented with workflows where agents produce artifacts (files, reports, etc.) rather than just returning text.

Yes! I run a ghost blog (a blog that does not use my name) and have axe produce artifacts. The flow is: I send the first agent a text file of my brain dump (normally spoken), which then searches my note system for related notes and saves the result to a file. It passes everything to agent 2, which turns that dump into a blog draft and saves it to a file. Agent 3 then takes the draft, cleans it up to how I like it, and saves it. From that point I read it, make edits myself, and publish.
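That three-stage flow can be sketched as a pipeline where each step writes its artifact to disk before handing off. This is a minimal stand-in, not the actual setup described above — the stub functions are placeholders for real agent calls, and the file names and note-search logic are assumptions for illustration:

```python
from pathlib import Path

# Each "agent" is a single-purpose step: read input, do one job,
# write an artifact for the next step. Real agents would be LLM
# calls; these stubs just show the data flow.

def gather_notes(brain_dump: str) -> str:
    # Agent 1: search the note system for related notes (stubbed).
    related = ["note: earlier post on pipelines"]  # stand-in for a real search
    return brain_dump + "\n\nRelated notes:\n" + "\n".join(related)

def draft_post(dump_with_notes: str) -> str:
    # Agent 2: turn the dump plus notes into a rough blog draft (stubbed).
    return "# Draft\n\n" + dump_with_notes

def clean_draft(draft: str) -> str:
    # Agent 3: clean the draft up to taste (stubbed).
    return draft.replace("Draft", "Post")

def run_pipeline(brain_dump: str, outdir: Path) -> Path:
    outdir.mkdir(exist_ok=True)
    stages = [gather_notes, draft_post, clean_draft]
    text = brain_dump
    for i, stage in enumerate(stages, 1):
        text = stage(text)
        # Saving every intermediate artifact means any stage can be
        # inspected or re-run alone -- the Unix-pipe property.
        (outdir / f"stage{i}.md").write_text(text)
    return outdir / f"stage{len(stages)}.md"

final = run_pipeline("idea: agents as unix tools", Path("blog_artifacts"))
print(final)
```

Because each stage leaves a file behind, the human step at the end (reading and editing before publishing) just picks up the last artifact instead of scrolling a chat log.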