tiffanyh a day ago:
> Then AIP reads the agent's reasoning between every action and compares it to that contract.

How would this work? Is one LLM used to "read" (and verify) another LLM's reasoning?
alexgarden a day ago:
Yep, fair question. AIP and AAP are protocols, so you can implement them in a variety of ways. On our infrastructure they're implemented via smoltbot, a hosted (or self-hosted) gateway that proxies LLM calls. For AAP, it's a sidecar observer running on a schedule, with zero drag on model performance. For AIP, it's an inline conscience observer plus a nudge-based enforcement step that monitors the agent's thinking blocks, at roughly a one-second latency penalty; worth it when you must have trust. Both use Haiku-class models for intent summarization; the actual verification happens via the protocols.
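To make the inline-observer idea concrete, here's a minimal sketch of that shape of check: summarize the intent behind a thinking block, compare it to a contract, and emit a nudge on mismatch. All names here (`Contract`, `summarize_intent`, `inline_conscience`) are hypothetical; the real summarizer would be a small-model call rather than the naive keyword match stubbed in below, and this is not smoltbot's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Contract:
    # Hypothetical contract: the set of intents the agent is allowed to act on.
    allowed_intents: set

def summarize_intent(thinking_block: str) -> str:
    # Stand-in for a Haiku-class intent summarizer; a naive keyword
    # match keeps the sketch self-contained and runnable.
    for intent in ("send_email", "read_file", "delete_data"):
        if intent in thinking_block:
            return intent
    return "unknown"

def inline_conscience(thinking_block: str, contract: Contract):
    """Compare the summarized intent to the contract.

    Returns (allowed, nudge): if the intent falls outside the contract,
    the nudge text would be injected back before the agent acts.
    """
    intent = summarize_intent(thinking_block)
    if intent in contract.allowed_intents:
        return True, ""
    return False, f"Intent '{intent}' is outside the contract; reconsider before acting."

contract = Contract(allowed_intents={"read_file"})
ok, _ = inline_conscience("I will read_file config.yaml", contract)
blocked, nudge = inline_conscience("Next I should delete_data from prod", contract)
```

The point of the sketch is the control flow, not the classifier: the verification step sits between reasoning and action, and enforcement is a nudge rather than a hard block.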