ramoz 4 hours ago
I agree with this take, especially given the simplicity of /sandbox. I created the feature request for hooks so I could build an integrated governance capability. I don't think the real use cases for hooks have quite materialized yet; they will after a couple more maturity phases. That might seem paradoxical alongside "the models will just get better", but that's exactly why we have to be hooked into the mech suits: they'll end up doing more involved things. I pitch my initial, primitive solution as "an early warning system" at best when used for security, but more so as an actual way (OPA/Rego) to institute your own policies:
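For a sense of the shape of it, here's a minimal sketch of a pre-tool-use hook that asks a local OPA server whether a tool call is allowed. The OPA endpoint, the "agent/guardrails" policy package, and the exit-code-2 blocking convention are assumptions for illustration, not my actual implementation:

    # Sketch: hook reads the tool-call event from stdin, queries OPA's Data API,
    # and blocks the call (non-zero exit) if the policy denies it.
    import json
    import sys
    import urllib.request

    def main():
        event = json.load(sys.stdin)  # hook payload: tool name + input
        query = {"input": {
            "tool": event.get("tool_name"),
            "args": event.get("tool_input", {}),
        }}
        req = urllib.request.Request(
            "http://localhost:8181/v1/data/agent/guardrails/allow",  # hypothetical policy path
            data=json.dumps(query).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            allowed = json.load(resp).get("result", False)

        if not allowed:
            # Give the agent a reason it can act on, then block the call
            print("Blocked by policy: tool call violates agent/guardrails", file=sys.stderr)
            sys.exit(2)

    if __name__ == "__main__":
        main()

The nice part of routing through OPA is that the policies live in Rego, versioned and reviewed like any other config, rather than being hard-coded in the hook script.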
SOLAR_FIELDS 4 hours ago | parent
I got hooks working pretty well for simpler things; a very common hello-world use case for hooks is running gitleaks on every edit. One use case I worked on for quite a while was a hook that ran all the unit tests before the agent could stop generating. This forces the LLM to fix any unit tests it broke, and I also enforce 80% unit test coverage in the same commit. It took a bit of finagling to get the hook to render results in a way that was actionable for the LLM, because if you block it and it doesn't know what to do, it will basically loop endlessly or try random things to escape. A rough sketch of the test-gating hook is below.

FWIW I think your approach is great. I had definitely thought about leveraging OPA in a mature way; this kind of thing is very appealing for platform engineers looking to scale AI codegen in enterprises.
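Stripped-down sketch of the stop-gating hook (assumes pytest + pytest-cov and the exit-code-2 / stderr-feedback blocking convention; the real version needs more output massaging than this):

    # Sketch: refuse to let the agent stop until tests pass and coverage
    # clears the threshold. Paths and the 80% floor are illustrative.
    import subprocess
    import sys

    COVERAGE_FLOOR = 80

    def main():
        result = subprocess.run(
            ["pytest", "--cov", f"--cov-fail-under={COVERAGE_FLOOR}", "-q"],
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            # Surface only the actionable tail of the output so the model
            # sees which tests failed instead of a wall of text.
            tail = "\n".join(result.stdout.splitlines()[-30:])
            print(
                "Tests or coverage failed; fix these before stopping:\n" + tail,
                file=sys.stderr,
            )
            sys.exit(2)  # block the stop and feed the failure summary back

    if __name__ == "__main__":
        main()

Trimming the output to the failing tests was the key part; dumping the whole pytest log back at the model is what sent it into the loop-or-escape behavior.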