SOLAR_FIELDS 4 hours ago

I got hooks working pretty well for simpler things; a very common hello-world use case for hooks is running gitleaks on every edit. One use case I worked on for quite a while was a hook that ran all unit tests before the agent could stop generating. This forces the LLM to fix any unit tests it broke, and I also enforce 80% unit test coverage in the same commit. It took a bit of finagling to get the hook to render results in a way that was actionable for the LLM: if you block it but it doesn't know what to do, it will basically loop endlessly or try random things to escape.
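A minimal Python sketch of that kind of Stop hook. It assumes Claude Code's documented hook contract (JSON payload on stdin, a JSON response on stdout where `{"decision": "block", "reason": ...}` keeps the agent working and feeds `reason` back to the model); the pytest/pytest-cov invocation, the 80% threshold, and the helper names are illustrative, not the commenter's actual script:

```python
"""Stop-hook sketch: don't let the agent stop until tests and coverage pass."""
import json
import subprocess
import sys

COVERAGE_FLOOR = 80  # percent, enforced in the same commit (assumption from the comment)


def block(reason: str) -> dict:
    # The reason must be actionable: a blocked agent with no clear next
    # step will loop endlessly or try random things to escape.
    return {"decision": "block", "reason": reason}


def allow() -> dict:
    # An empty response lets the agent stop normally.
    return {}


def run_checks() -> dict:
    # Run the full test suite with a coverage floor (pytest-cov flags).
    tests = subprocess.run(
        ["pytest", "--tb=short", "--cov", f"--cov-fail-under={COVERAGE_FLOOR}"],
        capture_output=True,
        text=True,
    )
    if tests.returncode != 0:
        # Surface the failure output so the model knows exactly what to fix.
        return block(
            f"Unit tests or coverage failed (floor: {COVERAGE_FLOOR}%). "
            "Fix the failures below before stopping:\n" + tests.stdout[-3000:]
        )
    return allow()


# When installed as a Stop hook, the wiring is roughly:
#   payload = json.load(sys.stdin)   # hook input (unused in this sketch)
#   print(json.dumps(run_checks()))
```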

FWIW I think your approach is great. I had definitely thought about leveraging OPA in a mature way; I think this kind of thing is very appealing to platform engineers looking to scale AI codegen in enterprises.

ramoz 3 hours ago | parent

Part of my initial pitch was to automate linting. Interesting insight on the stop loop; I've been wanting to explore that more. I think there is also a lot to be gained with llm-as-a-judge hooks (they do enable this today via `prompt` hooks).

I've had a lot of fun with random/creative hook use cases: https://github.com/backnotprop/plannotator

I don't think the team meant for hooks to work with plan mode this way (it's not fully complete with the approve/allow payload), but it enabled me to build an interactive UX I really wanted.