Show HN: Continue – Source-controlled AI checks, enforceable in CI (docs.continue.dev)
24 points by sestinj 2 hours ago | 5 comments
We now write most of our code with agents. For a while, PRs piled up, causing review fatigue, and we had this sinking feeling that standards were slipping. Consistency is tough at this volume. I'm sharing the solution we found, which has become our main product.

Continue (https://docs.continue.dev) runs AI checks on every PR. Each check is a source-controlled markdown file in `.continue/checks/` that shows up as a GitHub status check. Checks run as full agents: not just reading the diff, but able to read and write files, run bash commands, and use a browser. If a check finds something, it fails with a one-click diff to accept; otherwise it passes silently. Here's one of ours:
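The check itself isn't reproduced in this thread, so here is a minimal sketch of what a check file in `.continue/checks/` might look like, assuming checks are written as plain-language rules in markdown. The filename and rule text are illustrative assumptions based on the data-integrity incident described next, not the team's actual file:

```markdown
<!-- .continue/checks/analytics-integrity.md
     Illustrative example: the real check's schema and wording are
     assumptions, not Continue's published format. -->
# Analytics events stay consistent

When a PR touches session or event tracking code:

- Trace any renamed, removed, or re-fired event to the queries and
  dashboards that consume it, and flag consumers that would break.
- Fail if the change alters how sessions are counted without an
  accompanying note or data migration.
- Pass silently when no tracking code is affected.
```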
This check passed without noise for weeks, but then caught a PR that would have silently deflated our session counts. We added it in the first place because we'd been burned by bad data before, only noticing once a dashboard looked off.

To get started, paste this into Claude Code or your coding agent of choice:
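The exact prompt didn't survive extraction here; the following is an illustrative stand-in that covers the same steps, assuming a plain-language instruction is enough (the wording is mine, not the official snippet):

```text
Read this repository and set up Continue checks for it.

1. Explore the codebase and use the `gh` CLI (e.g. `gh pr list`,
   `gh api`) to read past review comments, looking for feedback
   that keeps recurring.
2. Turn each recurring theme into a markdown check file under
   .continue/checks/, one rule per file.
3. When you're done, explain how I can run these checks locally
   and wire them into CI as GitHub status checks.
```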
It will:

- Explore the codebase and use the `gh` CLI to read past review comments
- Write checks to `.continue/checks/`
- Optionally, show you how to run them locally or in CI

Would love your feedback!
esafak an hour ago
This looks like a more configurable version of the code review tools out there, for running arbitrary AI-powered tasks. Do you support exporting metrics to something standard like CSV? https://docs.continue.dev/mission-control/metrics

A brief demo would be nice too.
| |||||||||||||||||
bachittle an hour ago
Is this the same continue that was for running local AI coding agents? Interesting rebrand.
| |||||||||||||||||