Same approach here: the PR is the human review point, and tests + CodeRabbit catch most issues -> https://github.com/cloud-atlas-ai/miranda

The gap I wanted to fill: when Claude is genuinely uncertain ("JWT or sessions?" "Breaking change or not?"), it either guesses wrong or punts to the PR description, where you can't easily respond. So I built a Telegram bot that intercepts Claude's AskUserQuestion calls via a hook, sends me an inline keyboard, and injects my answer back into the session. Claude keeps working and the PR still happens, but I can unblock it from my phone in 5 seconds instead of rejecting a PR that was built on a wrong guess.

It works in tandem with a bunch of other LLM enhancers I've built; they're linked in the README of that repo.
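
For the curious, the hook is roughly this shape (a minimal sketch in Python, not miranda's actual code: it assumes a hook contract where the intercepted tool call arrives as JSON on stdin and the chosen answer goes back out as JSON on stdout, and it long-polls getUpdates where a real deployment would use a webhook; the tool_input field names and BOT_TOKEN/CHAT_ID env vars are assumptions too):

    #!/usr/bin/env python3
    # Sketch: forward an AskUserQuestion call to Telegram as an inline
    # keyboard, then hand the tapped answer back to the calling session.
    import json, os, sys, urllib.request

    API = f"https://api.telegram.org/bot{os.environ['BOT_TOKEN']}"

    def tg(method, payload):
        # Telegram Bot API accepts POSTed JSON for every method.
        req = urllib.request.Request(
            f"{API}/{method}",
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        return json.load(urllib.request.urlopen(req))

    call = json.load(sys.stdin)                    # intercepted tool call
    question = call["tool_input"]["question"]      # field names assumed
    options = call["tool_input"].get("options", ["yes", "no"])

    # One button per option; callback_data carries the choice back.
    keyboard = [[{"text": o, "callback_data": o}] for o in options]
    tg("sendMessage", {
        "chat_id": os.environ["CHAT_ID"],
        "text": question,
        "reply_markup": {"inline_keyboard": keyboard},
    })

    # Long-poll until a button is tapped, then emit the answer and exit.
    offset = 0
    while True:
        updates = tg("getUpdates", {"offset": offset, "timeout": 30})
        for u in updates["result"]:
            offset = u["update_id"] + 1
            cb = u.get("callback_query")
            if cb:
                tg("answerCallbackQuery", {"callback_query_id": cb["id"]})
                print(json.dumps({"answer": cb["data"]}))
                sys.exit(0)

The hook blocks on the long poll, which is exactly the point: the agent session stays alive waiting on stdout while you answer from your phone.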