| ▲ | midnight_eclair 5 days ago |
even if i would generally agree with the principles, no amount of markdown prompting is going to increase my confidence in the agent's output, and so i keep asking this question:

> what do you use for normative language to describe component boundaries, functions, and cross-component interactions? something i can feed into a deterministic system that will run a generative suite of tests (quickcheck/hypothesis/clojure.spec) that will either give me the confidence or give the agent the feedback.
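For readers unfamiliar with the idea the commenter is asking about: a machine-checkable contract for a component boundary, plus a loop that generates random inputs and checks the contract against each one. This is a minimal stdlib-only sketch standing in for Hypothesis/QuickCheck; the `dedupe` component, its contract, and the `check_contract` helper are all invented for illustration, not from any of those libraries.

```python
import random

# A hypothetical component: drop repeats from a list, keeping first-seen order.
def dedupe(xs):
    seen = set()
    out = []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# Deterministic input generator: seeded random lists of small ints.
def gen_input(rng):
    return [rng.randint(-5, 5) for _ in range(rng.randint(0, 20))]

# Run the component against many generated inputs; on failure, the
# offending input is reported so it can be fed back to the agent.
def check_contract(fn, prop, n=200, seed=0):
    rng = random.Random(seed)
    for _ in range(n):
        xs = gen_input(rng)
        ys = fn(xs)
        assert prop(xs, ys), f"contract violated for input {xs}: got {ys}"
    return True

# The "normative language" here is just predicates over (input, output):
# the output has no duplicates, and no element is invented or lost.
no_dupes = lambda xs, ys: len(ys) == len(set(ys))
same_elems = lambda xs, ys: set(xs) == set(ys)

check_contract(dedupe, lambda xs, ys: no_dupes(xs, ys) and same_elems(xs, ys))
```

With Hypothesis itself, the generator and loop collapse into a `@given(st.lists(st.integers()))` decorator on a test function; the contract predicates stay the same.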
| ▲ | cyrusradfar 5 days ago | parent |
OP / author here. I started closer to where you are, but then realized: I've led a few eng teams and was never satisfied with code quality. What I COULD be satisfied by was moving our metrics in the right direction: test coverage, use cases covered by E2E/integration tests, P99/backend efficiency, infrastructure cost, and obviously user growth along with positive feedback from users. That said, I don't "vibe" because it produces great code I love reading; I do it because I can monitor and move the same metrics I would if I were managing a team. I also use code tours a bit, and one of the first tools I needed and built (intraview.ai) was to support this need to get deep into the code the agents were claiming was ready to ship.