synthc | 2 hours ago
Very hit or miss. Stack: Go, Python. Team size: 8. Experience: mixed.

I'm using a code review agent which sometimes catches a critical bug humans miss, so that is very useful. Using it to get to know a code base is also very useful: questions like 'which functions touch this table?' or 'describe the flow of this API endpoint' are usually answered correctly. This is a huge time saver when I need to work on a code base I'm less familiar with.

For coding, agents are fine for simple, straightforward tasks, but I find the tools very myopic: they prefer very local changes (adding new helper functions all over the place, even when such helpers already exist). For harder problems I find agents get stuck in loops, and coming up with the right prompts and guardrails can be slower than just writing the code.

I also hate how slow and unpredictable the agents can be. At times it feels like gambling. Will the agent actually fix my tests, or fuck up the code base? Who knows, let's check in five minutes.

IMO the worst thing is that juniors can now produce large change sets that seem good at a glance but turn out to be fundamentally flawed, and it takes tons of time to review them.