I think my message does a disservice to what actually happened, because a lot of it happens in my head.
1. I received the ticket. As soon as I read it, I had a hunch it was related to a query ignoring a field that every query should filter on (thinking)
2. I give this hunch to the AI, which searches the codebase in the areas where I suggested the problem could be, and that's when it finds the issue and provides a fix
3. I think the problem could be more widespread: there is a method that removes the query filter, and it could have been used in multiple places, so I ask the AI to find other usages of it (thinking; this is my definition of "steering" in this context). A sketch of this kind of bug follows the list
4. The AI reports 3 more occurrences and suggests that 2 have the same bug, but one is fine
5. I go in, review the code, understand it, and agree: it doesn't have the bug (thinking)
6. The AI provides the fix in all the right spots, but I say "wait, something is fishy here, there is a commit that explicitly says it was added to remove the filter, why is that?" (thinking), so I ask the AI to figure out why the commit says that
7. The AI proceeds to run a bunch of git-history commands, finds one commit, and then correlates it to find another. That other commit introduced the change at the same time, to defend against a bug in a different place
8. Now I understand what's going on, I'm happy with the fix, and the history suggests I'm not breaking anything. I ask the AI to write a commit message with detailed information about the bug and the fix, based on the conversation
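
For anyone who hasn't hit this class of bug, here is a minimal, hypothetical sketch of what I mean by "a method that removes a filter every query is supposed to carry". None of these names (`Query`, `scoped_query`, `unscoped`, `tenant_id`) come from the real codebase; they're purely illustrative.

```python
# Hypothetical sketch of the bug class: a filter that every query should apply,
# plus a helper that strips it for one legitimate special case.
from dataclasses import dataclass, field

@dataclass
class Query:
    table: str
    filters: list[str] = field(default_factory=list)

    def sql(self) -> str:
        where = " AND ".join(self.filters) or "1=1"
        return f"SELECT * FROM {self.table} WHERE {where}"

def scoped_query(table: str, tenant_id: int) -> Query:
    # The filter every query is supposed to carry.
    return Query(table, [f"tenant_id = {tenant_id}"])

def unscoped(query: Query) -> Query:
    # The helper that drops the filter. Fine for the one place it was written
    # for; a bug anywhere else, because results leak across the filter boundary.
    return Query(query.table, [])

# Buggy call site: unscoped() copied without the reason it originally existed.
print(unscoped(scoped_query("orders", tenant_id=42)).sql())
# -> SELECT * FROM orders WHERE 1=1   (the required filter is gone)
```

The real bug had the same shape: the filter-removing method was deliberate in one spot (an old commit added it to defend against a different bug) and wrong in the other places it had been reused.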
There is a lot of thinking involved. What's reduced is the search tooling: I can be way more fuzzy. Rather than `rg 'whatever'`, I now say "find this and similar patterns".