This is depressing.
We need different language for describing things AI did for us vs things we figured out ourselves. When a human presents work under their own name, there is an unspoken but widely relied-upon assumption that the presenter has exercised judgment over the space of possible choices and can explain why these ones were taken.
In other words, we naturally assume they engaged with the problem space deeply enough to justify the decisions made.
I think AI-produced code and investigation need a disclaimer, and I say that as someone who uses vibe coding a lot to produce tooling used in our development process.
If you didn’t do it or write it yourself, you don’t understand it as well as if you had. If you didn’t look at the output in great detail and understand every choice made, you really shouldn’t be putting your name on it — or staking your reputation on it — without a pretty clear disclaimer.
And if you present an investigation done by AI as something you did yourself, you're not really providing human insight. (Almost) anyone can drive an AI, and there's not much value in it for your audience if you don't disclose that's what you did.
If you attach your name to work, you are asserting that you can meaningfully answer “why this and not something else?” across the decisions that matter. Tools that produce answers faster than humans think require new language, because our old words still imply thought occurred.