If humans can find bugs, why can't humans write flawless code?
Whatever the answer to that conundrum, LLMs are trained on these patterns and reproduce them pretty faithfully.