This continues to be the most tiring response to any criticism of LLM output. At this point it's pretty much guaranteed to show up. I suppose with similar enough input tokens, we're guaranteed the same output...