If you're on reddit, at least, Cunningham's Law[1] gives you a decent chance of realizing the matter isn't cut and dried. Here I'm really invoking a reduced-strength version of Cunningham's Law, which I would phrase as "The best way to get the right answer on the Internet is not to ask a question; it's to post *what someone somewhere thinks is* the wrong answer" (my strength-reducing addition in italics). If you stumble into a conversation where people are arguing, it's hard to avoid applying some critical thought to parse out who is correct.
An LLM-only AI, by contrast, just hands you a fully-formed opinion with always-plausible-sounding reasons. There's no cognitive prompt to make you consider whether it's wrong. I'm deliberately cultivating an instinctive distrust of LLM-only AI, and I'd suggest the same to others: even if that distrust is too harsh on a percentage basis, you need it as a cognitive hack to remember to check everything that comes out of them... not because they are never right, but precisely because they are often right, yet nowhere near 100% right! If they were always wrong we wouldn't have this problem, and if they were reliably 99.9999% right we wouldn't have this problem either. But right now they sit in the maximum danger zone of correctness: right often enough that we cognitively relax after a while, but nowhere near right enough for that to be OK on any level.
[1]: https://en.wikipedia.org/wiki/Ward_Cunningham#Law