I've found that when cross-checked against my own expertise, LLMs have dubious "knowledge" at best. Trusting the output on anything you don't already know would just be Gell-Mann amnesia.