iainctduncan 5 hours ago
I think the worse situation is the bad AI summaries from search on health issues. We had a potential pet poisoning, so I was naturally searching for resources. Google had a summary with a "dose of concern" that was an order of magnitude off. Someone could have read that and thought all was fine and had a dead cat. (BTW, the cat is fine; it turned out to be a false alarm. But public service announcement: aspirin is toxic to cats, and Pepto-Bismol contains aspirin. Don't leave demented plastic-chewing cats around those bottles, in case you too have a lovely but demented cat.)
cloud-oak 3 hours ago | parent | next
What's really worrying is seeing medical professionals starting to rely on these tools. My wife had a pretty bad cold during pregnancy, and our GP prescribed her a cough syrup with high alcohol content, because that was what ChatGPT told him to prescribe. We only noticed once she took the first dose and spat it right back out...
ep103 4 hours ago | parent | prev
I have literally never seen a correct Google summary. Maybe y'all are searching for different things than I am, but at this point I've adopted the viewpoint that if I don't know why the AI summary is wrong, then I also don't know enough about the topic to judge whether the summary is useful.