malfist | 3 days ago
I was feeding Gemini faux physician's notes, trying to get it to produce diagnoses, and every time I fed it new information it told me how great I was at taking comprehensive medical notes. So irritating. It also had a tendency to declare everything a medical crisis and insist the patient needed to see additional specialists ASAP. At one point it told me that a faux patient with a normal A1C, normal fasting glucose, and no diabetes needed to see an endocrinologist, because their normal lab values indicated something was seriously wrong with their pancreas or liver given how physically active the patient was. It said they were "wearing the athlete mask" and that their physical fitness was hiding truly terrible labs. I pushed back and told it it was overreacting, and it told me I was completely correct and very insightful, and that everything was normal with the patient and they were extremely healthy.
notahacker | 3 days ago
And then those sorts of responses get parlayed into "chatbots give better feedback than medical doctors" headlines, based on studies that rate them highly on "empathy" and don't worry about minor details like accuracy...
cvwright | 3 days ago
This illustrates the dangers of training on Reddit.
cubefox | 3 days ago
I recently had Gemini disagree with me on a point about philosophy of language and logic, but it phrased the disagreement very politely, for instance by first listing all the related points on which it agreed. So it seems that LLM "sycophancy" isn't necessarily about dishonest agreement, but possibly just about being very polite, which doesn't have to involve dishonesty. So LLM companies should, in principle, be able to make their models both subjectively "agreeable" and honest.