AuthAuth 5 hours ago
There are certain domains where we can accept a high degree of mistakes, and health is not one of them. People are reacting the way they are because it's obvious that LLMs are currently not reliable enough to be trusted to distribute health advice. To me it doesn't matter that ChatGPT health sometimes gives good advice, or that some people _feel_ like it helped them. I'm not sure I even trust people when they say that, given how much the LLM just affirms whatever they tell it.