ryandrake · 2 hours ago
I get what you're saying, and I agree it might be fun to play around with ChatGPT, Wikipedia, YouTube, and WebMD to try to guess what that green bump on your arm is, but it's not research; it needs to be treated as entertainment. When it comes to taking actual real-world action, I would take 5-8 minutes with a real doctor over 5-8 months of browsing the Internet. The doctor has gone to med school, passed the boards, and done his residency, and you at least have that as evidence that he might know what he's doing. The Internet offers no such evidence. I fear that our society is quickly entering very dangerous territory where there's no such thing as expertise, and where unaccountable, probabilistic tools and web resources of unknown provenance are seen as just as good as an expert in his field.
gaudystead · 2 hours ago | parent
I don't disagree with you, but if I prompted an LLM to ask me questions the way a doctor would for a non-invasive assessment, would it ask better or worse questions than an actual doctor? I ask somewhat rhetorically, to get the mind thinking, but I'm legitimately curious whether, just from a verbal survey, the AI doctor would ask about things more directly related to whatever illness it suspects, versus a human who might narrow factors down like a 90s TV "ghost speaker": someone fishing for matches in a fairly large dataset.