adastra22 2 days ago
I certainly hope my medical team is using AI tools, as they have repeatedly been demonstrated to be more accurate than doctors. The only downside is that my last psychiatrist dropped me as a patient when he left his practice to start an AI company providing regulatory compliance for, essentially, Dr. ChatGPT.
wobfan a day ago | parent
> I certainly hope my medical team is using AI tools, as they have been repeatedly demonstrated to be more accurate than doctors.

AI is not a new tool - transformer-based LLMs are, and they are what this post is about. The latter are well known to be a lot less accurate, and are still very prone to hallucination. That is just a fact. For the sake of your health, I hope no one on your medical team is using the current generation for anything other than casual questions. I'm not an opponent, and I don't think straight-up banning LLM-generated code commits is the right move, but I can understand their stance.
globular-toast a day ago | parent
Honestly, it just sounds like you've been sold on "AI" being a single thing and don't have any idea how any of it works. I don't even know what you're referring to with "more accurate than doctors". Classifying scans or something? Do you realise how different that is from generative LLMs writing code and the like? Scan classification may well have been shown to be more accurate, but generative LLMs have never been shown to be "better" than humans, and in fact it's easy to demonstrate they are much, much worse in many ways.