terminalshort 13 hours ago

Why are you saying we shouldn't get AI advice without a "professional", then? Why is everybody here saying "in my experience it's just as good or better, but we need rules to make people use the worse option"? I have narcolepsy and it took a dozen doctors before they got it right. AI nails the diagnosis. Everybody should be using it.

RobertDeNiro 11 hours ago | parent | next [-]

I wonder if the reason AI is better at these diagnostics is that the amount of time it spends with the patient is unbounded, whereas a doctor is always restricted by the amount of time they have with the patient.

pinnochio 6 hours ago | parent [-]

I don't think we can say it's "better" based on a bunch of anecdotes, especially when they're coming exclusively from people who are more intelligent, educated, and AI-literate than most of the population. But it is true that doctors are far more rushed than they used to be, disallowed from providing the attentiveness they'd like or ought to give to each patient. And knowledge and skill vary across doctors.

It's an imperfect situation for sure, but I'd like to see more data.

pinnochio 13 hours ago | parent | prev | next [-]

Survivorship bias.

teitoklien 8 hours ago | parent [-]

Spend some time working with doctors and then we'll see all the bias, if one is still surviving lol. Doctors are in one of the most corrupt professions: many are more focused on selling drugs they get paid a commission to promote, or they pile on tons and tons of expensive medical tests that they themselves often know aren't needed, but order anyway, either out of fear of being sued for negligence later or because, again, THEY GET A COMMISSION from the testing agencies for sending them clients.

And even with all of that info, they still often come to the wrong conclusions. Doctors play a critically important role in our society, and during covid they risked their lives for us more than anyone else. I do not want to insult or diminish the amount of hard work doctors do for society.

But worshipping them as holier-than-thou gods is bullshit, a conclusion almost anyone who has spent years going back and forth with various doctors will come to.

Having an AI assistant for medical hints doesn't hurt. We need to make Personal Responsibility popular again: in society's obsession with making everything "idiot proof" or "baby proof", we keep losing all sorts of useful and interesting solutions, because our politicians have a strong itch to regulate anything and everything they can get their hands on, to leave a mark on society.

pinnochio 7 hours ago | parent [-]

> But worshipping them as holier than thou gods is bullshit

I'd say the same about AI.

teitoklien 7 hours ago | parent [-]

> I'd say the same about AI.

And you’d be right, so society should let people use AI while warning them about all the risks related to it, without banning it or hiding it behind 10,000 lawsuits and making it disappear by coercion.

ares623 12 hours ago | parent | prev | next [-]

How do you hold the AI accountable when it makes a mistake? Can you take away its license "individually"?

terminalshort 12 hours ago | parent | next [-]

I would care about this if doctors were held accountable for their constant mistakes, but they aren't except in extreme cases.

bfLives 9 hours ago | parent | prev | next [-]

Does it matter? I’d rather use a 90% accurate tool than an 80% accurate one that I can subject to retribution.

mensetmanusman 9 hours ago | parent | prev [-]

If it makes a mistake? You're not required to follow the AI; just use it as a tool for consideration.

ares623 9 hours ago | parent [-]

Doesn't sound very $1 trilliony

buu700 11 hours ago | parent | prev | next [-]

Aside from AI skepticism, I think a lot of it likely comes from low expectations of what the broader population would get out of it. Writing, reading comprehension, critical thinking, and LLM-fu may be skills that come naturally to many of us, but at the same time many others who "do their own research" also fall into rabbit holes and arrive at wacky conclusions like flat-Eartherism.

I don't agree with the idea that "we need rules to make people use the worse option" — contrary to prevailing political opinion, I believe people should be free to make their own mistakes — but I wouldn't necessarily rush to advocate that everyone start using current-gen AI for important research either. It's easy to imagine that an average user might lead the AI toward a preconceived false conclusion or latch onto one particular low-probability possibility presented by the AI, badger it into affirming a specific answer while grinding down its context window, and then accept that answer uncritically while unknowingly neglecting or exacerbating a serious medical or legal issue.

cpfohl 10 hours ago | parent | prev [-]

I'm saying it is a great tool for people who can see through the idiotic nonsense it so often makes up. A professional _has_ the context to see through it.

It should empower and enable informed decisions, not make them.