laterium 2 days ago

The issue you're overlooking is the scarcity of experts. You're comparing the current situation to an alternative universe where every person can ask a doctor their questions 10 times a day and instantly get an accurate response.

That is not the reality we're living in. Doctors barely give you 5 minutes even if you get an appointment days or weeks in advance. There is just nobody to ask. The alternatives today are

1) Don't ask and rely on yourself, which is definitely worse than asking a doctor.

2) Ask an LLM, which gets you 80-90% of the way there.

3) Google it and spend hours sifting through sponsored posts and scams, often worse than relying on yourself.

The hallucinations that happen are massively outweighed by the benefits people get by asking them. Perfect is the enemy of good enough, and LLMs are good enough.

Much more important also is that LLMs don't try to scam you, don't try to fool you, don't look out for their own interests. Their mistakes are not intentional. They're fiduciaries in the best sense, just like doctors are, probably even more so.

ozgung a day ago | parent | next [-]

Chronologically, our main sources of information have been:

1. People around us

2. TV and newspapers

3. Random people on the internet and their SEO-optimized web pages

Books and experts have been less popular. LLMs are an improvement.

ahartmetz a day ago | parent | next [-]

Interesting point, actually - LLMs are a return to curated information. In some ways. In others, they tell everyone what they want to hear.

martin-t a day ago | parent | prev [-]

> LLMs are an improvement.

Unless somebody is using them to generate authoritative-sounding, human-sounding text full of factoids and half-truths in support of a particular view.

Then it becomes about who can afford more LLMs and more IPs to look like individual users.

georgefrowny a day ago | parent | prev | next [-]

> Much more important also is that LLMs don't try to scam you, don't try to fool you, don't look out for their own interests.

When the appreciable-fraction-of-GDP money tap turns off, there's going to be enormous pressure to start putting a finger on the scale here.

And AI spew is theoretically a fantastic place to insert almost subliminal contextual adverts in a way that traditional advertising can only dream about.

Imagine if it could start gently shilling a particular brand of antidepressant if you started talking to it about how you're feeling lonely and down. I'm not saying you should do that, but people definitely do.

And then multiply that by every question you do ask. Ask whether you need new tyres. "Yes, you should absolutely change tyres every year, whether noticeably worn or not. KwikFit are generally considered the best place to have this done. Of course I know you have a Kia Picanto - you should consider that actually a Mercedes C class is up to 200% lighter on tyre wear. I have searched and found an exclusive 10% offer at Honest Jim's Merc Mansion, valid until 10pm. Shall I place an order?"

Except it'll be buried in a lot more text and set up with more subtlety.

lithocarpus 16 hours ago | parent | next [-]

I've been envisioning a market for agendas, where players bid for the AI companies to nudge their LLMs toward a given agenda. It would be subtle and not visible to users. Probably illegal, but I imagine it will happen to some degree. Or at the very least the government will want the "levers" to adjust various agendas, the same way they did with Covid.

I despise all of this. For the moment, though, before all this is implemented, it's perhaps a brief golden age of LLM usefulness. (And I'm sure LLMs will remain useful for many things, but there will be entire categories where they're ruined by pay-to-play, the same as happened with Google search.)

otabdeveloper4 a day ago | parent | prev | next [-]

> When the appreciable-fraction-of-GDP money tap turns off, there going to be enormous pressure to start putting a finger on the scale here.

Yeah, back in the day, before monetization, Internet pages were informative, reliable, and ad-free too.

georgefrowny a day ago | parent [-]

One difference is that the early internet was heavily composed of enthusiastic individuals. AI is almost entirely corporate and money-focused.

Even most hobby AI projects mostly seem to have an eye on being a side hustle or CV buffing.

Perhaps it's because even in the 90s you could serve a website for basically free (once you had the server). AI today has a noticeable per-user cost.

otabdeveloper4 18 minutes ago | parent [-]

> AI is almost entirely corporate and money-focused.

This is untrue. There's a huge landscape of locally-hosted AI stuff, and people there are doing genuinely interesting research. The problem is that 99% of it is pornography-focused, so understandably it's very underground.

chickensong a day ago | parent | prev [-]

> Imagine if it could start gently shilling a particular brand of antidepressant if you started talking to it about how you're feeling lonely and down. I'm not saying you should do that, but people definitely do.

Doctors already shill for big pharma. There are trust issues all the way down.

johnecheck a day ago | parent | next [-]

> There are trust issues all the way down.

Nonetheless, we must somehow build trust in others and denounce the undeserving. Some humans deserve trust. Will these AI models?

markdown a day ago | parent | prev [-]

> Doctors already shill for big pharma.

This is not the norm worldwide.

chickensong a day ago | parent [-]

I hope you're right and that it remains that way, but TBH my hopes aren't high.

Big pharma corps are multinational powerhouses that behave like all other big corps, doing whatever they can to increase profits. It may not be direct product placement, kickbacks, or bribery on the surface, but how about an expense-paid trip to a sponsored conference or a small research grant? Soft money gets their foot in the door.

thayne a day ago | parent | prev | next [-]

But the LLM was probably trained on all the sponsored posts and scams. It isn't clear to me that an LLM response is any more reliable than sifting through Google results.

dgemm 2 days ago | parent | prev | next [-]

This seems true for our moment in time, but looking forward I'm not sure how long it will stay that way. The LLMs will inevitably need to find a sustainable business model, so I can very much see them becoming enshittified the way Google did, eventually making 2) and 3) more similar to each other.

jonas21 a day ago | parent [-]

An alternative business model is that you, or more likely your insurance, pays $20/mo for unlimited access to a medical agent, built on top of an LLM, that can answer your questions. This is good for everyone -- the patient gets answers without waiting, the insurer gets cost savings, doctors have a less hectic schedule and get to spend more time on the interesting cases, and the company providing the service gets paid for doing a good job -- which gives that company a strong incentive to drive the hallucination rate down to zero (or at least below the average physician's).

TheOtherHobbes a day ago | parent | next [-]

The medical industry relies on scarcity and it's also heavily regulated, with expensive liability insurance, strong privacy rules, and a parallel subculture of fierce negligence lawyers who chase payouts very aggressively.

There is zero chance LLMs will just stroll into this space with "Kinda sorta mostly right" answers, even with external verification.

Doctors will absolutely resist this, because it means the impending end of their careers. Insurers don't care about cost savings because insurers and care providers are often the same company.

Of course true AGI will eventually - probably quite soon - become better at doctoring than many doctors are.

But that doesn't mean the tech will be rolled out to the public without a lot of drama, friction, mistakes, deaths, and traumatic change.

corndoge a day ago | parent | prev | next [-]

https://hippocraticai.com/

adriand a day ago | parent | prev [-]

This is a great idea, and insurance companies as the customer is brilliant. I could see this extending to prescribing as well. There are huge numbers of people who would benefit from more readily prescribed drugs like GLP-1s, and these have large potential to decrease chronic disease.

girvo a day ago | parent [-]

> I could see this extend to prescribing as well.

The western world is already solving this, but not through letting LLMs prescribe (because that's a non-starter for liability reasons).

Instead, nurses and allied health professionals are getting prescribing rights in their fields (under doctors, but it still scales much better).

bsder a day ago | parent | prev | next [-]

> 2) Ask an LLM, which gets you 80-90% of the way there.

The Internet was 80%-90% accurate to begin with.

Then the Internet became worth money. And suddenly that accuracy dropped like a stone.

There is no reason to believe that ML/AI isn't going to speedrun that process.

ponector a day ago | parent | prev | next [-]

>> LLMs don't try to scam you, don't try to fool you, don't look out for their own interests

LLMs don't try to scam/fool you, LLM providers do.

Remember how Grok bragged that Musk had the “potential to drink piss better than any human in history” and was the “ultimate throat goat,” whose “blowjob prowess edges out” Donald Trump’s? Grok also posited that Musk was more physically fit than LeBron James, and that he would have been a better recipient of the 2016 porn industry award than porn star Riley Reid.

etra0 a day ago | parent [-]

Completely off-topic, but I just love how Musk's pettiness was exploited by the Twitter community.

I had a chuckle reading all of these.

eastbound 2 days ago | parent | prev | next [-]

Excellent way of putting it. Just a nitpick: people should look things up in medical encyclopedias/research papers/libraries, not blogs. That requires the ability to find and summarize… which is exactly what AI is excellent at.

jay_kyburz a day ago | parent [-]

"Where There Is No Doctor" would be a good place to start. https://hesperian.org/

JackSlateur a day ago | parent | prev | next [-]

"Much more important also is that LLMs don't try to scam you, don't try to fool you, don't look out for their own interests"

This is so naive, especially since both Google and OpenAI openly admit to manipulating the data for their own agenda (ads, but not only ads).

AI is a skilled liar

You can always pride yourself on playing with fire, but the more humble attitude would be to avoid it at all costs.

etra0 a day ago | parent | prev | next [-]

> 2) Ask an LLM, which gets you 80-90% of the way there.

Hallucinations and sycophancy are still an issue; 80-90% is being generous, I think.

I know these are not issues with the LLM itself, but rather with the implementation and the companies behind them (since there are open models as well). Still, what stops LLMs from being enshittified by corporate needs?

I've seen this very recently with Grok: people were posing trolley-problem-style questions comparing Elon Musk to anything, and Grok chose Elon Musk most of the time, probably because it's embedded in the system prompt or the training [1].

[1] https://www.theguardian.com/technology/2025/nov/21/elon-musk...

bgwalter 2 days ago | parent | prev | next [-]

> Much more important also is that LLMs don't try to scam you, don't try to fool you, don't look out for their own interests.

They follow their corporations instead. Just look at the status-quoism of the free "Google AI" and the constant changes in Grok, which xAI is increasingly locking down, perhaps to stay in line with EU regulations. But Grok is also increasingly pro-billionaire.

Copilot was completely locked down on anything political before the 2024 election.

They all scam you according to their training and system prompts. Have you seen the minute change in the system prompt that led to MechaHitler?

andrepd a day ago | parent | prev [-]

Two MAJOR issues with your argument.

> where every person can ask a doctor their questions 10 times a day and instantly get an accurate response.

Why in god's name would you need to ask a doctor 10 questions every day? How is this in any way germane to this issue?

In any first-world country you can get a GP appointment free of charge either on the day or with a few days' wait, depending on the urgency. Not to mention emergency care / 112 any time of day or night if you really need it. This has existed for decades in most vaguely social-democratic countries in the world (but not only those). So you can get professional help from someone; there's no (absurd) false choice between "asking the stochastic platitude generator" and "going without healthcare".

But I know, right? A functioning health system with the right funding, management, and incentives! So boring! Yawn yawn, not exciting. GP practices don't get trillions of dollars in VC money.

> Ask an LLM, which gets you 80-90% of the way there.

This is such a ridiculous misrepresentation of the current state of LLMs that I don't even know how to continue a conversation from here.

markdown a day ago | parent | next [-]

> In any first-world country you can get a GP appointment free of charge

Are you really under the assumption that this is a first-world perk?

andrepd a day ago | parent [-]

You're right, it's also true in many middle-income countries, like Brazil.

markdown 7 hours ago | parent [-]

And also true in "third world" countries.

andrepd a day ago | parent | prev [-]

I love that the next day, I open this post and it has simply been downvoted with zero counterpoints.