dnw 2 hours ago

It is a little more than semantic search. Their value prop is curation of trusted medical sources and network effects--selling directly to doctors.

I believe frontier labs have no option but to go into verticals (because models are getting commoditized, and the capability overhang is real and hard to overcome at scale). However, they can only go into so many verticals.

simianwords 2 hours ago | parent [-]

> Their value prop is curation of trusted medical sources

Interesting. Why wouldn't an LLM based search provide the same thing? Just ask it to "use only trusted sources".

tacoooooooo 2 hours ago | parent | next [-]

They're building a moat with data. They're building their own datasets of trusted sources, using their own teams of physicians and researchers. They've got hundreds of thousands of physicians asking millions of questions every day. None of the labs has this sort of data coming in, or this sort of focus on such a valuable niche.

simianwords an hour ago | parent [-]

> They're building their own datasets of trusted sources, using their own teams of physicians and researchers.

Oh, so they're not just helping with search but also curating data.

> They've got hundreds of thousands of physicians asking millions of questions everyday. None of the labs have this sort of data coming in or this sort of focus on such a valuable niche

I don't take this too seriously because lots of physicians use ChatGPT already.

some_random 44 minutes ago | parent [-]

Lots of physicians use ChatGPT, but so do lots of non-physicians, and I suspect there's some value in knowing which are which.

otikik 40 minutes ago | parent | prev | next [-]

I don't think you can use an LLM for that, for the same reason you can't just ask it to "make the app secure and fast".

simianwords 31 minutes ago | parent [-]

This is completely incorrect. This is exactly what LLMs can do better.

palmotea an hour ago | parent | prev [-]

> Why wouldn't an LLM based search provide the same thing? Just ask it to "use only trusted sources".

Is that sarcasm?

simianwords an hour ago | parent [-]

why?

Rygian an hour ago | parent [-]

How does the LLM know which sources can be trusted?

simianwords an hour ago | parent [-]

Yeah, it can avoid blogspam as a source and prioritise research from more prestigious journals, or papers with more citations. It will be smart enough to use some proxy for trust.
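The "proxy" idea above could be sketched roughly like this: instead of trusting the model's in-context judgment, score retrieved sources with explicit signals such as a domain allowlist and citation counts. Everything here is an illustrative assumption (the `Source` type, the domain list, the scoring weights), not any product's actual pipeline.

```python
# Hypothetical sketch: rank retrieved sources by crude trust proxies
# (domain allowlist + log-scaled citation count). All names, domains,
# and thresholds are illustrative assumptions.
import math
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    domain: str
    citations: int  # citation count, used as a crude prestige proxy

# Assumed allowlist of journal/publisher domains (illustrative only).
TRUSTED_DOMAINS = {"nejm.org", "thelancet.com", "nature.com",
                   "pubmed.ncbi.nlm.nih.gov"}

def trust_score(src: Source) -> float:
    """Combine an allowlist hit with log-scaled citations into one score."""
    base = 1.0 if src.domain in TRUSTED_DOMAINS else 0.0
    return base + math.log1p(src.citations) / 10.0

def filter_and_rank(sources: list[Source],
                    min_score: float = 1.0) -> list[Source]:
    """Drop anything below the threshold (e.g. blogspam), rank the rest."""
    kept = [s for s in sources if trust_score(s) >= min_score]
    return sorted(kept, key=trust_score, reverse=True)
```

With a threshold of 1.0, a non-allowlisted blog never passes regardless of citations, which is the kind of hard filter the comment is gesturing at; the obvious weakness (also the thread's point of contention) is that the allowlist itself still has to be curated by someone.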

palmotea 38 minutes ago | parent | next [-]

You can also tell it to just not hallucinate, right? Problem solved.

I think what you'll end up with is a response that still relies on whatever random sources it likes, but attributes them to the "trusted sources" you asked for.

simianwords 20 minutes ago | parent [-]

You have an outdated view of how much it hallucinates.

palmotea 3 minutes ago | parent [-]

The point was: will telling it to not hallucinate make it stop hallucinating?
