delusional 11 hours ago

> It's not a coincidence that I train a model on healthcare regulations and it answers a question about healthcare regulations

If you train a model on only healthcare regulations, it won't answer questions about healthcare regulation; it will produce text that looks like healthcare regulations.
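A toy sketch of the distinction being made here, with a bigram Markov model standing in for an LLM and a made-up "regulations" corpus (everything in it is illustrative, not a real training setup):

```python
import random
from collections import defaultdict

# Hypothetical stand-in for a "healthcare regulations" training corpus.
corpus = (
    "the provider shall submit the claim within thirty days . "
    "the insurer shall review the claim within sixty days . "
    "the provider shall retain records for seven years ."
).split()

# "Training": record which token follows which.
successors = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    successors[a].append(b)

def generate(start, n, seed=0):
    """Sample n tokens by repeatedly picking an observed successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        options = successors.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the", 8))
```

The output reads like more regulation text, because every token is a continuation seen in the corpus; there is no mechanism here for answering a question such as "how long must records be kept?", only for continuing a prompt. Whether scaled-up LLMs escape this limitation is exactly the point under dispute in this thread.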

mexicocitinluez 11 hours ago | parent [-]

And that's not a coincidence. That's not what the word "coincidence" means. It's a complete misunderstanding of how these tools work.

delusional 11 hours ago | parent [-]

I don't think you're the right person to make any claim of "complete misunderstanding" when you claim that training an LLM on regulations would produce a system capable of answering questions about that regulation.

anthonylevine 11 hours ago | parent [-]

> you claim that training an LLM on regulations would produce a system capable of answering questions about that regulation.

Huh? But it does do that? What do you think training an LLM entails?

Are you of the belief that an LLM trained on non-medical data would have the same statistical chance of answering a medical question correctly?

We're at the "redefining what words mean in order to not have to admit I was wrong" stage of this argument.