zehaeva 3 days ago

What if you had told it again that you don't think that's right? Would it have stuck to its guns and gone "oh, no, I am right here", or would it have backed down and said "Oh, silly me, you're right, here's the real dosage!" and given you something wrong again?

I do agree that to get the full use out of an LLM you should have some familiarity with what you're asking about. If you didn't already have a sense of what a reasonable dosage is, why wouldn't 100mcg seem like the right one?

cj 3 days ago | parent

I replied in the same thread "Are you sure that sounds like a low dose". It stuck to the (correct) recommendation in the 2nd response, but added a few use cases for higher doses. So it seems like it stuck to its guns for the most part.

For things like this, it would definitely be better for it to act more like a search engine and direct me to trustworthy sources rather than try to provide the information directly.

stevedotcarter 2 days ago | parent

I noticed this recently when I saw someone post an AI-generated map of Europe that was all wrong. I tried the same and asked ChatGPT to generate a map of Ireland, and it was wrong too. So then I asked it to find me some accurate maps of Ireland, and instead of generating one it gave me images and links to proper websites.

Will definitely remember to use "generate" vs "find" in my prompts depending on what I'm looking for. Not quite sure how you would train the model to know which kind of answer is more suitable.