cruffle_duffle 4 hours ago
That is why you always have to have it ground itself in something. Have it search for relevant research or professional whatever and pull that into context; otherwise it's just your word plus its training data.

I had to deal with a close family friend going through alcohol withdrawal and getting checked into a recovery clinic for detox, and I used Claude heavily. The first thing I had it do was that "deep research" around the topic of alcohol addiction, withdrawal, etc., and then I made that a project document, along with clear guidelines that it shouldn't make inferences beyond what was in its context and supporting docs. We also spent a whole session crafting a good set of instructions (making sure it was using Anthropic's own guidelines for its model). Little differences in prompts make a huge difference in the output.

I dunno. It is possible to use these models for dumping crazy shit you are going through. But don't kid yourself about their output, and aggressively find ways to stomp out things it has no real way to say authoritatively.