▲ wps 14 hours ago
I see what you mean, but I like having a clean slate even for those one-off questions. I don’t want a differing answer to a philosophical inquiry just because the LLM remembers a prior position I’ve written about, you know?
▲ Retr0id 10 hours ago | parent
I have all the history settings off for this reason, but something that worries me is that there's a fair bit of information about me trained right into the model weights. I'm not "famous" by any stretch, but Claude has awareness of some of my HN-front-page-hitting projects, etc., which I think should be enough to bias responses (although I haven't tried to measure it). I set my name to "User" in the settings, so in a clean-slate chat it has nothing to go on, but the moment Claude Code does something like `git log` it knows who I am again. I've even considered writing some kind of redaction proxy.
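The redaction-proxy idea could be sketched minimally: wrap shell commands and scrub identifying strings from their output before the model ever sees them. Everything below is a hypothetical illustration — the name, email, and placeholder values are made up, not anyone's real data or an existing tool.

```python
import re
import subprocess

# Hypothetical identifiers to scrub -- placeholder examples only.
REDACTIONS = [
    (re.compile(r"Jane Doe"), "User"),
    (re.compile(r"jane@example\.com"), "user@example.com"),
]

def redact(text: str) -> str:
    """Replace each identifying pattern with its placeholder."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

def run_redacted(cmd: list[str]) -> str:
    """Run a command and return its stdout with identifiers scrubbed,
    so e.g. `git log` author lines no longer reveal who you are."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return redact(result.stdout)
```

In practice you'd want this sitting between the agent and the shell (or git itself), but the core of it is just a string filter like the one above.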
▲ e1g 14 hours ago | parent
FWIW, both OpenAI and Anthropic have a toggle for a “Temporary/Incognito Chat” that does not use or update memory. I too wish this were the default, and then you could opt in at the end of the chat to save some long-term aspects into memory.