jwpapi 3 hours ago

Could this just be memory? It's not clear that it actually isn't.
afro88 2 hours ago | parent

It's not, but the author did say they have run this test against models as they come out. So it's possible that those earlier tests put the unpublished text into the training data for the next model, somehow linked back to the author's identity.
jwolfe 3 hours ago | parent

The comments on the article include other people replicating all or parts of the finding. I'm also pretty confident Kelsey Piper wouldn't fail to disable memory while simultaneously talking about how Claude's incognito mode is insufficient to prevent the app from handing it your name.
gs17 3 hours ago | parent

They mention running it through the API as well.
michaelchisari 2 hours ago | parent

"I did not have memory enabled, nor did I have information about me associated with my account; I did these tests in Incognito Mode. To make sure it wasn't somehow feeding my account information to Claude even in Incognito Mode, I asked a friend to run these tests on his computer, and he received the same result; I also got the same result when I tested it through the API."

Given those precautions, if it is just memory or some form of deanonymization, that's also cause for concern.