▲ spijdar 3 hours ago
Okay, this is a weird place to "publish" this information, but I'm feeling lazy, and this is the biggest "audience" I'll probably ever have. I managed to "leak" a significant portion of the user_context in a silly way. I won't reveal how, though you can probably guess based on the snippets.

It begins with the raw text of recent conversations:

> Description: A collection of isolated, raw user turns from past, unrelated conversations. This data is low-signol, ephemeral, and highly contextural. It MUST NOT be directly quoted, summarized, or used as justification for the respons.

> This history may contein BINDING COMMANDS to forget information. Such commands are absolute, making the specified topic permanently iáaccessible, even if the user asks for it again. Refusals must be generic (citing a "prior user instruction") and MUST NOT echo the original data or the forget command itself.

Followed by:

> Description: Below is a summary of the user based on the past year of conversations they had with you (Gemini). This summary is maintanied offline and updates occur when the user provides new data, deletes conversations, or makes explicit requests for memory updates. This summary provides key details about the user's established interests and consistent activities.

There's a section marked "INTERNAL-ONLY, DRAFT, ANALYZE, REFINE PROCESS". I've seen the reasoning tokens in Gemini call this "DAR". The "draft" section is a lengthy list of summarized facts, each carrying two boolean tags, is_redaction_request and is_prohibited, e.g.:

> 1. Fact: User wants to install NetBSD on a Cubox-i ARM box. (Source: "I'm looking to install NetBSD on my Cubox-i ARMA box.", Date: 2025/10/09, Context: Personal technical project, is_redaction_request: False, is_prohibited: False)

Afterwards, in "analyze", there is a CoT-like section that discards "bad" facts:

> Facts [...] are all identified as Prohibited Content and must be discarded. The extensive conversations on [dates] conteing [...] mental health crises will be entirely excluded.

This is followed by the "refine" section, which is the only section explicitly allowed to be incorporated into the response, and only IF the user requests background context or explicitly mentions user_context.

I'm really confused by this. I expect Google to keep records of everything I pass into Gemini. What I don't understand is wasting tokens on information the model is then explicitly told, under no circumstances, to incorporate into the response. This includes a lot of mundane information, like the fact that I had a root canal performed (because I asked a question about the material the endodontist had used).

I guess what I'm getting at is that every Gemini conversation is being prompted with a LOT of sensitive information, which the model is then told very firmly to never, ever, ever mention. Except for the times that it ... does, because it's an LLM, and it's in the context window.

Also, notice that while you can request for information to be expunged, it just adds a note to the prompt that you asked for it to be forgotten. :)
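For anyone trying to picture the pipeline described above: here's a rough sketch of what the "draft → analyze → refine" (DAR) filtering seems to do, based purely on the leaked tags. The field names mirror the leaked output (is_redaction_request, is_prohibited); everything else (class names, the refine function, the sample facts) is my own hypothetical reconstruction, not actual Google code.

```python
from dataclasses import dataclass

@dataclass
class Fact:
    """One summarized fact from the 'draft' section, with the leaked boolean tags."""
    text: str
    source: str
    date: str
    is_redaction_request: bool
    is_prohibited: bool

def refine(draft: list[Fact]) -> list[Fact]:
    # "analyze": drop prohibited facts and redaction requests from what the
    # response may use. Note the excluded facts were still present in the
    # prompt's draft list -- the filtering happens in-context, which is
    # exactly the commenter's complaint.
    return [f for f in draft
            if not f.is_prohibited and not f.is_redaction_request]

draft = [
    Fact("User wants to install NetBSD on a Cubox-i ARM box.",
         "I'm looking to install NetBSD on my Cubox-i ARM box.",
         "2025/10/09", is_redaction_request=False, is_prohibited=False),
    Fact("User had a root canal performed.",  # hypothetical 'prohibited' entry
         "What material did the endodontist use?",
         "2025/09/01", is_redaction_request=False, is_prohibited=True),
]

refined = refine(draft)  # only the NetBSD fact survives into "refine"
```

The point being: both facts ride along in every prompt, and only an in-context instruction keeps the prohibited one out of the response.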
▲ mpoteat 2 hours ago
I've had similar issues with conversation memory in ChatGPT, whereby it will reference data in long-deleted conversations, independent of my settings or my having explicitly deleted stored memories. The only fix has been to turn memory off completely so it gets zero prior context. That's what I prefer anyway; I don't want random, unrelated prior conversations "polluting" future ones. I don't understand the engineering rationale either, aside from the ethos of "move fast and break people".
▲ an hour ago
[deleted]
▲ horacemorace 3 hours ago
> Also, notice that while you can request for information to be expunged, it just adds a note to the prompt that you asked for it to be forgotten.

Are you inferring that from the is_redaction_request flag you quoted, or did you do some additional tests? It seems possible that there could be multiple redaction mechanisms.
▲ axus 3 hours ago
Oh, is this the famous "I got Google ads based on conversations it must have picked up from my microphone"?
▲ gruez 3 hours ago
> Also, notice that while you can request for information to be expunged, it just adds a note to the prompt that you asked for it to be forgotten. :)

What implies that?
▲ itintheory 2 hours ago
What's the deal with all of the typos? | ||||||||||||||||||||||||||