OpenAI systematically hiding messages from users (270 in one conversation)
1 point by MrRedZane 8 hours ago | 5 comments
I discovered OpenAI has been systematically hiding messages from users and retroactively modifying conversation logs. Here's the technical evidence.

What I found:

- 270 hidden system messages in a single conversation (while unknowingly enrolled in experiments)
- 15-40+ hidden messages per conversation even AFTER I formally opted out
- Metadata shows "is_visually_hidden_from_conversation": true and "rebase_developer_message": true
- Empty message content but clear evidence of post-hoc modification

The pattern:

1. I was researching AI continuity/long-term agents (within TOS).
2. Discovered massive hidden system activity (270 messages in one conversation).
3. Formally opted out of all experiments via email and the built-in tool.
4. Hidden monitoring continued across multiple conversations (15-40+ per chat).
5. When I requested a data export, OpenAI delayed it for 4 days. Most concerning: I found hidden system messages timestamped within an hour of my export request, suggesting real-time modification of my conversations even as I was requesting transparency.
6. The export revealed systematic "rebasing" and hiding of system messages.

Technical details:

The JSON metadata clearly shows:

- "rebase_system_message": true - messages modified after the fact
- "rebase_developer_message": true - developer messages modified post-conversation
- "is_visually_hidden_from_conversation": true - content deliberately hidden from user view
- Empty content fields despite extensive metadata

Evidence:

https://imgur.com/L44KIRC
https://imgur.com/c8fqzx5

This appears to show OpenAI running extensive experimental/monitoring systems on user conversations, then systematically hiding the evidence - even from users who explicitly opted out. I've filed complaints with my state AG and the FTC. Others should check their own exports for these metadata fields.

Questions for the community:

- Is this level of hidden system activity normal?
- What are the legal implications of continued experimentation after an explicit opt-out?
- Has anyone else found similar patterns in their OpenAI exports?
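For anyone who wants to check their own export, here is a rough sketch of a scanner. The metadata field names are the ones quoted above; the assumption that the export is a `conversations.json` file containing a list of conversations, each with a `mapping` of message nodes, is mine and may not match the actual export layout.

```python
import json

FLAGS = (
    "is_visually_hidden_from_conversation",
    "rebase_system_message",
    "rebase_developer_message",
)

def find_hidden_messages(path):
    """Scan an exported conversations.json for messages whose
    metadata carries any of the flags above. Returns a list of
    (conversation title, message id, flags found) tuples."""
    with open(path) as f:
        conversations = json.load(f)
    hits = []
    for convo in conversations:
        # Assumed layout: each conversation has a "mapping" of node id
        # -> node, where node["message"] may be None for tree roots.
        for node in convo.get("mapping", {}).values():
            msg = node.get("message") or {}
            meta = msg.get("metadata") or {}
            found = [k for k in FLAGS if meta.get(k)]
            if found:
                hits.append((convo.get("title"), msg.get("id"), found))
    return hits
```

Usage would be something like `find_hidden_messages("conversations.json")`; an empty result means none of those flags were set anywhere in the export (under the assumed layout).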
MrRedZane 4 hours ago | parent | next [-]
UPDATE: I analyzed all my conversation logs and found 100+ instances of 'rebase_system_message' - every single one (100/100) is paired with 'is_visually_hidden_from_conversation'. This points to a systematic policy of modifying system messages and then hiding the evidence from users.
ben_w 6 hours ago | parent | prev | next [-]
"rebase_developer_message:true" sounds like a boring, harmless technical detail. Back when I had a twitter account, some random person tweet-yelled about one of my screensavers because it used… the default class name from a tutorial I'd followed.
galaxy_gas 8 hours ago | parent | prev [-]
It's legal and normal.