| ▲ | 1vuio0pswjnm7 13 hours ago |
| NB. There is no order to "collect". The order is to preserve what is already being collected and stored in the ordinary course of business https://ia801404.us.archive.org/31/items/gov.uscourts.nysd.6... https://ia801404.us.archive.org/31/items/gov.uscourts.nysd.6... |
|
| ▲ | 1vuio0pswjnm7 9 hours ago | parent | next [-] |
| Why does OpenAI collect and retain for 30 days^1 chats that the user wants deleted? It was doing this prior to being sued by the NYT and many others. OpenAI was collecting chats even when the user asked for deletion, i.e., even when the user did not want them saved. That's why a lawsuit could require OpenAI to issue a hold order, retain these chats for longer, and produce them to another party in discovery. If OpenAI had not been collecting these chats in the ordinary course of its business before being sued by the NYT and many others, then there would be no "deleted chats" for OpenAI to be compelled by court order to retain and produce to the plaintiffs. 1. Or whatever period OpenAI decides on. It could change at any time for any reason. However, OpenAI cannot change its retention policy to some shortened period after being sued. Google tried this a few years ago: it began destroying chats between employees after it was on notice that it was going to be sued by the US government and state AGs. |
| |
| ▲ | jacquesm 4 hours ago | parent | next [-] | | I'd trust Sam Altman about as far as I could throw him and there is absolutely no way OpenAI should be having sensitive private conversations with anybody. Sooner or later all that data will end up with Microsoft who can then correlate it with a ton of data they already have from other sources (windows, office online, linkedin, various communications services including 'teams', github and so on). This is an intelligence service's wet dream. | |
| ▲ | FloorEgg 8 hours ago | parent | prev | next [-] | | I'm not commenting on the core point of your comment, only the "why retain for 30 days" question. In an age of automated backups and failovers, deleting can be really hard. Part of the answer could simply be that syncing a delete across all the redundancies (while ensuring those redundancies are reliable when a disaster happens and they need to recover or maintain uptime) may take days to weeks. Also, the 30 days could be the upper limit, as opposed to the average or median time it takes. | | |
| ▲ | chasd00 7 hours ago | parent | next [-] | | The most likely explanation is that whatever storage solution they’re using has built-in “recycle bin” functionality, and deleted data stays there for 30 days before it’s actually deleted. I see this a lot in very large databases; the recycle bin functionality is built into the data store product. | | | |
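
A minimal sketch of the "recycle bin" pattern described above (all table and function names here are illustrative, not OpenAI's actual schema): a user-facing delete only stamps a `deleted_at` tombstone, and a periodic purge job hard-deletes rows whose tombstone is older than the retention window.

```python
import sqlite3
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)  # assumed retention window

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chats (id INTEGER PRIMARY KEY, body TEXT, deleted_at TEXT)")

def soft_delete(chat_id: int) -> None:
    """What the user-facing 'delete' button actually does: mark, don't remove."""
    conn.execute("UPDATE chats SET deleted_at = ? WHERE id = ?",
                 (datetime.utcnow().isoformat(), chat_id))

def purge() -> int:
    """Batch job: permanently remove rows soft-deleted more than 30 days ago."""
    cutoff = (datetime.utcnow() - RETENTION).isoformat()
    cur = conn.execute(
        "DELETE FROM chats WHERE deleted_at IS NOT NULL AND deleted_at < ?",
        (cutoff,))
    return cur.rowcount
```

Note the side effect the thread is discussing: between `soft_delete` and `purge`, the row still exists and can be put under a litigation hold simply by pausing the purge job.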
| ▲ | chemotaxis 4 hours ago | parent | prev | next [-] | | > I'm not commenting on the core point of your comment, only the "why retain for 30 days" question. In an age of automated backups and failovers, deleting can be really hard. I doubt it's that. Deletion is hard, but it's not "exactly 30 days" hard. The most likely explanation is that OpenAI wants the ability to investigate abuse and / or publicly made claims ("ChatGPT told my underage kid to <x>!" / "ChatGPT praised Hitler!"). If they delete chats right away, they're flying blind and you can claim anything you want. Now, whether you should have a "delete" button that doesn't really delete stuff is another question. | |
| ▲ | dylan604 7 hours ago | parent | prev [-] | | What is the standard way of being forced to restore from backup while ensuring deleted data does not also become restored? Is every delete request stored so that it can be replayed against any restore? | | |
| ▲ | an hour ago | parent | next [-] | | [deleted] | |
| ▲ | FloorEgg 7 hours ago | parent | prev [-] | | I have only had to manage this in a startup context with relatively low stakes, and it was hard and messy. I don't know what best practice is at the scale OpenAI operates at, but from my limited experience I have an intuition that the challenge is not trivial. I also suspect there is a big gap between best practice and common practice; my guess is that common practice is dysfunctional. I would also suspect there is no standard way, but there are established practices within different technology stacks that vary between performative, barely compliant, and effective at scale. In one case I saw, there was a substantial manual effort to load snapshots into instances, run the delete, and then save new snapshots. That was over 10 years ago, though, and it was more of a "we just need to get this done" than a "what's the most elegant way to do this at scale".
|
| |
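
The "replay every delete request against any restore" idea raised in the question above can be sketched roughly like this (a hypothetical toy model, not any particular vendor's implementation): each delete is appended to a durable log, so that restoring a backup can be followed by replaying the log to keep deleted data from resurfacing.

```python
# Durable delete log: in production this would live in its own
# replicated store, not an in-memory list.
delete_log: list[int] = []

def delete_chat(store: dict[int, str], chat_id: int) -> None:
    """Delete from the live store AND record the request itself."""
    store.pop(chat_id, None)
    delete_log.append(chat_id)

def restore(backup: dict[int, str]) -> dict[int, str]:
    """Bring back the snapshot, then replay deletes issued since."""
    store = dict(backup)
    for chat_id in delete_log:
        store.pop(chat_id, None)  # re-apply post-backup deletions
    return store
```

The log itself is retained data about the deleted items, which is part of why, as FloorEgg says, the gap between "barely compliant" and "effective" can be wide.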
| ▲ | terminalshort 8 hours ago | parent | prev [-] | | Maybe an append-only data store where actual hard deletes only happen as an async batch job? Still, 30 days seems really long for this. |
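
The append-only idea above is usually done with tombstones: writes and deletes are both just appended records, and a compaction pass drops everything for an id only once its tombstone has aged out. A rough sketch, with illustrative names and a 30-day window assumed:

```python
import time

RETENTION_SECS = 30 * 24 * 3600  # assumed 30-day retention window

# Append-only log of (op, chat_id, body, timestamp) records.
log: list[tuple] = []

def put(chat_id: int, body: str) -> None:
    log.append(("put", chat_id, body, time.time()))

def delete(chat_id: int) -> None:
    """A 'delete' is itself just an appended tombstone record."""
    log.append(("del", chat_id, None, time.time()))

def compact(now: float) -> None:
    """Async batch job: hard-delete all records for ids whose
    tombstone is older than the retention window."""
    expired = {cid for op, cid, _, ts in log
               if op == "del" and now - ts > RETENTION_SECS}
    log[:] = [rec for rec in log if rec[1] not in expired]
```

Until `compact` runs past the window, the "deleted" chat body is still physically present in the log, which is exactly the data a preservation order can reach.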
|
|
| ▲ | Aurornis 12 hours ago | parent | prev | next [-] |
The two documents you linked are responses to specific parts of OpenAI's objection; they're not good sources for the original order. Nevertheless, you're generally correct, though not for the reason you think: a core feature of ChatGPT is that your past conversations are kept right there, so you can click on one, review it, and continue it across all of your devices, until you specifically delete it. The court order is to preserve what is already present in the system even if the user asks to delete it. The problem now is that if a user asks for a conversation to be deleted, OpenAI has to retain it under the court order even though it appears deleted. |
|