| ▲ | kouteiheika 3 days ago |
> We’re building an age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID

Yay, more unreliable AI that will misclassify users, either letting children access content they shouldn't, or banning adults until they give up their privacy and hand their ID over to Big Brother.

> we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm

Oh, even better: so if the AI misclassifies me, it will automatically call the cops on me? And how long before this is expanded to other forms of wrongthink? Sure, let's normalize systems where the authorities are notified about what you do in private; definitely not a slippery slope that will get people in power salivating over the new possibilities such a system offers.

> “Treat our adult users like adults” is how we talk about this internally

Suuure, maybe I would have believed it if ChatGPT weren't already so ridiculously censored; this sounds like post-hoc rationalization to cover their asses, not something they've always believed in. Their models have always been incredibly patronizing and censored.

One fun anecdote: I still remember the day I first got access to DALL-E and asked it to generate an image in "soviet style", only to have my request blocked, with a big fat warning threatening me with a ban, because apparently "soviet" is a naughty word. They have always erred very strongly on the side of heavy-handed filtering and censorship; even their most recently released gpt-oss model has become a meme in the local LLM community for how often it refuses.
|
| ▲ | mhuffman 3 days ago | parent | next [-] |
> Yay, more unreliable AI that will misclassify users, either letting children access content they shouldn't, or banning adults until they give up their privacy and hand their ID over to Big Brother.

Or maybe, deep in the terms and conditions, it will add you to Altman's shitcoin company [0].

[0] https://en.wikipedia.org/wiki/World_(blockchain)
| |
| ▲ | bn-l 3 days ago | parent [-]

Never forget about Worldcoin when thinking about Altman and what he will do with power.
| ▲ | mhuffman 2 days ago | parent [-]

I am positive that his end goal with Worldcoin is a global ID system he can market to governments and businesses. We have seen what he is about, and he is not a person who should be running that kind of business.
|
|
|
| ▲ | tempodox 3 days ago | parent | prev | next [-] |
| Just seeing those words, “safety”, “freedom”, “privacy”, being used by a company like OpenAI already rang every available alarm bell for me, and their announcement indeed lives up to every bad expectation. They really are experts in making the world a worse place. |
|
| ▲ | chris_wot 3 days ago | parent | prev | next [-] |
| Gotta love the "if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm." Oh brilliant. The same authorities around the world that regularly injure or kill the mentally ill? Or parents that might be abusing their child? What a wonderful initiative! |
| |
| ▲ | Eddy_Viscosity2 3 days ago | parent | next [-] | | Swatting by AI. The future is amazing. | |
| ▲ | citizenpaul 3 days ago | parent | prev [-]

Even if they could do this (they can't), they won't. It's just a scare tactic to start getting users to show ID so OpenAI can become the de facto data broker company.
|
|
| ▲ | mcdeltat 3 days ago | parent | prev | next [-] |
| Maybe we don't have to worry about AI chatbots taking over because they will end up so censored/policed that no one will/can use them. Can't use AI if you're too young, too old, have any medical issues, have the wrong political beliefs, are religious, live in the wrong country, etc etc. (By "can't use" I mean either you're explicitly banned, or the chance of being reported to the authorities is so high that no one risks it.) |
|
| ▲ | lawn 3 days ago | parent | prev | next [-] |
| > Oh, even better, so if the AI misclassifies me it will automatically call the cops on me? How long will it take for someone to accidentally SWAT themselves? |
|
| ▲ | citizenpaul 3 days ago | parent | prev | next [-] |
| I can't wait to hear Ed Zitron's rant on this. OpenAI just showed their hand. They have no path to profitability, so they are going to the data broker well lol. |
|
| ▲ | vmg12 3 days ago | parent | prev | next [-] |
| If you were honest in your critique, the people you should be criticizing are the "think of the children" types, many of whom also use Hacker News (see https://news.ycombinator.com/item?id=45026886). There is immense societal pressure to de-anonymize the internet. I find the arguments from both sides compelling (and for at least parts of the internet, the de-anonymization case is compelling). |
| |
| ▲ | astrobe_ 3 days ago | parent | next [-]

If we want to protect kids/teens, why not create an "Internet for kids" under a specific TLD, whose owner would only accept sites that adhere to specific guidelines (moderation, no adult content, no advertising...)? Then devices could have a one-button config that restricts them to that TLD.
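A minimal sketch of what that one-button restriction could look like at the resolver level, assuming a hypothetical ".kids" TLD (no such zone exists today); the names here are illustrative, not any real filtering API:

    # Hedged sketch: a resolver-side allowlist for a hypothetical ".kids" TLD.
    # A real deployment would hook the OS stub resolver or the router's DNS
    # forwarder; this only shows the filtering rule itself.
    ALLOWED_TLD = "kids"

    def is_allowed(hostname: str) -> bool:
        # Compare whole labels so "kids.com" or "notkids.net" can't slip through.
        labels = hostname.rstrip(".").lower().split(".")
        return labels[-1] == ALLOWED_TLD

    assert is_allowed("games.example.kids")
    assert not is_allowed("example.com")
    assert not is_allowed("example.kids.com")  # ".kids" must be the final label

Everything outside the allowed zone would simply fail to resolve.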
| ▲ | vmg12 3 days ago | parent | next [-]

I'm not suggesting solutions to any of these things; I'm also not one of the "think of the kids" people.
| ▲ | dizlexic 3 days ago | parent | prev [-]

Why have I never heard this idea? You're a genius. Can we ship it next week? The current approach is a net negative, but the TLD idea actually makes sense to me.
| ▲ | thfuran 3 days ago | parent [-] | | And as long as kids don't know about DNS, it might even work. | | |
| ▲ | astrobe_ 12 hours ago | parent | next [-]

Well, in this scenario the user isn't supposed to have access to the (DNS) configuration. But one could still enter a raw IP address in the browser; e.g. a friend with an unlocked device could resolve the site's address and pass it along. Even so, if one accesses a website by IP, the content will probably be blocked for the most part, since the links and the Ajax calls usually need DNS resolution (CDNs, etc.). Like copy protection, the scheme is probably not entirely watertight, but it can nonetheless act as a deterrent.
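The raw-IP loophole described above could itself be narrowed at the browser or proxy layer; a toy check (an assumed policy, not any real browser's behavior) using only Python's standard library:

    # A DNS-only filter never sees literal-IP URLs, so a companion
    # browser/proxy check could refuse them outright.
    import ipaddress
    from urllib.parse import urlparse

    def host_is_ip_literal(url: str) -> bool:
        host = urlparse(url).hostname or ""
        try:
            ipaddress.ip_address(host)
            return True
        except ValueError:
            return False

    assert host_is_ip_literal("http://93.184.216.34/")
    assert host_is_ip_literal("http://[2001:db8::1]/")           # IPv6 literal
    assert not host_is_ip_literal("https://games.example.kids/")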
| ▲ | dizlexic 3 days ago | parent | prev [-]

Meh, it can be implemented at many different levels.
|
|
| |
| ▲ | fkyoureadthedoc 3 days ago | parent | prev [-]

Who cares. Deanonymize it. Ruin the whole thing. Fuck social media, it sucks ass. The sooner you do it, the sooner we can move on to our local mesh network cyberpunk future.
| ▲ | citizenpaul 3 days ago | parent | next [-]

Yep, the only way out is through the bottom now. Let's do this. Contact your senator and think of the children.
| ▲ | 3 days ago | parent | prev [-] | | [deleted] |
|
|
|
| ▲ | arccy 3 days ago | parent | prev [-] |
Is it "privately" when you're interacting with someone else's systems?
| |
| ▲ | kouteiheika 3 days ago | parent [-] | | I don't see how that's relevant. When I'm making a phone call I'm also interacting with hundreds of systems that are not mine; do I not have the right to keep my conversation private? Even the blog post here says that "It is extremely important to us, and to society, that the right to privacy in the use of AI is protected. People talk to AI about increasingly personal things", and that's one of the few parts that I actually agree with. | | |
| ▲ | IncreasePosts 3 days ago | parent [-] | | You're interacting with hundreds of systems whose job it is to simply transit your information. Privacy there makes sense. However, you're also talking to someone on the other end of all those systems. Do you have a right to force the other person to keep your conversation private? | | |
| ▲ | kouteiheika 3 days ago | parent | next [-]

An AI chatbot is not a person, and you're not talking to anyone; you're querying a (fancy) automated system. I fundamentally believe those queries should be guaranteed private.

Here's a thought experiment: you're a gay person living in a country where being gay is illegal and carries the death penalty. You use ChatGPT in a way which makes your sexuality apparent; should OpenAI be allowed to share this query with anyone? Should they be allowed to store it? What if it inadvertently leaks (which has happened before!), or their database gets hacked and dumped, and now the morality police of your country are combing through it looking for criminals like you?

Privacy is a fundamental right of every human being; I will gladly die on this hill.
| ▲ | nine_k 3 days ago | parent [-]

If you are talking to a remote entity not controlled by you, you should assume that your communication is somehow accessible to whoever has internal access at that entity. That may well be not the entity's legitimate owners, but law-breakers or law enforcement. So, no, not private by default, but only by goodwill and coincidence.

There's a reason why e.g. banks want to have all critical systems on premises, under their physical control.
| ▲ | yndoendo 3 days ago | parent | next [-]

How would consuming static information from a book be any different from consuming it through a dynamic, book-esque system? You are using ML to quickly categorize and assimilate information that spans multiple books, magazines, or other written media. [0] [1]

Why do people speak of ML/AI as an entity when it is a tool like a microwave oven? It is a tool designed to give answers, even wrong ones when the question is nonsensical.

[0] https://www.ala.org/advocacy/advleg/federallegislation/theus...

[1] https://www.ala.org/advocacy/intfreedom/statementspols/other...
| ▲ | nine_k 3 days ago | parent [-]

The difference is simple: whether there's another party present while you're doing this. If yes, assume that the other party has access to the information that passed through it. A librarian would know which books you asked for. A reading assistant would know what you wanted read or summarized. Your microwave might have an idea of what you are going to eat, if you run the "sensor heating" program.

The consumption is "static" in your terms if you read a paper book alone, or if you access a publicly available web page without running any scripts or sending any cookies.
| ▲ | yndoendo 3 days ago | parent [-]

Sorry, there is always a 3rd party involved in a library. The librarians are the ones who select which books to have on hand for consumption; same with a book store or any other provider of books. If a person reads at a library without a check-out record, one must assume any book in the collection may have been consumed. Only a solid record of a book being checked out creates a defined moment, and even that is still anchored in confidentiality between the parties.

Unless that microwave sensor requires external communication, it is a closed system which does not communicate any information about what item was heated. The 3rd party would be the company the meal was purchased from. A well-designed _smart microwave_ would batch-process its updates, pulling in a collection of information to create the automated means to operate; you never know when there could be an Internet outage, or when the tool might be placed where external communication is not an option. A poorly designed system would require back-and-forth communication, yet even that would be no different than a chef knowing what you ordered, with limited information about you. Those systems have an inherent anonymity. It is the processing record that can be exploited, and a good organization would require a warrant or purge the information when it is no longer needed. Cash payment also improves the anonymity in that style of system, preventing leaks of personal information to anyone.

Why should the standard of a static book system like a library not be applied to any ML model, since they perform the same task and provide access to information in a collection? The system is poorly designed if confidentiality is not upheld by all parties. Sounds like ML corporations want to make you the product instead of giving you a product to use. This is why I only respect open-design models, built from the bottom up, that run locally.
|
| |
| ▲ | kouteiheika 3 days ago | parent | prev | next [-]

I am assuming that my communications are not private, but that doesn't change the fact that these companies should be held to a higher standard, and that those rights should be codified into law.
| ▲ | BriggyDwiggs42 3 days ago | parent | prev [-]

That's a rational and cautious assumption, but there should also be regulations, placed upon companies large enough to shoulder the burden, that render it less necessary.
| ▲ | nine_k 3 days ago | parent [-]

The bodies that are in a position to effect such regulations are also the bodies that are interested in looking at your (yes, your) private communication. And no, formally being a liberal democracy helps little; see the PATRIOT Act, Chat Control, etc.

The only secure position for a company (provided that the company is not interested in reading your communication) is that of a blind carrier that cannot decrypt what you say; e.g. Mullvad VPN demonstrated that this works. I don't think an LLM hosting company can use such an approach, so...
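A quick illustration of why the blind-carrier model works for a VPN but not for an LLM host; this sketch uses the "cryptography" package's Fernet API, and the prompt string is just a stand-in:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # stays on the user's device
    f = Fernet(key)

    ciphertext = f.encrypt(b"my private prompt")
    # A blind carrier can route `ciphertext` without ever learning its content.
    # But an LLM host must see the plaintext to compute a reply, so it would
    # need the key -- which is exactly what breaks the blind-carrier model.
    assert f.decrypt(ciphertext) == b"my private prompt"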
|
|
| |
| ▲ | gspencley 3 days ago | parent | prev | next [-]

> Do you have a right to force the other person to keep your conversation private?

It depends. If you're speaking to a doctor or a lawyer, yes: by law they are bound to keep your conversation strictly confidential except in some very narrow circumstances. But it goes beyond those two examples. If I have an NDA with the person I am speaking with on the other end of the line, yes, I have the "right" to "force" the other person to keep our conversation private, given that we have a contractual agreement to do so.

As far as OpenAI goes, I'm of the opinion that OpenAI, like most other businesses, has the right to set the terms by which it sells or offers services to the public. That means if they wanted a policy of "all chats are public", that would be within their right to impose as far as I'm concerned. It's their creation. Their business. I don't believe people are entitled to dictate terms to them, legal restrictions notwithstanding.

But insofar as they promise that chats are private, that promise becomes a contract at the time of transaction. If you give them money (consideration) under the impression that your chats with their LLM are private, because they communicated that, then they are contractually bound to honour those terms: the terms they subjected themselves to when advertising their services, or in the EULA and/or TOS presented at the time of transaction.
| ▲ | sophacles 3 days ago | parent | prev | next [-] | | In many circumstances yes. When I'm talking to my doctor, or lawyer, or bank. When there's a signed NDA. And so on. There are circumstances where the other person can be (and is) obliged to maintain privacy. One of those is interacting with an AI system where the terms of service guarantee privacy. | | |
| ▲ | IncreasePosts 3 days ago | parent [-]

Yes, but there are also times when other factors are more important than privacy. If you tell your doctor you're going to go home and kill your wife, they are ethically bound to report you to the police, despite doctor-patient confidentiality. Which is similar to what OpenAI says here about "imminent harm".
| |
| ▲ | citizenpaul 3 days ago | parent | prev [-] | | > Do you have a right to force the other person to keep your conversation private? In most of the USA that already is the law. |
|
|
|