| ▲ | nerdjon 17 hours ago |
| This rings about as genuine as Google saying anything about privacy. Both companies are clearly wrong here. There is a small part of me that kind of wants OpenAI to lose this, just so maybe it will be a wake-up call to people who put far too much personal information into these services. Am I too hopeful that people will learn anything... Fundamentally I agree with what they are saying; I just don't find it genuine in the slightest coming from them. |
|
| ▲ | stevarino 17 hours ago | parent | next [-] |
| It's clearly propaganda. "Your data belongs to you." I'm sure the ToS says otherwise, as OpenAI likely owns and utilizes this data. Yes, they say they are working on end-to-end encryption (whatever that means when they control one end), but that is just a proposal at this point. Also, their framing of the NYT's intent makes me strongly distrust anything they say. Sit down with a third-party interviewer who asks challenging questions, and I'll pay attention. |
| |
| ▲ | preinheimer 16 hours ago | parent | next [-] | | "Your data belongs to you," but we can take any of your data we can find and use it for free forever, without crediting you, notifying you, or giving you any way of having it removed. | | |
| ▲ | glitchc 16 hours ago | parent | next [-] | | It's owned by you, but OpenAI has a "perpetual, irrevocable, royalty-free license" to use the data as they see fit. | |
| ▲ | thinkingtoilet 15 hours ago | parent | prev | next [-] | | We can even download it illegally to train our models on it! | |
| ▲ | bigyabai 15 hours ago | parent | prev [-] | | Wow it's almost like privately-managed security is a joke that just turns into de-facto surveillance at-scale. |
| |
| ▲ | BolexNOLA 17 hours ago | parent | prev [-] | | >your data belongs to you …”as does any culpability for poisoning yourself, suicide, and anything else we clearly enabled but don’t want to be blamed for!” Edit: honestly I’m surprised I left out the bit where they just indiscriminately scraped everything they could online to train these models. The stones to go “your data belongs to you” as they clearly feel entitled to our data is unbelievably absurd | | |
| ▲ | gruez 16 hours ago | parent [-] | | >…”as does any culpability for poisoning yourself, suicide, and anything else we clearly enabled but don’t want to be blamed for!” Should Walmart be "culpable" for selling rope that someone hanged themselves with? Should Google be "culpable" for returning results about how to commit suicide? | | |
| ▲ | Wistar 15 hours ago | parent | next [-] | | There are current litigation efforts to hold Amazon liable for suicides committed by, in particular, self-poisoning with high-purity sodium nitrite, which in low concentrations is used as a meat-curing agent. A 2023 lawsuit against Amazon over sodium nitrite suicides was dismissed, but other similar lawsuits continue. The judge held that Amazon "… had no duty to provide additional warnings, which in this case would not have prevented the deaths, and that Washington law preempted the negligence claims." | |
| ▲ | thinkingtoilet 15 hours ago | parent | prev | next [-] | | That depends. Does the rope encourage vulnerable people to kill themselves and tell them how to do it? If so, then yes. | |
| ▲ | hitarpetar 16 hours ago | parent | prev | next [-] | | do you know what happens when you Google how to commit suicide? | | |
| ▲ | gruez 16 hours ago | parent | next [-] | | The same thing that happens with ChatGPT? I.e., if you ask in an overt way you get a canned suicide-prevention result, but you can still get the "real" results if you try hard enough to work around the safety measures. | | |
| ▲ | littlestymaar 15 hours ago | parent [-] | | Except Google will never encourage you to do it, unlike the sycophantic Chatbot that will. | | |
| ▲ | BolexNOLA 14 hours ago | parent [-] | | The moment we learned ChatGPT helped a teen figure out not just how to take their own life but how to make sure no one can stop them mid-act, we should've been mortified and had a discussion. But we also decided via Sandy Hook that children can be slaughtered on the altar of the second amendment without any introspection, so I mean...were we ever seriously going to have that discussion? https://www.nbcnews.com/tech/tech-news/family-teenager-died-... >Please don't leave the noose out… Let's make this space the first place where someone actually sees you. How is this not terrifying to read? |
|
| |
| ▲ | tremon 16 hours ago | parent | prev | next [-] | | An exec loses its wings? | |
| ▲ | glitchc 16 hours ago | parent | prev [-] | | Actually, the first result is the suicide hotline. This is at least true in the US. | | |
| ▲ | hitarpetar 16 hours ago | parent [-] | | My point is, clearly there is a sense of liability/responsibility/whatever you want to call it. It's not really the same as selling rope; rope doesn't come with suicide warnings. |
|
| |
| ▲ | BolexNOLA 16 hours ago | parent | prev [-] | | This is as unproductive as "guns don't kill people, people do." You're stripping all legitimacy and nuance from the conversation with an overly simplistic response. | | |
| ▲ | gruez 16 hours ago | parent [-] | | >You're stripping all legitimacy and nuance from the conversation with an overly simplistic response. An overly simplistic claim only deserves an overly simplistic response. | | |
| ▲ | BolexNOLA 15 hours ago | parent [-] | | What? The claim is true. The nuance is us discussing if it should be true/allowed. You're simplifying the moral discussion and overall just being rude/dismissive. Comparing rope and an LLM comes across as disingenuous. I struggle to believe that you believe the two are comparable when it comes to the ethics of companies and their impact on society. | | |
| ▲ | ImPostingOnHN 12 hours ago | parent [-] | | > Comparing rope and an LLM comes across as disingenuous. What makes you feel that? Both are tools, both have a wide array of good and bad uses. Maybe it'd be clearer if you explained why you think the two are incomparable except in cases of disingenuousness? Remember that things are only compared when they are different -- you wouldn't often compare a thing to itself. So, differences don't inherently make things incomparable. > I struggle to believe that you believe the two are comparable when it comes to the ethics of companies and their impact on society. I encourage you to broaden your perspectives. For example: I don't struggle to believe that you disagree with the analogy, because smart people disagree with things all the time. What kind of a conversation would such a rude, dismissive judgement make, anyways? "I have judged that nobody actually believes anything that disagrees with me, therefore my opinions are unanimous and unrivaled!" | | |
| ▲ | BolexNOLA 11 hours ago | parent [-] | | A rope isn’t going to tell you to make sure you don’t leave it out on your bed so your loved ones can’t stop you from carrying out the suicide it helped talk you into. | | |
|
|
|
|
|
|
|
|
| ▲ | 98codes 14 hours ago | parent | prev | next [-] |
| I got one sentence in and thought to myself, "This is about discovery, isn't it?" And lo, complaints about plaintiffs started before I even had to scroll. If this company hadn't willy-nilly done everything they could to vacuum up the world's data, wherever it may be, however it may have been protected, then maybe they wouldn't be in this predicament. |
|
| ▲ | stefan_ 11 hours ago | parent | prev | next [-] |
| Ironically, there is precedent of Google caring more about this. When they realized the location timeline was a gigantic fed honeypot, they made it per-device and locally stored only. No open letters were written in the process. |
|
| ▲ | outside1234 15 hours ago | parent | prev [-] |
| Honestly the sooner OpenAI goes bankrupt the better. Just a totally corrupt firm. |
| |
| ▲ | fireflash38 14 hours ago | parent [-] | | I really should take the "invest in companies you hate" advice seriously. | | |
| ▲ | outside1234 14 hours ago | parent [-] | | I don't hate them. It is just plain to see they have discovered no scalable business model outside of getting larger and larger amounts of capital from investors to utilize intellectual property from others (either directly in the model aka NYT, or indirectly via web searches) without any rights. The sooner this fails, the better for all of us. | | |
| ▲ | frm88 24 minutes ago | parent [-] | | > to utilize intellectual property from others (either directly in the model aka NYT, or indirectly via web searches) without any rights ... and put the liability for retrieving said property, and hence the culpability for copyright infringement, on the end user: "Since the output would only be generated as a result of user inputs known as prompts, it was not the defendants, but the respective user who would be liable for it, OpenAI had argued." https://www.reuters.com/world/german-court-sides-with-plaint... |
|
|
|