parl_match (4 hours ago):

> the deepfake nude thing

The issue is that these tools are widely accessible, and at the federal level the legal liability is on the person who posts it, not on whoever hosts the tool. This was a mistake that will likely be corrected over the next six years; under the current regulatory environment (Trump admin), there is no political will to tackle new laws.

> I just don't know if there's any institution in the US that is going to look at this for what it is - an unsafe system not ready for the road - and take action.

Unlike deepfakes, there are extensive road-safety laws and civil-liability precedents. Texas may be pushing Tesla forward (maybe partly for ideological reasons), but it will be an extremely hard sell to get any of the major US cities on board with this. So, no, I don't think you will see robotaxis on the roads in blue states (or even most red states) any time soon.
---
zardo (4 hours ago):

> legal liability is on the person who posts it, not who hosts the tool.

In the specific case of Grok posting deepfake nudes on X, doesn't X both create and post the deepfake? My understanding was: Bob replies in Alice's thread, "@grok make a nude photo of Alice", and then Grok replies in the thread with the fake photo.
---
Retric (3 hours ago):

That specific action is still instigated by Bob. Where Grok is at risk is in not responding after they are notified of the issue. It's trivial for Grok to ban some keywords here, and they aren't doing it; that's a legal issue.
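To illustrate how low the bar is, here is a minimal sketch of the kind of keyword deny-list being described, in Python. The pattern list, function name, and example prompts are hypothetical illustrations, not Grok's actual moderation code; a production system would also need classifiers on the generated images, since keyword matching alone is easy to evade.

    import re

    # Hypothetical deny-list for image-generation prompts; illustrative only.
    # A real system would be far broader and paired with image classifiers.
    BANNED_PATTERNS = [
        r"\bnude\b",
        r"\bnaked\b",
        r"\bundress\w*\b",
    ]

    def blocks_request(prompt: str) -> bool:
        """Return True if a prompt trips the deny-list."""
        lowered = prompt.lower()
        return any(re.search(p, lowered) for p in BANNED_PATTERNS)

    # The request pattern described upthread would be caught.
    assert blocks_request("@grok make a nude photo of Alice")
    assert not blocks_request("@grok make a photo of a mountain lake")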
---
zardo (3 hours ago):

Sure, Bob is instigating the harassment, but X.com is actually doing the harassment. Or at least, that's the case plaintiffs' attorneys are surely going to be arguing.
---
InvertedRhodium (2 hours ago):

I don't see how it's fundamentally any different from mailing someone harassing messages or distressing objects. Sure, in this context the person who mails the item is the one instigating the harassment, but it's the postal network that's facilitating it and actually performing the "last mile" of the harassment.
---
Retric (2 hours ago):

The very first time it happened, X is likely off the hook. However, notification plays a role here: there are things the post office will do if someone tries to use the mail for this regularly and you ask it to act. The issue, therefore, is if people complain and X then does absolutely nothing while having a plethora of reasonable options to stop the harassment.

https://faq.usps.com/s/article/What-Options-Do-I-Have-Regard...

> You may file PS Form 1500 at a local Post Office to prevent receipt of unwanted obscene materials in the mail or to stop receipt of "obscene" materials in the mail. The Post Office offers two programs to help you protect yourself (and your eligible minor children).
---
zardo (2 hours ago):

The difference is that the post office isn't writing the letter.
---
ImPostingOnHN (41 minutes ago):

If Grok never existed and X instead ran a black-box "press button, receive CP" web app, X would be legally culpable and liable for production plus distribution each time a user pressed the button.

The same is true if the web app has a blank "type what you want, I'll make it for you" field, the user types "CP", and the web app makes it.
---
hamdingers (2 hours ago):

> so, no, i don't think you will see robotaxis on the roads in blue states

Truly baffled by this genre of comment. "I don't think you will see <thing that is already verifiably happening> any time soon" is a pattern I'm seeing way more lately. Is this just denying reality to shape perception, or is there something else going on? Are the current driverless operations after your knowledge cutoff?
---
parl_match (an hour ago):

Robotaxi is the name of the Tesla unsupervised driving program (as stated in the title of this HN post), and if you live in a parallel reality where they're currently operating unsupervised in a blue state, or if Texas finally flipped blue for you, let me know how it's going for you out there! For the rest of us aligned to a single reality, Robotaxis are currently only operating as robotaxis (unsupervised) in Texas (and even that's dubious, considering the chase-car sleight of hand).

Of course, if you want to continue to take a weaselly and uncharitable interpretation of my post because I wasn't completely "on brand", you are free to. In which case, I will let you have the last word, because I have no interest in engaging in such by-omission dishonesty.
---
dragonwriter (an hour ago):

> robotaxi is the name of the tesla unsupervised driving program

"Robotaxi" is a generic term for (at the time the term was coined, hypothetical) self-driving taxicabs that predates Tesla's existence. "Tesla Robotaxi" is the brand name of a (today, slightly more than merely hypothetical) Tesla service, for which a trademark was denied by the USPTO on grounds of genericness. Tesla Robotaxi, where it operates, provides robotaxis, but most robotaxis operating today are not provided by Tesla Robotaxi.
---
parl_match (an hour ago):

> Tesla 'Robotaxi' adds 5 more crashes in Austin in a month – 4x worse than humans

Hm, yes, I can see where the confusion lies.
---
BoredPositron (4 hours ago):

Just because someone tells you to produce child pornography, and you are able to, doesn't mean you have to do it. Other model providers don't have this problem...
---
parl_match (4 hours ago):

That is an ethical and business problem, not entirely a legal problem (currently). Hopefully it will universally be a legal problem in the near future, though.

And frankly, anyone paying for Grok (regardless of their use of it) is contributing to the problem.
---
philistine (2 hours ago):

It is not ethical to wait for legal solutions while in the meantime producing fake child pornography with your AI solution. Legal things can be amoral, and amoral things can be legal. We have a duty to live morally; the law is only words in books.
---
bluGill (2 hours ago):

I live morally. I assume you do too - the vast, vast majority of people reading this comment will not ask an AI to produce child porn. However, a small minority will, which is why we have laws and police.
---
Gigachad (2 hours ago):

If you have to wait for the government to tell you to stop producing CP before you stop, you are morally bankrupt.
---
BoredPositron (3 hours ago):

It's only an ethics and business problem if the produced images are purely synthetic, and in most jurisdictions even that is questionable. Grok produced child pornography of real children, which is a legal problem.
---
TZubiri (4 hours ago):

> and at the federal level, the legal liability is on the person who posts it, not who hosts the tool. this was a mistake that will likely be corrected over the next six years

[citation needed] Historically, hosts have always absolutely been responsible for the materials they host; see DMCA law, CSAM case law...
---
parl_match (4 hours ago):

No offense, but you completely misinterpreted what I wrote. I didn't say who hosts the materials; I said who hosts the tool. I didn't mention anything about the platform, which is a very relevant but separate party. If you think I said otherwise, please quote me, thank you.

> Historically hosts have always absolutely been responsible for the materials they host

[citation needed] :) Go read up on Section 230. For example, with the DMCA, liability arises if the host acts in bad faith, generates the infringing content itself, or fails to act on a takedown notice. That is quite some distance from "always absolutely"; in fact, it's the whole point of Section 230.
---
bluGill (an hour ago):

Pedantically correct, but there is a good argument that if you host an AI tool that can easily be made to produce child porn, that no longer applies. A couple of years ago, when AI was new, you could argue that you never thought anyone would use your tool to create child porn. Today, however, it is clear some people are doing that, and you need to prevent it.

Note that I'm not asking for perfection. However, if someone does manage to create child porn (or any of a number of currently unspecified things - the list is likely to grow over the next few years), you need to show that you had a lot of protections in place and that they did something hard to bypass them.