| ▲ | Traster 4 hours ago |
| As I've said on earlier reports about this, it's difficult to draw statistical comparisons with humans because there's so little data. That said, it is clear this system just isn't ready, and it's kind of wild that a couple of those crashes would've been easily preventable with parking sensors that come as standard on almost every other car. In some spaces we still have the rule of law: when xAI started doing the deepfake nude thing, we kind of knew no one in the US would do anything, but jurisdictions like the EU would. And they are now. It's happening slowly, but it is happening. Here, though, I just don't know if there's any institution in the US that is going to look at this for what it is - an unsafe system not ready for the road - and take action. |
|
| ▲ | parl_match 4 hours ago | parent | next [-] |
| > the deepfake nude thing
|
| the issue is that these tools are widely accessible, and at the federal level, the legal liability is on the person who posts it, not on whoever hosts the tool. this was a mistake that will likely be corrected over the next six years. due to the current regulatory environment (trump admin), there is no political will to tackle new laws.
|
| > I just don't know if there's any institution in the US that is going to look at this for what it is - an unsafe system not ready for the road - and take action.
|
| unlike deepfakes, there are extensive road safety laws and civil liability precedent. texas may be pushing tesla forward (maybe partially for ideological reasons), but it will be an extremely hard sell to get any of the major US cities on board with this. so, no, i don't think you will see robotaxis on the roads in blue states (or even most red states) any time soon. |
| |
| ▲ | zardo 4 hours ago | parent | next [-] | | > legal liability is on the person who posts it, not who hosts the tool. In the specific case of grok posting deepfake nudes on X. Doesn't X both create and post the deepfake? My understanding was, Bob replies in Alice's thread, "@grok make a nude photo of Alice" then grok replies in the thread with the fake photo. | | |
| ▲ | Retric 3 hours ago | parent [-] | | That specific action is still instigated by Bob. Where grok is at risk is in not responding after they are notified of the issue. It's trivial for grok to ban some keywords here and they aren't; that's a legal issue. | | |
| ▲ | zardo 3 hours ago | parent | next [-] | | Sure Bob is instigating the harassment, then X.com is actually doing the harassment. Or at least, that's the case plaintiff's attorneys are surely going to be arguing. | | |
| ▲ | InvertedRhodium 2 hours ago | parent [-] | | I don't see how it's fundamentally any different to mailing someone harassing messages or distressing objects. Sure, in this context the person who mails the item is the one instigating the harassment but it's the postal network that's facilitating it and actually performing the "last mile" of harassment. | | |
| ▲ | Retric 2 hours ago | parent | next [-] | | The very first time it happened, X is likely off the hook. However, notification plays a role here: the post office will act if someone regularly uses the mail to harass you and you ask them to intervene. The issue, therefore, is if people complain and then X does absolutely nothing while having a plethora of reasonable options to stop this harassment. https://faq.usps.com/s/article/What-Options-Do-I-Have-Regard... You may file PS Form 1500 at a local Post Office to prevent receipt of unwanted obscene materials in the mail or to stop receipt of "obscene" materials in the mail. The Post Office offers two programs to help you protect yourself (and your eligible minor children). | |
| ▲ | zardo 2 hours ago | parent | prev [-] | | The difference is the post office isn't writing the letter. | | |
|
| |
| ▲ | ImPostingOnHN 43 minutes ago | parent | prev [-] | | if grok never existed and X instead ran a black-box-implementation "press button receive CP" webapp, X would be legally culpable and liable each time a user pressed the button, for production plus distribution. the same is true if the webapp has a blank "type what you want, I'll make it for you" field and the user types "CP" and the webapp makes it. |
|
| |
| ▲ | hamdingers 2 hours ago | parent | prev | next [-] | | > so, no, i don't think you will see robotaxis on the roads in blue states Truly baffled by this genre of comment. "I don't think you will see <thing that is already verifiably happening> any time soon" is a pattern I'm seeing way more lately. Is this just denying reality to shape perception or is there something else going on? Are the current driverless operations after your knowledge cutoff? | | |
| ▲ | an hour ago | parent | next [-] | | [deleted] | |
| ▲ | 39 minutes ago | parent | prev | next [-] | | [deleted] | |
| ▲ | parl_match an hour ago | parent | prev [-] | | robotaxi is the name of the tesla unsupervised driving program (as stated in the title of this hn post). if you live in a parallel reality where they're currently operating unsupervised in a blue state, or if texas finally flipped blue for you, let me know how it's going for you out there! for the rest of us aligned to a single reality, robotaxis are currently only operating as robotaxis (unsupervised) in texas (and even that's dubious, considering the chase-car sleight of hand). of course, if you want to continue to take a weaselly and uncharitable interpretation of my post because i wasn't completely "on brand", you are free to. in which case, i will let you have the last word, because i have no interest in engaging in such by-omission dishonesty. | | |
| ▲ | dragonwriter an hour ago | parent [-] | | > robotaxi is the name of the tesla unsupervised driving program “robotaxi” is a generic term for (when the term was coined, hypothetical) self-driving taxicabs, that predates Tesla existing. “Tesla Robotaxi” is the brand-name of a (slightly more than merely hypothetical, today) Tesla service (for which a trademark was denied by the US PTO because of genericness). Tesla Robotaxi, where it operates, provides robotaxis, but most robotaxis operating today are not provided by Tesla Robotaxi. | | |
| ▲ | parl_match an hour ago | parent [-] | | > Tesla 'Robotaxi' adds 5 more crashes in Austin in a month – 4x worse than humans hm yes i can see where the confusion lies |
|
|
| |
| ▲ | BoredPositron 4 hours ago | parent | prev | next [-] | | Just because someone tells you to produce child pornography you don't have to do it just because you are able to. Other model providers don't have the problem... | | |
| ▲ | parl_match 4 hours ago | parent [-] | | that is an ethical and business problem, not entirely a legal problem (currently). hopefully, it will universally be a legal problem in the near future, though.
and frankly, anyone paying for grok (regardless of how they use it) is contributing to the problem | |
| ▲ | philistine 2 hours ago | parent | next [-] | | It is not ethical to wait for legal solutions while in the meantime producing fake child pornography with your AI solution. Legal is not the same as moral: legal things can be immoral, and immoral things can be legal. We have a duty to live morally; the law is only words in books. | |
| ▲ | bluGill 2 hours ago | parent [-] | | I live morally. I assume you do too - the vast majority of people reading this comment will not ask AI to produce child porn. However, a small minority will, which is why we have laws and police. |
| |
| ▲ | Gigachad 2 hours ago | parent | prev | next [-] | | If you have to wait for the government to tell you to stop producing CP before you stop, you are morally bankrupt. | |
| ▲ | BoredPositron 3 hours ago | parent | prev [-] | | It's only an ethics and business problem if the produced images are purely synthetic, and in most jurisdictions even that is questionable. Grok produced child pornography of real children, which is a legal problem. |
|
| |
| ▲ | TZubiri 4 hours ago | parent | prev [-] | | >and at the federal level, the legal liability is on the person who posts it, not who hosts the tool. this was a mistake that will likely be corrected over the next six years [citation needed] Historically hosts have always absolutely been responsible for the materials they host, see DMCA law, CSAM case law... | | |
| ▲ | parl_match 4 hours ago | parent [-] | | no offense but you completely misinterpreted what i wrote. i didn't say who hosts the materials, i said who hosts the tool. i didn't mention anything about the platform, which is a very relevant but separate party. if you think i said otherwise, please quote me, thank you. > Historically hosts have always absolutely been responsible for the materials they host [citation needed] :) go read up on section 230. for example, with dmca, liability arises only if the host acts in bad faith, generates the infringing content itself, or fails to act on a takedown notice. that is quite some distance from "always absolutely". in fact, it's the whole point of 230 | |
| ▲ | bluGill an hour ago | parent [-] | | pedantically correct, but there is a good argument that if you host an AI tool that can easily be made to produce child porn, that no longer applies. a couple of years ago, when AI was new, you could argue you never thought anyone would use your tool to create child porn. Today, however, it is clear some people are doing exactly that, and you need to prevent it. Note that I'm not asking for perfection. However, if someone does manage to create child porn (or any of a number of currently unspecified things - the list is likely to grow over the next few years), you need to show that you had a lot of protections in place and that they did something hard to bypass them. |
|
|
|
|
| ▲ | moralestapia 4 hours ago | parent | prev | next [-] |
| > it's difficult to draw statistical comparisons [...] because there's so little data

That ain't true [1].

[1]: https://en.wikipedia.org/wiki/Fisher%27s_exact_test |
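Fisher's exact test is built for exactly this small-sample situation. Here is a minimal pure-Python sketch of the one-sided test; the crash counts below are hypothetical illustrations, not the article's figures:

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher's exact test on the 2x2 table [[a, b], [c, d]].

    Returns P(first row shows >= a "events" by chance alone), i.e. the
    upper tail of the hypergeometric distribution with fixed margins.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    return sum(comb(row1, k) * comb(n - row1, col1 - k)
               for k in range(a, min(row1, col1) + 1)) / denom

# Hypothetical counts: 5 robotaxi crashes in 1,000 trips vs.
# 10 human crashes in 8,000 trips. Tiny numbers, yet the test
# still produces an exact p-value with no large-sample assumptions.
p = fisher_exact_one_sided(5, 995, 10, 7990)
print(f"one-sided p = {p:.4f}")
```

The exact p-value is what makes this usable where chi-squared approximations break down on small counts.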
|
| ▲ | SilverElfin 4 hours ago | parent | prev | next [-] |
| > it's kind of wild that a couple of those crashes would've been easily preventable with parking sensors that come equipped as standard on almost every other car Teslas are really cheaply made, inadequate cars by modern standards. The interiors are terrible and are barebones even compared to mainstream cars like a Toyota Corolla. And they lack parking sensors depending on the version you bought. I believe current models don’t come with a surround view camera either, which is almost standard on all cars at this point, and very useful in practice. I guess I am not surprised the Robotaxis are also barebones. |
|
| ▲ | dsf2d 4 hours ago | parent | prev [-] |
| It's not ever going to be ready. Getting this to a point where it is consistently better than humans is not equivalent to fixing bugs in production software for phones etc. When you are dealing with a dynamic, uncontained environment, it is much more difficult. |
| |
| ▲ | SpicyLemonZest 4 hours ago | parent [-] | | Waymo is at a point where it's consistently better than humans. If Tesla is not, that's on them: either their engineers are not as good, or they're forced to follow Elon's camera-only mandate. | |
| ▲ | bluGill an hour ago | parent | next [-] | | citation needed. Waymo says they are better, but it is really hard to find someone without a conflict of interest whom we can trust to have, and understand, the data. | |
| ▲ | SpicyLemonZest an hour ago | parent [-] | | I reject the premise of your comment. If Tesla wants to convince people that Robotaxi is safe, it's on them to publish an analysis with comparative data and stop redacting the crash details that Waymo freely provides. Until they do, it's reasonable to follow the source article's simple math and unreasonable to declare that there's no way to be sure because there might be some unknown factor it's not accounting for. |
| |
| ▲ | moralestapia 4 hours ago | parent | prev | next [-] | | It's the camera-only mandate, and it's not Elon's but Karpathy's. Any engineering student can understand why LIDAR+Radar+RGB is better than a single camera alone; and any person moderately aware of tech can realize that digital cameras are nowhere near as good as the human eye. But yeah, he's a genius or something. | |
| ▲ | epistasis 3 hours ago | parent | next [-] | | I have enjoyed Karpathy's educational materials over the years, but somehow missed that he was involved with Tesla to this degree. This was a very insightful comment from 9 years ago on the topic: > What this really reflects is that Tesla has painted itself into a corner. They've shipped vehicles with a weak sensor suite that's claimed to be sufficient to support self-driving, leaving the software for later. Tesla, unlike everybody else who's serious, doesn't have a LIDAR. > Now, it's "later", their software demos are about where Google was in 2010, and Tesla has a big problem. This is a really hard problem to do with cameras alone. Deep learning is useful, but it's not magic, and it's not strong AI. No wonder their head of automatic driving quit. Karpathy may bail in a few months, once he realizes he's joined a death march. > ... https://news.ycombinator.com/item?id=14600924 Karpathy left in 2022. Turns out that the commenter, Animats, is John Nagle! | |
| ▲ | cameldrv 4 hours ago | parent | prev | next [-] | | Digital cameras are much worse than the human eye, especially when it comes to dynamic range, but I don't think that's all that widely known actually. There are also better and worse digital cameras, and the ones on a Waymo are very good, and the ones on a Tesla aren't that great, and that makes a huge difference. Beyond even the cameras themselves, humans can move their head around, use sun visors, put on sunglasses, etc to deal with driving into the sun, but AVs don't have these capabilities yet. | | |
| ▲ | CydeWeys 4 hours ago | parent | next [-] | | > especially when it comes to dynamic range You can solve this by having multiple cameras for each vantage point, with different sensors and lenses that are optimized for different light levels. Tesla isn't doing this mind you, but with the use of multiple cameras, it should be easy enough to exceed the dynamic range of the human eye so long as you are auto-selecting whichever camera is getting you the correct exposure at any given point. | |
| ▲ | tzs 2 hours ago | parent | prev | next [-] | | Tesla claims that their cameras use "photon counting" and that this lets them see well in the dark, in fog, in heavy rain, and when facing bright lights like the sun. Photon counting is a real thing [1] but that's not what Tesla claims to be doing. I cannot tell if what they are doing is something actually effective that they should have called something other than "photon counting" or just the usual Musk exaggerations. Anyone here familiar with the relevant fields who can say which it is? Here's what they claim, as summarized by whatever it is Google uses for their "AI Overview". > Tesla photon counting is an advanced, raw-data approach to camera imaging for Autopilot and Full Self-Driving (FSD), where sensors detect and count individual light particles (photons) rather than processing aggregate image intensity. By removing traditional image processing filters and directly passing raw pixel data to neural networks, Tesla improves dynamic range, enabling better vision in low light and high-contrast scenarios. It says these are the key aspects: > Direct Data Processing: Instead of relying on image signal processors (ISPs) to create a human-friendly picture, Tesla feeds raw sensor data directly into the neural network, allowing the system to detect subtle light variations and near-IR (infrared) light. > Improved Dynamic Range: This approach allows the system to see in the dark exceptionally well by not losing information to standard image compression or exposure adjustments. > Increased Sensitivity: By operating at the single-photon level, the system achieves a higher signal-to-noise ratio, effectively "seeing in the dark". > Elimination of Exposure Limitations: The technique helps mitigate issues like sun glare, allowing for better visibility in extreme lighting conditions. 
> Neural Network Training: The raw, unfiltered data is used to train Tesla's neural networks, allowing for more robust, high-fidelity perception in complex, real-world driving environments. [1] https://en.wikipedia.org/wiki/Photon_counting | | |
| ▲ | iknowstuff 2 hours ago | parent [-] | | all the sensor has to do is keep count of how many times a pixel got hit by a photon in the span of e.g. 1/24th of a second (long exposure) and 1/10000th of a second (short exposure). Those two values per pixel yield an incredible dynamic range and can be fed straight into the neural net. |
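The dual-exposure scheme described above can be written out in a few lines. The exposure times follow the comment's example; the saturation level is an illustrative assumption, not a Tesla spec:

```python
LONG_T = 1 / 24      # long exposure window (s), per the comment above
SHORT_T = 1 / 10000  # short exposure window (s)
FULL_WELL = 4095     # assumed saturation count for a 12-bit pixel

def radiance(long_count: int, short_count: int) -> float:
    """Estimate photons/second at a pixel from the two counts:
    trust the long exposure unless it saturated, in which case fall
    back to the short one (which only clips under far brighter light)."""
    if long_count < FULL_WELL:
        return long_count / LONG_T
    return short_count / SHORT_T

print(radiance(50, 0))       # dim pixel: ~50 * 24 photons/s
print(radiance(4095, 120))   # long exposure saturated: ~120 * 10000 photons/s
```

Two counts per pixel thus cover everything from near-darkness up to the short exposure's clipping point, which is the dynamic-range argument being made.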
| |
| ▲ | iknowstuff 2 hours ago | parent | prev [-] | | https://www.sony-semicon.com/files/62/pdf/p-15_IMX490.pdf The IMX490 has a dynamic range of 140dB when spitting out actual images. The neural net could easily be trained on multi-exposure data to handle both extremely low and extremely high light. They are not trying to create SDR images. Please let's stop with the dynamic range bullshit. Point your phone at the sun next time you're blinded in your car. Or use night mode. Both see better than you. |
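For scale, that 140 dB figure converts to a linear ratio, using the usual image-sensor convention that dynamic range in dB is 20·log10 of the brightest-to-darkest intensity ratio:

```python
dr_db = 140                    # IMX490's quoted dynamic range
ratio = 10 ** (dr_db / 20)     # linear brightest : darkest intensity ratio
print(f"{dr_db} dB = {ratio:,.0f} : 1")  # ten million to one
```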
| |
| ▲ | xiphias2 4 hours ago | parent | prev [-] | | Using only cameras is a business decision, not a tech decision: will camera + NN be good enough before LIDAR+Radar+RGB+NN can scale up? To me it looks like they will reach parity at about the same time, so camera-only is not totally stupid. What's stupid is forcing robotaxis onto the road before the technology is ready. | |
| ▲ | wstrange 4 hours ago | parent | next [-] | | Clearly they have not reached parity, as evidenced by the crash rate of Tesla. It's far from clear that the current HW4 + sensor suite will ever be sufficient for L4. | |
| ▲ | moralestapia 4 hours ago | parent | prev [-] | | >reach parity at about the same time Nah, Waymo is much safer than Tesla today, while Tesla has way-mo* data to train on and much more compute capacity in their hands. They're in a dead end. Camera-only was a massive mistake. They'll never admit to that because there's now millions of cars out there that will be perceived as defective if they do. This is the decision that will sink Tesla to the ground, you'll see. But hail Karpathy, yeah. * Sorry, I couldn't resist. | | |
| ▲ | algo_trader 2 hours ago | parent [-] | | Was Karpathy "fired" from Tesla because he could not make camera only work ? Or did he "resign" since Elon insists on camera-only and Karpathy says i cant do it? |
|
|
| |
| ▲ | xiphias2 4 hours ago | parent | prev [-] | | It's clear that camera-only driving is getting better as image-understanding models improve every year. So there will be a point when camera-based systems without lidars become better than human drivers. The technology is just not there yet, and Elon is impatient. | |
| ▲ | MBCook 3 hours ago | parent | next [-] | | Then stop deploying camera only systems until that time comes. Waymo could be working on camera only. I don’t know. But it’s not controlling the car. And until such a time they can prove with their data that it is just as safe, that seems like a very smart decision. Tesla is not taking such a cautious approach. And they’re doing it on public roads. That’s the problem. | |
| ▲ | sschueller 3 hours ago | parent | prev | next [-] | | Lidar and radar will also get better and having all possible sensors will always out perform camera only. | |
| ▲ | fwip 4 hours ago | parent | prev [-] | | > So there will be a point when camera based systems without lidars will get better than human drivers. No reason to assume that. A toddler that is increasing in walk speed every month will never be able to outrun a cheetah. | | |
| ▲ | shoo 3 hours ago | parent [-] | | in contrast, a toddler equipped with an ion thruster & a modest quantity of xenon propellant could achieve enough delta-v to attain cheetah-escape velocity, provided the initial trajectory during the first 31 hours of the mission was through a low-cheetah-density environment | |
| ▲ | tzs an hour ago | parent [-] | | That initial trajectory also needs to go through a low air density environment. At normal air density near the surface of the Earth that ion thruster could only get a toddler up to ~10 km/h before the drag force from the air equals the thrust from the ion thruster. The only way that ion thruster might save the toddler is if it was used to blast the cheetah in the face. It would take a pretty long time to actually cause enough damage to force the cheetah to stop, but it might be annoying enough and/or unusual enough to get it to decide to leave. | | |
| ▲ | shoo an hour ago | parent [-] | | > low air density environment. At normal air density near the surface of the Earth that ion thruster could only get a toddler up to ~10 km/h agreed. this also provides an explanation for the otherwise surprising fact that prey animals in the savannah have never been observed to naturally evolve ion thrusters. |
|
|
|
|
|
|