| ▲ | keeda 5 days ago |
| I strongly believe LIDAR is the way to go and that Elon's vision-only move was extremely "short-sighted" (heheh). There are many reasons, but the one that drives it home for me multiple times a week is that my Tesla's wipers will randomly sweep the windshield for absolutely no reason. This is because the vision system thinks there is something obstructing its view when in reality it is usually bright sunlight -- and sometimes, absolutely nothing that I can see. The wipers are, of course, the most harmless way this goes wrong. The more dangerous type is when it phantom-brakes at highway speeds with no warning on a clear road and a clear day. I've had multiple other scary incidents of different types (swerving back and forth at exits is a fun one), but phantom braking is the one that happens quasi-regularly. Twice when another car was right behind me. As an engineer, this tells me volumes about what's going on in the computer vision system, and it's pretty scary. Basically, the system detects patterns that are inferred as its vision being obstructed, and so it is programmed to brush away some (non-existent) debris. Like, it thinks there could be a physical object where there is none. If this was an LLM you would call it a hallucination. But if it's hallucinating crud on a windshield, it can also hallucinate objects on the road. And it could be doing it every so often! So maybe there are filters to disregard unlikely objects as irrelevant, which act as guardrails against random braking. And those filters are pretty damn good -- I mean, the technology is impressive -- but they can probabilistically fail, resulting in things that we've already seen, such as phantom-braking, or worse, driving through actual things. This raises so many questions: What other things is it hallucinating? And how many hardcoded guardrails are in place against these edge cases? And what else can it hallucinate against which there are no guardrails yet?
And why not just use LIDAR that can literally see around corners in 3D? |
|
| ▲ | jqpabc123 5 days ago | parent | next [-] |
| Engineering reliability is primarily achieved through redundancy. There is none with Musk's "vision only" approach. Vision can fail for a multitude of reasons --- sunlight, rain, darkness, bad road markers, even glare from a dirty windshield. And when it fails, there is no backup plan -- the car is effectively driving blind. Driving is a dynamic activity that involves a lot more than just vision. Safe automated driving can use all the help it can get. |
| |
| ▲ | Someone1234 5 days ago | parent | next [-] | | I agree with everything you're saying; but even outside of Tesla, I'd just like to remind people that LIDAR as a complement to vision isn't at all straightforward. Sensor fusion adds real complexity in calibration, time sync, and modeling. Both LIDAR and vision have edge cases where they fail. So you ideally want both, but then the challenge is reconciling disagreements with calibrated, probabilistic fusion. People seem to be under the mistaken impression that vision is dirty input and LIDAR is somehow clean, when in reality both are noisy inputs with different strengths and weaknesses. I guess my point is: Yes, 100% bring in LIDAR, I believe the future is LIDAR + vision. But when you do that, early iterations can regress significantly from vision-only until the fusion is tuned and calibration is tight, because you have to resolve contradictory data. Ultimately the payoff is higher robustness in exchange for more R&D and development workload (i.e. more cost). The same reason why Tesla needed vision-only to work (cost & timeline) is the same reason why vision+LIDAR is so challenging. | | |
| ▲ | ethbr1 5 days ago | parent | next [-] | | The primary benefit of multiple sensor fusion from a safety standpoint isn't an absolute decrease in errors. It's the ability to detect sensor disagreements at all. With single modality sensors, you have no way of truly detecting failures in that modality, other than hacks like time-series normalizing (aka expected scenarios). If multiple sensor modalities disagree, even without sensor fusion, you can at least assume something might be awry and drop into a maximum safety operation mode. But you'd think that the budget config of the Boeing 737 MAX would have taught us that tying safety critical systems to single sources of truth is a bad idea... (in that case, critical modality / single physical sensor) | | |
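The disagreement-detection point can be sketched in a few lines. This is a hypothetical toy policy (the function and its inputs are invented for illustration, not how any real AV stack works): even with no fusion at all, a cross-modality consistency check gives the system a third, degraded-but-safe outcome that a single-modality system simply cannot express.

```python
# Toy sketch (all names hypothetical): two independent modalities,
# no fusion, just a consistency check that unlocks a fail-safe mode.
def plan(camera_sees_obstacle: bool, lidar_sees_obstacle: bool) -> str:
    if camera_sees_obstacle != lidar_sees_obstacle:
        # Modalities disagree: we don't know which is right,
        # but we DO know something is awry.
        return "SAFE_MODE"
    if camera_sees_obstacle:
        return "BRAKE"
    return "PROCEED"

print(plan(True, True))    # both agree there's an obstacle
print(plan(True, False))   # disagreement detected: degrade safely
print(plan(False, False))  # both agree the path is clear
```

With a single sensor, the middle case is invisible: the system either brakes for a phantom or drives through something real.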
| ▲ | AnIrishDuck 5 days ago | parent | next [-] | | > With single modality sensors, you have no way of truly detecting failures in that modality, other than hacks like time-series normalizing (aka expected scenarios). "A man with a watch always knows what time it is. If he gains another, he is never sure" Most safety critical systems actually need at least three redundant sensors. Two is kinda useless: if they disagree, which is right? EDIT: > If multiple sensor modalities disagree, even without sensor fusion, you can at least assume something might be awry and drop into a maximum safety operation mode. This is not always possible. You're on a two lane road. Your vision system tells you there's a pedestrian in your lane. Your LIDAR says the pedestrian is actually in the other lane. There's enough time for a lane change, but not to stop. What do you do? | | |
| ▲ | esafak 5 days ago | parent | next [-] | | > Two is kinda useless: if they disagree, which is right? They don't work by merely taking a straw poll. They effectively build the joint probability distribution, which improves accuracy with any number of sensors, including two. > You're on a two lane road. Your vision system tells you there's a pedestrian in your lane. Your LIDAR says the pedestrian is actually in the other lane. There's enough time for a lane change, but not to stop. Any realistic system would see them long before your eyes do. If you are so worried, override the AI in the moment. | | |
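A minimal sketch of what "building the joint probability distribution" means for two sensors, assuming independent Gaussian noise (the sensor assignments and all numbers are illustrative): the fused estimate is the precision-weighted average, and its variance is strictly smaller than either input's, which is why two sensors beat one even without a tie-breaker.

```python
# Precision-weighted fusion of two noisy estimates of the same quantity,
# the closed-form answer for independent Gaussian measurements.
def fuse(m1: float, v1: float, m2: float, v2: float):
    w1, w2 = 1.0 / v1, 1.0 / v2        # precision = inverse variance
    mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    var = 1.0 / (w1 + w2)              # always < min(v1, v2)
    return mean, var

# Hypothetical: camera says 10.0 m away (variance 4.0),
# LIDAR says 9.0 m (variance 0.25).
mean, var = fuse(10.0, 4.0, 9.0, 0.25)
# Fused estimate leans toward the tighter sensor (mean ~9.06)
# and is more certain than either input (var ~0.235).
print(mean, var)
```

The same formula extends to any number of sensors; nothing about it requires an odd count or a vote.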
| ▲ | AnIrishDuck 5 days ago | parent | next [-] | | > They don't work by merely taking a straw poll. They effectively build the joint probability distribution, which improves accuracy with any number of sensors, including two. Lots of safety critical systems actually do operate by "voting". The space shuttle control computers are one famous example [1], but there are plenty of others in aerospace. I have personally worked on a few such systems. It's the simplest thing that can obviously work. Simplicity is a virtue when safety is involved. You can of course do sensor fusion and other more complicated things, but the core problem I outlined remains. > If you are so worried, override the AI in the moment. This is sneakily inserting a third set of sensors (your own). It can be a valid solution to the problem, but Waymo famously does not have a steering wheel you can just hop behind. This might seem like an edge case, but edge cases matter when failure might kill somebody. 1. https://space.stackexchange.com/questions/9827/if-the-space-... | | |
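The voting pattern described above can be sketched as a toy triple-modular-redundancy check (the tolerance and readings are made up): with three channels a single faulty one is outvoted, while with only two you could detect the conflict but not resolve it.

```python
# Toy 2-out-of-3 voter: return a value backed by at least two channels
# agreeing within tolerance, or None if no majority exists.
def vote(a: float, b: float, c: float, tolerance: float = 0.5):
    for x, y in ((a, b), (a, c), (b, c)):
        if abs(x - y) <= tolerance:
            return (x + y) / 2
    return None  # no majority: caller must enter a fail-safe mode

print(vote(100.0, 100.2, 73.0))  # faulty third channel outvoted, ~100.1
print(vote(100.0, 87.0, 73.0))   # no two agree: None, fail-safe
```

Note this assumes three *comparable* channels (e.g. three pitot tubes); as discussed elsewhere in the thread, dissimilar modalities like camera and LIDAR have no such common frame of reference.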
| ▲ | mafuy 4 days ago | parent | next [-] | | Voting is used when the systems are equivalent, e.g. 3 identical computers, where one might have a bit flip. This is completely different from systems that cover different domains, like vision and lidar. | |
| ▲ | sfifs 4 days ago | parent | prev [-] | | Isn't the historical voting pattern more of a legacy thing, dictated by the limited edge compute of the past, than necessarily a best practice? I see in many domains a tendency to oversimplify decision-making algorithms for human understanding convenience (e.g. vote rather than develop a joint probability distribution in this case; supply chain and manufacturing in particular seem to love rules of thumb) rather than use the better algorithms that modern compute enables for higher performance, safety, etc. | | |
| ▲ | AnIrishDuck 4 days ago | parent [-] | | This is an interesting question where I do not know the answer. I will not pretend to be an expert. I would suggest that "human understanding convenience" is pretty important in safety domains. The famous Brian Kernighan quote comes to mind: > Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it? When it comes to obscure corner cases, it seems to me that simpler is better. But Waymo does seem to have chosen a different path! They employ a lot of smart folk, and appear to be the state of the art for autonomous driving. I wouldn't bet against them. | | |
| ▲ | ImPostingOnHN 2 days ago | parent [-] | | Seatbelt mechanisms are complicated, airbag timing is complicated, let's just do away with them entirely in the name of simplicity? No, when it comes to not killing people, I'd say that safer is usually better. Remember the core function of the system is safety, simplicity is nice to have, but explicitly not as important. That said, beware of calling something 'complicated' just because you don't understand it, especially if you don't have training and experience in that thing. What's more relevant is whether the people building the systems think it is too complicated. |
|
|
| |
| ▲ | qingcharles 5 days ago | parent | prev [-] | | We're trying to build vehicles that are totally autonomous, though. How do you grab the wheel of the new Waymos without steering wheels? Especially if you're in the back seat staring at Candy Crush. | | |
| ▲ | esafak 5 days ago | parent [-] | | Waymos are safer, and drive more defensively than humans. There is no way a Waymo is going to drive aggressively enough to get itself into the trolley problem. |
|
| |
| ▲ | terribleperson 5 days ago | parent | prev | next [-] | | This situation isn't going to happen unless the vehicle was traveling at unsafe speeds to begin with. Cars can stop in quite a short distance. The only way this could happen is if the pedestrian was obscured behind an object until the car was dangerously close. A safe system will recognize potential hiding spots and slow down preemptively - good human drivers do this. | | |
| ▲ | AnIrishDuck 5 days ago | parent [-] | | > Cars can stop in quite a short distance. "Quite a short distance" is doing a lot of lifting. It's been a while since I've been to driver's school, but I remember them making a point of how long it could take to stop, and how your senses could trick you to the contrary. Especially at highway speeds. I can personally recall a couple (fortunately low stakes) situations where I had to change lanes to avoid an obstacle that I was pretty certain I would hit if I had to stop. | | |
| ▲ | terribleperson 4 days ago | parent [-] | | At the driving school I attended, they had us accelerate to 50 mph and then slam on the brakes so we'd have a feel for the distance (and the feel). While it's true they don't stop instantaneously at highway speeds, cars shouldn't be driving highway speeds when a pedestrian suddenly being in front of you is a realistic risk. | | |
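For a rough sense of scale, the standard reaction-plus-braking model puts numbers on this (all inputs are textbook assumptions, not measured data: a human-like 1.5 s reaction time, friction coefficient 0.7). The braking term grows with the square of speed, which is why "cars can stop quite quickly" stops being true at highway speed.

```python
# Rough stopping-distance model: distance covered during reaction time,
# plus braking distance v^2 / (2 * mu * g) from the friction limit.
def stopping_distance_m(speed_mph: float, reaction_s: float = 1.5,
                        mu: float = 0.7, g: float = 9.81) -> float:
    v = speed_mph * 0.44704             # mph -> m/s
    reaction = v * reaction_s           # travelled before brakes engage
    braking = v ** 2 / (2 * mu * g)     # kinetic energy vs. friction
    return reaction + braking

print(round(stopping_distance_m(30)))   # ~33 m
print(round(stopping_distance_m(50)))   # ~70 m
print(round(stopping_distance_m(70)))   # ~118 m
```

An automated system has a much shorter reaction term, but the quadratic braking term is physics and applies to humans and computers alike.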
| ▲ | AnIrishDuck 4 days ago | parent [-] | | What if the obstacle is not a person? What if something falls off a truck in front of the vehicle? What if wildlife spontaneously decides to cross the road (a common occurrence where I live)? I don't think these problems can just be assumed away. |
|
|
| |
| ▲ | cameldrv 4 days ago | parent | prev | next [-] | | You don't really ever have "two sensors" in the sense that it's two measurements. You have multiple measurements from each sensor every second. Then you accumulate that information over time to get a reliable picture. If the probability of failure on each frame were independent, it would be a relatively simple problem, but of course you're generally going to get a fairly high correlation from one frame to the next about whether or not there's a pedestrian in a certain location. The nice thing about having multiple sensing modalities is that the failure correlation between them is a lot lower. For example, say you have a pedestrian that's partially obscured by a car or another object, and maybe they're wearing a hat or a mask or wearing a backpack or carrying a kid or something, it may look unusual enough that either the camera or the lidar isn't going to recognize it as a person reliably. However, since the camera is generally looking at color, texture, etc in 2D, and the Lidar is looking at 3D shapes, they'll tend to fail in different situations. If the car thinks there's a substantial probability of a human in the driving path, it's going to swerve or hit the brakes. | |
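The failure-correlation point above is just arithmetic, but it is worth making explicit (the 1% per-frame miss rates are invented for illustration): the win from a second modality comes almost entirely from how *uncorrelated* its failures are with the first.

```python
# Illustrative only: suppose each modality misses a pedestrian
# in 1% of frames.
p_cam_miss = 0.01
p_lidar_miss = 0.01

# Independent failures: both miss in the same frame ~0.01% of the time.
p_joint_independent = p_cam_miss * p_lidar_miss

# Perfectly correlated failures (e.g. two cameras blinded by the same
# sun glare): the second sensor buys you nothing, still ~1%.
p_joint_correlated = min(p_cam_miss, p_lidar_miss)

print(p_joint_independent, p_joint_correlated)
```

Real modalities sit somewhere between these extremes, but camera + LIDAR sits much closer to the independent end than camera + camera does.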
| ▲ | consumer451 5 days ago | parent | prev | next [-] | | > > If multiple sensor modalities disagree, even without sensor fusion, you can at least assume something might be awry and drop into a maximum safety operation mode. > This is not always possible. You're on a two lane road. Your vision system tells you there's a pedestrian in your lane. Your LIDAR says the pedestrian is actually in the other lane. There's enough time for a lane change, but not to stop. > What do you do? Go into your failure mode. At least you have a check to indicate a possible issue with 2 signals. | | |
| ▲ | Mentlo 4 days ago | parent | next [-] | | I came here to write the same comment you did. What I’d suspect (I don’t work in self driving but I do in AI) is the issue is that this mode of operation would happen more often than not, as the sensors disagree in critical ways more often than you’d think. So going “safety first” every time likely critically diminishes UX. The issue is not recognising that optimising for UX at the expense of safety here is the wrong call, motivated likely by optimism and a desire for autonomous cars, more than reasonable system design. I.e. if the sensors disagree so often that it makes the system unusable, maybe the solution is “we’re not ready for this kind of technology and we should slow down” rather than “let’s figure out non-UX-breaking edge case heuristics to maintain the illusion of autonomous driving being around the corner”. Part of this problem is not even technological - human drivers trade off safety for UX all the time - so the expectation for self driving is unrealistic, and your system has to have the ethically unacceptable system configuration in order to have any chance of competing. Which is why - in my mind - it’s a fool's endeavour in personal car space, but not in public transport space. So go waymo, boo tesla. | 
| ▲ | ethbr1 4 days ago | parent | prev [-] | | Exactly my point. That you know the systems disagree is a benefit, compared to a single system. People are underweighting the alternative single system hypothetical -- what does a Tesla do when its vision-only system erroneously thinks a pedestrian is one lane over? |
| |
| ▲ | ranger_danger 5 days ago | parent | prev | next [-] | | > This is not always possible. You're on a two lane road. Your vision system tells you there's a pedestrian in your lane. Your LIDAR says the pedestrian is actually in the other lane. There's enough time for a lane change, but not to stop. This is why good redundant systems have at least 3... in your scenario, without a tie-breaker, all you can do is guess at random which one to trust. | | |
| ▲ | Someone1234 5 days ago | parent [-] | | That's a good point, but people do need to keep in mind that many engineered systems with three points of reference have three identical points of reference. That's why it works so well, a common frame of reference (i.e. you can compare via simple voting). For example jet aircraft commonly have three pitot static tubes, and you can just compare/contrast the data to look for the outlier. It works, and it works well. If you tried to do that with e.g. LIDAR, vision, and radar with no common point of reference, solving for trust/resolving disagreements is an incredibly difficult technical challenge. Other variations (e.g. two vision + one LIDAR), does not really make it much easier either. Tie-breaking during sensor fusion is a billion+ dollar problem, and will always be. |
| |
| ▲ | abraae 5 days ago | parent | prev [-] | | > Never go to sea with two chronometers; take one or three. |
| |
| ▲ | leoc 5 days ago | parent | prev [-] | | > If multiple sensor modalities disagree, even without sensor fusion, you can at least assume something might be awry and drop into a maximum safety operation mode. Also, this is probably when Waymo calls up a human assistant in a developing-country callcentre. | | |
| ▲ | ethbr1 4 days ago | parent [-] | | Saw that happen a week ago, actually. Non-sensor problem, but a Waymo made a slow right turn too wide, approached the left turning lane of cars, then safed itself by stopping, then remote assistance came online and extricated it. |
|
| |
| ▲ | jqpabc123 5 days ago | parent | prev | next [-] | | The same reason why Tesla needed vision-only to work (cost & timeline) But vision only hasn't worked --- not as promised, not after a decade's worth of timeline. And it probably won't any time soon either --- for valid engineering reasons. Engineering 101 --- *needing* something to work doesn't make it possible or practical. | |
| ▲ | ra7 5 days ago | parent | prev | next [-] | | The complexity argument rings hollow to me. It’s a bit like saying distributed databases are complex because you have to deal with CAP guarantees. Yes, but people still develop them because it has real benefits. It was maybe a valid argument 10 years ago, but in 2025 many companies have shown sensor fusion works just fine. I mean, Waymo has clocked 100M+ miles, so it works. The AV industry has moved on to more interesting problems, while Tesla and Musk are still stuck in the past arguing about sensor choices. | | |
| ▲ | leoc 5 days ago | parent [-] | | Well, it's more like sensor fusion plus extensive human remote intervention, it seems: https://www.nytimes.com/interactive/2024/09/03/technology/zo... . Mind you, if it takes both LiDAR and call-centre workers to make self-driving work in 2025 and for the foreseeable future, that makes Tesla's old ambition to achieve it with neither look all the more hopeless. | | |
| |
| ▲ | microtherion 5 days ago | parent | prev | next [-] | | > but then the challenge is reconciling disagreements with calibrated, and probabilistic fusion I keep reading arguments like this, but I really don't understand what the problem here is supposed to be. Yes, in a rule based system, this is a challenge, but in an end-to-end neural network, another sensor is just another input, regardless of whether it's another camera, LIDAR, or a sensor measuring the adrenaline level of the driver. If you have enough training data, the model training will converge to a reasonable set of weights for various scenarios. In fact, training data with a richer set of sensors would also allow you to determine whether some of the sensors do not in fact contribute meaningfully to overall performance. | |
| ▲ | overfeed 5 days ago | parent | prev | next [-] | | > cost & timeline It's really hard to accept cost as the reason when Tesla is preparing a trillion dollar package. I suppose that can be reconciled if one considers the venture to be a vehicle (ha!) to shovel as much money as possible from investors and buyers into Elon's pockets; I imagine the prospect of being the world's first trillionaire is appealing. | 
| ▲ | Earw0rm 5 days ago | parent | prev | next [-] | | There's no particular reason to use RGB for this kind of machine vision - cognition problem either. Infra-red of a few different wavelengths as well as optical light ranges seems like it'd give a superior result? | | | |
| ▲ | atcon 5 days ago | parent | prev | next [-] | | Your comments on sensor fusion seem to describe the weird results of 2 informal ADAS (lidar, vision, lidar + vision, lidar + vision + 4d imaging radar, etc.) “tournaments” conducted earlier this year. There was an earlier HN post about it <https://news.ycombinator.com/item?id=44694891> with a comment noting “there was a wide range of crash avoidance behavior even between the same car likely due to the machine learning, and that also makes explaining the differences hard. Hopefully someone with more background on ADAS systems can watch and post what they think.” Notably, sensor confusion is also an “unsolved” problem in humans, eg vision and vestibular (inner ear) conflicts possibly explaining motion sickness/vertigo <https://www.nature.com/articles/s44172-025-00417-2> The results of both tournaments: <https://carnewschina.com/2025/07/24/chinas-massive-adas-test...> Counterintuitively, vision scored best (Tesla Model X) The videos are fascinating to watch (subtitles are available):
Tournament 1 (36 cars, 6 Highway Scenarios): <https://www.youtube.com/watch?v=0xumyEf-WRI>
Tournament 2 (26 cars, 9 Urban Scenarios): <https://www.youtube.com/watch?v=GcJnNbm-jUI> Highway Scenarios: “tests...included other active vehicles nearby to increase complexity and realism”: <https://electrek.co/2025/07/26/a-chinese-real-world-self-dri...> Urban Scenarios: “a massive, complex roundabout and another segment of road with a few unsignaled intersections and a long straight...The first four tests incorporated portions of this huge roundabout, which would be complex for human drivers, but in situations for which there is quite an obvious solution: don’t hit that car/pedestrian in front of you” <https://electrek.co/2025/07/29/another-huge-chinese-self-dri...> | |
| ▲ | maxlin 5 days ago | parent | prev | next [-] | | I think you hit the nail on the head - Obviously when Tesla have saturated the potential of vision, they should bring in LiDAR if it can be reasonably added from a hardware point of view. Their current arguments make this clear - it would be surface-level thinking to add LiDAR and the kitchen sink now, complicating the system's evolution and axing scalability. But we're far from plateauing on what can be done with vision - Humans can drive quite well with essentially just sight, so we're far from exhausting what can be done with it. | 
| ▲ | baby 5 days ago | parent | prev | next [-] | | Sure but if you see something in front of you but LIDAR says "nope I can see 500m away" then you know LIDAR is right | |
| ▲ | anthem2025 3 days ago | parent | prev [-] | | Are people under that impression or are you just repeating the sort of nonsense musk pushes about how sensor fusion is bad? |
| |
| ▲ | jmpman 3 days ago | parent | prev | next [-] | | Tesla has redundant front facing cameras on their cars. In my 2019 Model 3, there are three front facing cameras, each with varying angles of view, all three behind the rear view mirror, all encased in a small area lined with anti-reflective material. Living in an extremely hot climate, that small area, with its anti-reflective fuzz, has degraded, depositing a film on the window, only in front of the cameras, obscuring all three cameras at the same time. Now, my Tesla just recently started complaining when the sensors were obscured with this deposit, but that wasn’t always the case. I used to be driving down the freeway with autopilot on, and it could barely track. Eventually I looked at the saved video footage and discovered my Tesla was virtually blind, while driving me down the freeway at 85mph. At least now, with recent updates, it warns me that it can’t see very well. However I question the resolution of the sensors. To drive legally in my state, you must have 20/40 vision. When I move my head around, I effectively have 20/40 vision all around my car. If I close 1 eye, I still have 20/40 vision. Does Tesla have effectively 20/40 vision in all 360 degrees? Maybe one of the front facing cameras has optical resolution equal to 20/40, but do the rest of them? I’m skeptical, and expect I’m being driven by what’s equivalent to a human who couldn’t pass the vision test, or at best, a human with just one eye that can pass the vision test. This isn’t even getting into redundancy in the electronics boards, connectivity from the electronics to the CPU, and redundancy in the processing. We are being asked to put our faith/lives in these non-redundant systems, but they’re not designed like Class-A flight critical systems on airplanes. | 
| ▲ | SoftTalker 5 days ago | parent | prev | next [-] | | > Vision can fail for a multitude of reasons --- sunlight, rain, darkness, bad road markers, even glare from a dirty windshield. And when it fails, there is no backup plan So like a human driver. Problem is, automatic drivers need to be substantially better than humans to be accepted. | | |
| ▲ | tarsinge 4 days ago | parent [-] | | Humans have a brain though. Current AI is nowhere near that, as every engineer knows, but common people seem to forget it with all the PR. |
| |
| ▲ | brandonagr2 4 days ago | parent | prev | next [-] | | Lidar is not a backup to vision, in a waymo both lidar and vision must be working, so you actually have less reliability as now you have two single points of failure. | |
| ▲ | Ocha 5 days ago | parent | prev [-] | | Yeap. Same mistake that Boeing did with making redundancy optional upgrade on max8. | | |
| ▲ | jqpabc123 5 days ago | parent [-] | | Another example of what happens when management starts making engineering decisions. |
|
|
|
| ▲ | CjHuber 5 days ago | parent | prev | next [-] |
Just imagine if Tesla had subsidized LIDAR on every car they ship to collect data. Wow, that dataset would be crazy, and it would even improve their vision models by having ground truth to train on. He’s such a moron |
| |
| ▲ | nolist_policy 5 days ago | parent | next [-] | | This. It's also the reason Waymo is ahead, they have tons of high quality training data being constantly fed into their pipeline. | |
| ▲ | wombat-man 5 days ago | parent | prev | next [-] | | I think LIDAR was and maybe still is way more expensive. Initially running 75k. Now they're more around 10k which is better. | | |
| ▲ | hoytschermerhrn 5 days ago | parent | next [-] | | The new electric Volvos have LIDAR, proving that the technology has (at least now) approached mass-market feasibility. | | |
| ▲ | dzhiurgis 4 days ago | parent | next [-] | | A single car in US whose lidar is not operational yet and burns thru cameras? Wouldn’t call it success just yet. | |
| ▲ | hnburnsy 5 days ago | parent | prev [-] | | Ummm, it is actually active with ADAS anywhere? Certainly not in the US. >The EX90's LiDAR enhances ADAS features like collision mitigation and lane-keeping, which are active and assisting drivers. However, full autonomy (Level 3) is not yet available, as the Ride Pilot feature is still under development and not activated. |
| |
| ▲ | kibwen 5 days ago | parent | prev | next [-] | | This is off by orders of magnitude. BYD is buying LIDAR units for their cars for $140. | | |
| ▲ | onlyrealcuzzo 5 days ago | parent | next [-] | | That's likely closer to reality now, but that's not counting the cost for R&D to add it to the car, any additional costs that come with it besides the LIDAR hardware, plus the added cost to install it. All of that combined is probably closer to $1k than to $140. And, again, that's - what - 10 years after Tesla originally made the decision to go vision only. It wasn't a terrible idea at the time, but they should've pivoted at some point. They could've had a massive lead in data if they pivoted as late as 3 years ago, when the total cost would probably be under $2.5k, and that could've led to a positive feedback loop, cause they'd probably have a system better than Waymo by now. Instead, they've got a pile of garbage, and no path to improve it substantially. | | |
| ▲ | terribleperson 5 days ago | parent [-] | | I can't be sure, but I doubt Tesla is spending less than $140 on their cameras. High fidelity, high frame rate color cameras aren't actually cheap... | | |
| ▲ | onlyrealcuzzo 5 days ago | parent [-] | | Not all LIDARs are equal. Just because BYD is spending $140 on a LIDAR system does not mean it's the same quality as the Waymo system reported to cost $75k almost a decade ago, or, especially, the same quality as the ones in use today. They might be! But I doubt it. I don't know enough about Tesla's cameras, but it's not implausible to think there are LIDARs of low enough quality that you'd be better off with a good quality camera for your sensor. Again, I doubt this is the case with BYD's LIDARs. But it's still worth pointing out, I think. My point is, BYD's LIDAR system costing $x is only one small part of the conversation. | |
| ▲ | lobsterthief 4 days ago | parent [-] | | I would say a $140 LIDAR system that’s currently being used in production cars [somewhere] is better than a $0 non-existent LIDAR system. Pair a cheap LIDAR system with some nice cameras and perhaps you can make up much of the difference in software. |
|
|
| |
| ▲ | wombat-man 4 days ago | parent | prev [-] | | Well maybe Tesla should adopt it then. |
| |
| ▲ | realo 5 days ago | parent | prev [-] | | My floor-cleaning robot has a lidar and i am pretty certain that part did not cost 10k$. | | |
| ▲ | peterfirefly 5 days ago | parent | next [-] | | It goes very slow and it doesn't need to work with high resolution or long distances. It has plenty of time to average out noise. Solid-state LIDAR is still a fairly new thing. LIDAR sensors were big, clunky, and expensive back when Tesla started their Autopilot/FSD program. I googled a bit and found a DFR1030 solid-state LIDAR unit for 267 DKK (for one). It has a field of view of 108 degrees and an angular resolution of 0.6 degrees. It has an angle error of 3 degrees and a max distance of 300mm. It can run at 7.5-28 Hz. Clearly fine for a floor-cleaning robot or a toy. Clearly not good enough for a car (which would need several of them). | |
| ▲ | wombat-man 4 days ago | parent | prev [-] | | well, probably different grades of lidar for different use cases. |
|
| |
| ▲ | CMay 5 days ago | parent | prev | next [-] | | Even if Tesla wasn't using LIDAR, I think they did still use radar and ultrasonic detection for a while, which I'm sure contributed to their models some. | |
| ▲ | 5 days ago | parent | prev [-] | | [deleted] |
|
|
| ▲ | amelius 5 days ago | parent | prev | next [-] |
| Your comparison to hallucination is spot on. LLMs have shown the general public how AI can be plain wrong and shouldn't be trusted for everything. Maybe this influences how they, and regulators, will think about self driving cars. |
| |
| ▲ | bbarnett 5 days ago | parent [-] | | Well I wish this was true. But loads of DEVs on here will claim LLMs are infallible. And the general public?! No way. Most are completely unaware of the foibles of LLMs. | | |
| ▲ | Cornbilly 5 days ago | parent | next [-] | | HN posters know better but a lot of them won’t be honest because they want to protect their investments and/or their employer. | |
| ▲ | jama211 5 days ago | parent | prev | next [-] | | No they don’t. Don’t lie. | |
| ▲ | BoiledCabbage 5 days ago | parent | prev [-] | | > Well I wish this was true. But loads of DEVs on here will claim LLMs are infallible. No they don't. You're making a straw man rather than trying to put forth an actual argument in support of your view. If you feel you can't support your point, then don't try to make it. | |
| ▲ | greenchair 5 days ago | parent | next [-] | | It's done in a roundabout way. Usually with a variation of "you had a bad experience because you are using the tool incorrectly, get good at prompting". | | |
| ▲ | Eisenstein 5 days ago | parent [-] | | That's a response to 'I don't get good results with LLMs and therefore conclude that getting good results with them is not possible'. I have never seen anyone claim that they make no mistakes if you prompt them correctly. |
| |
| ▲ | bbarnett 5 days ago | parent | prev [-] | | A straw man? An actual argument? I responded to this parent comment: "LLMs have shown the general public how AI can be plain wrong and shouldn't be trusted for everything." You take issue with my response of: "loads of DEVs on here will claim LLMs are infallible" You're not really making sense. I'm not straw-manning anything, as I'm directly discussing the statement made. What exactly are you presuming I'm throwing a straw man over? It's entirely valid to say "there are loads of supposed experts that don't see this point, and you're expecting the general public to?". That's clearly my statement. You may disagree, but that doesn't make it a strawman. Nor does it make it a poorly phrased argument on my part. Do pay better attention please. And your entire last sentence is way over the line. We're not on reddit. | | |
| ▲ | nickthegreek 5 days ago | parent | next [-] | | show us the infallible comments. | |
| ▲ | gellybeans 5 days ago | parent | prev [-] | | This is just subjective spew. The irony of telling someone not to be rude while being absolutely insufferable. Peak redditor behavior. Please provide examples. Thank you! |
|
|
|
|
|
| ▲ | danans 5 days ago | parent | prev | next [-] |
| > And why not just use LIDAR that can literally see around corners in 3D? Based on what I've read over the years: it costs too much for a consumer vehicle, it creates unwanted "bumps" in the vehicle visual design, and the great man said it wasn't needed. Yes, those reasons are not for technology or safety. They are based on cost, marketing, and personality (of the CEO and fans of the brand). |
| |
| ▲ | fooblaster 5 days ago | parent [-] | | Lidar is being manufactured in China at a volume of millions a year by RoboSense, Huawei, and Hesai. BOM cost is on the order of a few hundred dollars - slightly more than automotive radar. The situation is a lot different in 2025 than it was in 2017. |
|
|
| ▲ | beng-nl 5 days ago | parent | prev | next [-] |
| I’ve always wondered about LiDAR - how can multiple units sweep a scene at the same time (as would be the case for multiple cars driving close together, all using lidar)? One unit can’t distinguish return signals between itself and other units, can it? |
| |
|
| ▲ | fossuser 5 days ago | parent | prev | next [-] |
I use FSD in my Model S daily to commute from SF to Palo Alto, along with most of my other Bay Area driving. It currently does a better job than most people, it drives me 95% of the time now, and I haven't had the phantom braking. I'm in a 2025 with HW4, but its dramatic improvement over the last couple of years (previously I had a 2018 Model 3) increased my confidence that Elon was right to focus on vision. It wasn't until late last year that I found myself using it more than not; now I use it on almost every drive, point to point (Cupertino to SF), and it does it. I think people are generally sleeping on how good it is, and the politicization means people are undervaluing it for stupid reasons. I wouldn't consider a non-Tesla because of this (unless it was a stick-shift sports car, but that's for different reasons). Their lead is so crazy far ahead that it's weird to see this reality and then see the comments on HN that are so wrong. Though I guess it's been that way for years. The position against lidar was that it traps you in a local max; that humans use vision; that roads and signs are designed for vision, so you're going to have to solve that problem anyway, and when you do, lidar becomes a redundant waste. The investment in lidar takes time away from training vision and may make it harder to do so. That's still the case. I love Waymo, but it's doomed to be localized to populated areas with high-res mapping - that's a great business, but it doesn't solve the general problem. If Tesla keeps jumping on the vision lever and solves it, they'll win it all. There's nothing in physics that makes that impossible, so I think they'll pull it off. I'd really encourage people here with a bias to dismiss to ignore the comments and just go try it out for yourself in real life.
| |
| ▲ | cpuguy83 5 days ago | parent | next [-] | | This is extremely narrow-minded.
As another commenter pointed out, you are driving on easy mode in terms of environment, and where a majority of the training was done. This is not a general solution; it is an SF one... at best. Most humans also don't get in accidents or have problems with phantom braking within the timeframe that you mentioned. | | |
| ▲ | brandonagr2 4 days ago | parent | next [-] | | > Most humans also don't get in accidents Have you met any humans? Or seen people driving? | | |
| ▲ | cpuguy83 3 days ago | parent [-] | | Way to cut my sentence in half. That's not what I said and you know it. | | |
| |
| ▲ | fossuser 5 days ago | parent | prev [-] | | Oh please - people excuse and dismiss major accomplishments, you can send a skyscraper to mars and people on HN will still be calling you a fraud. The Bay Area has massive traffic, complex interchanges, SF has tight difficult roads with heavy fog. Sometimes there’s heavy rain on 280. 17 is also non trivial. What Tesla has done is not trivial and roads outside the bay are often easier. People can ignore this to serve their own petty cognitive bias, but others reading their comments should go look at it for themselves. | | |
| ▲ | Idesmi 3 days ago | parent | next [-] | | > you can send a skyscraper to mars and people on HN will still be calling you a fraud To date, SpaceX has sent nothing to Mars. Not to understate the company's accomplishment, but "people on HN" are fed up exactly with statements like yours. | | |
| ▲ | fossuser 3 days ago | parent [-] | | People here just whine and complain - yes they’ve “only” just sent a skyscraper to space for now and caught the booster on reentry, it’s a work in progress (along with their reusable rockets, earth scale telecom side project etc.) My point is people will still be calling him a fraud when they do get it to mars, no evidence is sufficient for the HN cynic that thinks their “above the fray” ethos makes them smart. Tesla has had massive success despite the haters, the model y becoming the literally best selling car on earth and you wouldn’t know it from HN. FSD has gotten really good, good enough to use more than not as they continue to improve it. The best thing about capitalism is the losers here don’t matter - the winners get rich and keep going. | | |
| ▲ | cpuguy83 3 days ago | parent [-] | | I said nothing about SpaceX here nor did I condemn Tesla ... or even mention Tesla. | | |
| ▲ | fossuser 3 days ago | parent [-] | | You downplayed what Tesla FSD can do and said I was being narrow minded and the Bay Area driving is "easy mode" and said vision isn't a general solution. I think none of this is true. |
|
|
| |
| ▲ | lightedman 5 days ago | parent | prev [-] | | I have ridden in many Tesla-based Ubers with human drivers using autopilot. Here outside of Los Angeles, about an hour east, they do not do well at all on their 'auto-pilot.' Your area has the benefit of being one of the primary training areas, and thus the dataset for your area is good. Try that here. I'll be more than happy to watch you piss yourself as the Tesla tries to take you into the HOV lane THROUGH THE BARRIERS. | | |
| ▲ | drak0n1c 5 days ago | parent [-] | | Auto-Pilot is not FSD. It's akin to a regular carmaker's Automatic-Braking-System and Lane-Keep-Assist. If you're seeing it used dangerously that's user error. |
|
|
| |
| ▲ | oblio 5 days ago | parent | prev | next [-] | | You're basically driving on easy mode, in the Bay Area. Dry climate, sunshine all year round, pretty solid developed country infrastructure. | |
| ▲ | a123b456c 5 days ago | parent | prev | next [-] | | OK so you believe "Elon was right" and people should "ignore the comments" Hmm very interesting. | |
| ▲ | nova22033 3 days ago | parent | prev | next [-] | | > politicization How is it politicization when TESLA THE COMPANY is saying Full Self Driving doesn't mean "Full" "Self" Driving? If it is as good as you claim, why doesn't Tesla claim it's Full Self Driving? | |
| ▲ | ponector 5 days ago | parent | prev | next [-] | | >> it drives me 95% of the time now But what is the point to use it everywhere if you still need to pay attention to the road, keep hands on the steering wheel? | | |
| ▲ | fossuser 4 days ago | parent [-] | | You don’t need hands on the wheel anymore, just looking out the window. It’s way more relaxed. It’ll be nice when that’s not required anymore, but even today it’s way more comfortable. |
| |
| ▲ | kolanos 5 days ago | parent | prev | next [-] | | HW4 is really a game changer. I was absolutely floored by HW4 FSD during a recent test drive. Tesla is accomplishing some truly groundbreaking technical achievements here. But you wouldn't know it through all the Elon Musk noise (pro and con). I'd encourage anyone to take a test drive and put FSD through its paces. I went in with a super critical mindset and walked away stunned. | | |
| ▲ | fossuser 5 days ago | parent | next [-] | | Yeah it’s amazing | |
| ▲ | anthem2025 3 days ago | parent | prev [-] | | I’m gonna go ahead and guess that by “super critical” you actually mean that you went in an Elon worshipper and left an Elon worshipper. |
| |
| ▲ | anthem2025 3 days ago | parent | prev | next [-] | | What lead? They are way behind Waymo. Why would anyone listen to the opinion of someone who bought a Tesla in 2025? The only people still buying them are musk fanboys. | |
| ▲ | pbhjpbhj 5 days ago | parent | prev [-] | | [flagged] | | |
| ▲ | fossuser 5 days ago | parent | next [-] | | Thank you for exemplifying what I’m talking about. I should really buy more TSLA. | |
| ▲ | sixQuarks 5 days ago | parent | prev [-] | | So he does nazi salutes and is totally buddy buddy with Netanyahu. Ok | | |
| ▲ | ksenzee 5 days ago | parent [-] | | Those two things are now compatible. Unthinkable for those of us who were around in the 20th century, but now true. | | |
| ▲ | tialaramex 5 days ago | parent [-] | | It also makes this horrible kind of sense that Elon would see them both as admirable, this idea that you're the only person who matters. Ordinary people exist only for you to exploit them, and have no intrinsic worth. |
|
|
|
|
|
| ▲ | teleforce 5 days ago | parent | prev | next [-] |
>why not just use LIDAR that can literally see around corners in 3D? LIDAR requires line-of-sight (LoS), hence it cannot see around corners, but RADAR probably can. It's interesting to note that the second most popular Tesla post of all time is from 9 years ago, about its full self-driving hardware (second only to the controversial Cybertruck) [1]. >Elon's vision-only move was extremely "short-sighted" Elon's vision was misguided because some of the technologists at the time, including him, seem to have truly believed that AGI was just around the corner (pun intended). Now most tech people have walked back the AGI claim, blaming the blurry definition of AGI, but for me the true killer AGI application has always been fully autonomous level 5 driving with only human-level sensor perception, minus the LIDAR and RADAR. The goal is so complex, though, that I truly believe it will not be achieved in the foreseeable future. [1] All Tesla Cars Being Produced Now Have Full Self-Driving Hardware (2016 - 1090 comments): https://news.ycombinator.com/item?id=12748863
|
| ▲ | UltraSane 5 days ago | parent | prev | next [-] |
Camera-only might work better if you used regular digital cameras along with more advanced cameras, like event-based cameras that send pixels as soon as they change brightness and have microsecond latency, and/or Single Photon Avalanche Diode (SPAD) sensors, which can detect single photons. Having the same footage from all three of these would enable some fascinating training options. But Tesla didn't do this.
|
| ▲ | cuttothechase 5 days ago | parent | prev | next [-] |
Cannot agree more on this phantom braking. I rented a Tesla a while back and drove from the Bay to Death Valley. On clear roads with no hazards whatsoever, the car hit the brakes at highway speeds. It scared the bejeesus out of me! It completely put me off the auto drive and derailed my plans to buy a Tesla.
|
| ▲ | duxup 5 days ago | parent | prev | next [-] |
The around-corners thing: when I saw demos of it seeing vehicles the driver can't even see... I wanted it for my non-self-driving car. It's just too big of an advantage to skimp on.
|
| ▲ | JumpCrisscross 5 days ago | parent | prev | next [-] |
| > maybe there are filters to disregard unlikely objects as irrelevant, which act as guardrails against random braking The filters introduce the problem of incorrectly deleting something that really is there. |
|
| ▲ | paradox460 5 days ago | parent | prev | next [-] |
What's oddest about the wiper tech is that we've had automated wipers since at least the 70s. As a kid, my neighbor's Cadillac had them. tl;dr: you can use optics to determine if there's rain on a surface, from below, without having to use any fancy cameras or anything - just a light source and a light sensor. If you're into this sort of thing, you can buy these sensors and use them as a rain sensor, either as a binary "yes, it's rained" or as a tipping-bucket replacement: https://rainsensors.com
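For the curious, the optical trick is total internal reflection: an IR beam aimed at the windshield from inside reflects completely back to the sensor when the outside is dry air, but water on the glass raises the critical angle, lets light escape, and drops the reflected signal. A sketch with Snell's law, using typical (assumed) refractive indices of about 1.5 for glass and 1.33 for water:

```python
import math

def critical_angle_deg(n_inside, n_outside):
    """Angle of incidence beyond which light totally reflects at the
    interface (Snell's law: sin(theta_c) = n_outside / n_inside)."""
    return math.degrees(math.asin(n_outside / n_inside))

print(critical_angle_deg(1.5, 1.0))    # glass -> air: ~41.8 degrees
print(critical_angle_deg(1.5, 1.33))   # glass -> water: ~62.5 degrees
```

An emitter aimed at the glass at roughly 45 degrees sits above the glass-air critical angle (total reflection, strong signal) but below the glass-water one, so any wet patch leaks light out and the sensor reads the drop as rain.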
|
| ▲ | mellosouls 5 days ago | parent | prev | next [-] |
| Elon's vision-only move was extremely "short-sighted" (heheh) Careful. HN takes a dim view of puns. |
|
| ▲ | fred_is_fred 5 days ago | parent | prev | next [-] |
"There are many reasons but that drives it home for me multiple times a week is that my Tesla's wipers will randomly sweep the windshield for absolutely no reason." Self-starting wipers use some kind of current/voltage measurement on the windshield, right - unrelated to self-driving? They've been around longer than Tesla - or are you just saying it's another random failure?
|
| ▲ | debo_ 5 days ago | parent | prev | next [-] |
| I upvoted this just for "short-sighted." |
|
| ▲ | diebeforei485 5 days ago | parent | prev | next [-] |
| How does lidar see around corners? |
| |
| ▲ | xnx 3 days ago | parent [-] | | Waymo sensor pods are mounted at the corners of the vehicle allowing it to see things that a driver can't. (E.g. when pulling out of an alley) |
|
|
| ▲ | aaron695 5 days ago | parent | prev | next [-] |
| [dead] |
|
| ▲ | maxlin 5 days ago | parent | prev | next [-] |
This and that. FUD this, FUD that. Tesla have communicated clearly why "adding" LiDAR isn't an improvement for a system with goals as high as theirs. Remember, no vision system yet is as good as humans are with vision, so obviously there's a lot left to do with vision. Check this for a reference on how well Tesla's vision-only system fares against the competition, many of whom have LiDAR. Keep-it-simple wins the game.
https://www.youtube.com/watch?v=0xumyEf-WRI |
| |
| ▲ | FireBeyond 5 days ago | parent [-] | | Is this like the same BS as Elon on an investor call recently? One analyst asked about the reliability of Tesla’s cameras when confronting sun glare, fog, or dust. Musk claimed that the company’s vision system bypasses image processing and instead uses direct photon counting to account for “noise” like glare or dust. This... is horseshit. Photon counting is not something you can do with a regular camera or any camera installed on a Tesla. A photon-counting camera doesn't produce imagery that is useful for vision. Even beyond that, it requires a closed environment so that you can, you know, count them in a controlled manner, not an open outside atmosphere. It's bullshit. And Elon knows it. He just thinks that you are too stupid to know it, and will instead think "Oh, yeah, that makes sense, what an awesome idea, why is only Tesla doing this?" and be wowed by Elon's brilliance. | | |
| ▲ | maxlin 3 days ago | parent [-] | | They referred to using 10 bit / unprocessed sensor data bypassing the normal processing with that terminology. But go ahead fight some weird strawman you built. Did you even look at the video? Don't think you did. |
|
|
|
| ▲ | moralestapia 5 days ago | parent | prev | next [-] |
| >Elon's vision-only move was extremely "short-sighted" It wasn't Elon's but Karpathy's. |
| |
| ▲ | Fricken 5 days ago | parent | next [-] | | Sterling Anderson was the first autopilot director, and he was fired for insisting on Lidar. Elon sued Sterling Anderson, then hired the bootlick Karpathy to help him grease chumps. | | |
| ▲ | mcv 5 days ago | parent [-] | | But why is Elon so opposed to Lidar? I don't get it. | | |
| ▲ | fossuser 5 days ago | parent | next [-] | | He argued the case in 2016 iirc. The position against lidar was that it traps you in a local max, that humans use vision, that roads and signs are designed for vision so you're going to ultimately have to solve that problem and when you do lidar becomes a redundant waste. The investment in lidar wastes time from training vision and may make it harder to do so. That's still the case. I love Waymo, but it's doomed to be localized to populated areas with high-res mapping - that's a great business, but it doesn't solve the general problem. If Tesla keeps jumping on the vision lever and solves it they'll win it all. There's nothing in physics that makes that impossible so I think they'll pull it off. His model is all this sort of first principles thinking, it's why his companies pull off things like starship. I wouldn't bet against it. | | |
| ▲ | Applejinx 5 days ago | parent | next [-] | | If humans had radar they would reverse into obstacles less often, and not be blindsided or T-boned as readily so long as their radar could still reach the object moving rapidly in their direction. Elon is being foolish and weirdly anthropomorphic. | | |
| ▲ | fossuser 4 days ago | parent [-] | | If humans had ten eyes always looking simultaneously and never got tired they would also not hit stuff. |
| |
| ▲ | moralestapia 5 days ago | parent | prev | next [-] | | And yet ... Tesla is backtracking (read TFA) while Waymo is steadily getting there. | |
| ▲ | 5 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | 5 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | anthem2025 3 days ago | parent | prev [-] | | It’s amazing people heard that argument and didn’t immediately write him off as having zero clue what he’s talking about. |
| |
| ▲ | Fricken 5 days ago | parent | prev [-] | | At that time Lidar was too expensive and ugly to be putting in every car. Robust Lidar for SAE level 4 autonomous vehicles is still not cheap and still pretty ugly. | | |
| ▲ | polishdude20 5 days ago | parent [-] | | But that's what's needed. Tesla could have developed an effective and cheap lidar if they decided millions of their cars needed it. | | |
| ▲ | Fricken 4 days ago | parent [-] | | Not too long ago there were over 5 dozen startups in the automotive grade lidar space. Lidar is now much cheaper and smaller, but still very conspicuous and still too expensive to be putting in every vehicle. |
|
|
|
| |
| ▲ | pinkmuffinere 5 days ago | parent | prev | next [-] | | For decisions of this scale (ie, tens of years of development time, committing multiple products to a single unproven technology), the CEO really should be involved. Maybe they’ll just decide to take the recommendation of the SMEs, but it’s hard for me to imagine Elon had no say in it. | |
| ▲ | amelius 5 days ago | parent | prev [-] | | I suspect so too, but is it factual? |
|
|
| ▲ | qoez 5 days ago | parent | prev | next [-] |
| Not sure it was actually Elon's move though, I heard it was mainly a decision taken by Andrej Karpathy |
| |
|
| ▲ | weinzierl 5 days ago | parent | prev | next [-] |
I think Elon's prediction was that LIDAR was too expensive and would stay too expensive.
In a sense he was right: LIDAR prices did not drop, and I wonder why that is. |
| |
| ▲ | exhilaration 5 days ago | parent | next [-] | | There's multiple comments in this thread pointing to Chinese car manufacturers paying under $200 for their LIDAR hardware. | | |
| ▲ | mensetmanusman 5 days ago | parent | next [-] | | Price without specs per radian is meaningless. | |
| ▲ | weinzierl 5 days ago | parent | prev [-] | | $200 is still a lot when a bunch of cameras cost maybe $20. | | |
| ▲ | ra7 5 days ago | parent | next [-] | | $200 to enable better FSD vs a decade of struggle to get FSD only partially working with $20 cameras. Which one do you think is more expensive overall? | | |
| ▲ | weinzierl 5 days ago | parent | next [-] | | The fact that we still do not have a significant number of cars with LIDAR on our streets somewhat proves which approach the auto industry considers viable for business. I am much more curious about the next ten years. If we can bring down the cost of a LIDAR unit into parity with camera systems[1], I think I know the answer. But I thought that 10 years ago and it did not happen so I wonder what is the real roadblock to make LIDAR cheap. [1] Which it won't replace, of course. What it will change is that it makes the LIDAR a regular component, not an exceptionally expensive component. | | |
| ▲ | xnx 3 days ago | parent [-] | | The fact that the only working self driving system uses LIDAR might say even more. |
| |
| ▲ | xnx 3 days ago | parent | prev [-] | | 1) make it work 2) make it right 3) make it fast (or cheap in this case) Elon thinks his genius intellect allows him to skip to #3. |
| |
| ▲ | D-Coder 5 days ago | parent | prev | next [-] | | > $200 is still a lot when a bunch of cameras cost maybe $20. Anything except the lowest end car will cost $20K or more, so $200 is one percent of that price. | |
| ▲ | suddenexample 5 days ago | parent | prev [-] | | I mean, I'd rather be building a $30,200 robotaxi that works than a $30,020 robotaxi that doesn't. | | |
| ▲ | weinzierl 5 days ago | parent [-] | | You won't get rich with a $30,200 robotaxi; you won't even have a viable business. The game is the mass market, and there the usual unit of currency is not cents, it's tenths of cents. | | |
| ▲ | anthem2025 3 days ago | parent [-] | | Even if your robotaxi only manages 2000 rides that’s still down to just 10 cents a ride to cover the cost of hardware. It’s nothing. | | |
| ▲ | weinzierl 2 days ago | parent [-] | | All taxis are variants of mass produced cars, that will not be different for robotaxis. The mass market is the enabler and there every tenth of cent counts. |
|
|
|
|
| |
| ▲ | yndoendo 5 days ago | parent | prev [-] | | Investments into re-engineering production to bring down cost are made when there is a market large enough for said product. True self-driving is still a baby that needs to grow and cannot yet compete against an adult human with 30+ years of experience. As self-driving matures to that level, the market will grow. | | |
| ▲ | fooblaster 5 days ago | parent [-] | | Why do all of you think prices haven't come down? I can buy an AT128 from hesai for a few hundred dollars in volume. It's higher performance than any spinning lidar I could buy in 2017. | | |
| ▲ | yndoendo 4 days ago | parent | next [-] | | You may have misinterpreted that; that is not what I said. Once a product starts to sell, time is taken after the initial design to reduce the production cost: try to reuse parts, or replace part A with part B. A machine from early 2018 can be a little different from ones going out the door in late 2018. _Kaizen_ was coined for this. My view is that the mass reduction in cost will come when self-driving is a cost-effective secondary feature on all Toyota vehicles. I see that as the litmus test for knowing that self-driving has reached true utility. Also, well-designed vehicles would need a multi-sensor system to operate in self-driving mode. A human operating a car uses multi-sensor intake; lacking a sense prevents humans from operating a vehicle, which is why blind people need a secondary sensory input like a walking stick. Vehicles need a multi-sensor system to prevent harming, mutilating, and killing passengers and pedestrians. | |
| ▲ | weinzierl 5 days ago | parent | prev [-] | | Elon's bet was one LIDAR against a bunch of cameras. A few hundred dollars is still way too much when you can get the cameras for a few tens. | | |
| ▲ | Applejinx 5 days ago | parent [-] | | In what universe is 'a few hundred dollars is way too much' for implementing full self-driving on an autonomous vehicle that moves like, and at the speeds of and in the spaces of, an automobile? A two to four ton vehicle that can accelerate like a Ferrari and go over 100 mph, fully self-driving, and 'a few hundred dollars is way too much'. Disagree. Even as they are dialing back the claims, which may or may not affect how people use the vehicles. These things respond too quickly for flaky senses based on human sensoriums. |
|
|
|
|
|
| ▲ | enslavedrobot 5 days ago | parent | prev | next [-] |
Are you referring to Autopilot or FSD? Phantom braking has been a solved problem since the release of V12 FSD. As soon as a vision-based car is safer than a human, its flaws don't matter, because it will save lives. Supervised FSD is already safer than a human. |
| |
| ▲ | canadaduane 5 days ago | parent [-] | | "Just git pull, and latest fixes it" is not reassuring in this context. Engineers evaluating your claims need real data, not marketing copy. | | |
| ▲ | enslavedrobot 3 days ago | parent [-] | | FSD is rigorously tested before release. The revisions and updates are safety tested on roads for months before they are released. Tesla also has models that are too big to run on existing production hardware that perform better than the release versions in test cars. Updates are not git pulls and no engineer would ever think that they were. |
|
|
|
| ▲ | jillesvangurp 5 days ago | parent | prev | next [-] |
| Lidar is great for object detection. But it's not great for interpreting the objects.
It will stop you crashing into a traffic light. But it won't be able to tell the color of the light. It won't see the stripes on the road. It won't be able to tell signs apart. It won't enable AIs to make sense of the complex traffic situations. And those complex traffic situations are the main challenge for autonomous driving. Getting the AIs to do the right things before they get themselves into trouble is key. Lidar is not a silver bullet. It helps a little bit, but not a whole lot. It's great when the car has to respond quickly to get it out of a situation that it shouldn't have been in to begin with. Avoiding that requires seeing and understanding and planning accordingly. |
| |
| ▲ | amelius 5 days ago | parent | next [-] | | Meanwhile, the competition who is using LiDAR has FSD cars. You're understating the importance of this sensor. You can train a DL model to act like a LiDAR based on only camera inputs (the data collection is easy if you already have LiDAR cars driving around). If they could get this to work reliably, I'm sure the competition would do it and ditch the LiDAR, but they don't, so that tells us something. | | |
| ▲ | SOLAR_FIELDS 5 days ago | parent | next [-] | | It is very true and worthwhile to point out that the only company deploying L4 at scale is using LIDAR. And that company is not Tesla | | |
| ▲ | UltraSane 5 days ago | parent [-] | | The mental gymnastics Tesla fanboys use to explain this away are incredible. | | |
| ▲ | happyPersonR 5 days ago | parent [-] | | The Tesla social media team actively used to post positive spin on comment threads. It wouldn’t surprise me if they have LLM doing this now. |
|
| |
| ▲ | ModernMech 5 days ago | parent | prev | next [-] | | Researchers had this knowledge in 2007, when the only cars to finish the DARPA Urban Challenge were equipped with Velodyne 3D LIDAR. Elon Musk sent us back a decade by using his platform to ignorantly convince everyone it was possible with cameras alone. For anyone who understands sensor fusion and the Kalman filter, read this and ask yourself if you trust Elon Musk to direct the sensor strategy on your autonomous vehicle: https://www.threads.com/@mdsnprks/post/DN_FhFikyUE For anyone wondering, to a sensors engineer the above tweet is like saying 1 + 1 = 0 -- the truth (and science) is the exact opposite of what he's saying. | |
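The textbook result here: optimally fusing two independent, unbiased estimates always lowers the variance below either sensor's alone. A minimal single-step Kalman-style sketch with made-up numbers (illustrative only, not anyone's production pipeline):

```python
def fuse(mean_a, var_a, mean_b, var_b):
    """Minimum-variance fusion of two independent Gaussian estimates,
    i.e. a single Kalman update step."""
    k = var_a / (var_a + var_b)            # Kalman gain
    mean = mean_a + k * (mean_b - mean_a)  # weighted toward the better sensor
    var = (1 - k) * var_a                  # always <= min(var_a, var_b)
    return mean, var

# Hypothetical camera range estimate: 50 m with 4 m^2 variance.
# Hypothetical LIDAR estimate: 48 m with 0.04 m^2 variance.
m, v = fuse(50.0, 4.0, 48.0, 0.04)
print(m, v)  # fused estimate hugs the LIDAR; variance drops below 0.04
```

Adding a second sensor never makes the optimal estimate worse; at worst (infinite variance in one sensor) it leaves it unchanged, which is why "more sensors hurt" reads as backwards to a sensors engineer.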
| ▲ | cmiles74 5 days ago | parent | prev [-] | | Isn’t this article about Tesla admitting their system is as good as it’s going to get? They’re changing their definition of FSD to pretty much “current state”. |
| |
| ▲ | michaelt 5 days ago | parent | prev | next [-] | | I think you might be under-estimating the importance of not hitting things. If you look at the statistics on fatal car accidents, 85%+ involve collisions with stationary objects or other road users. Nobody's suggesting getting rid of machine vision or ML - just that if you've got an ML+vision system that gets in 1 serious accident per 200,000 miles, adding LIDAR could improve that to 1 serious accident per 2,000,000 miles. | | |
| ▲ | ModernMech 5 days ago | parent [-] | | Because LIDAR can detect the object at the beginning of the perception pipeline, whereas camera can only detect the object after an expensive and time consuming ML inference process. By the time the camera even knows there's an object (if it does at all) the LIDAR would have had the car hitting its brakes. When you're traveling 60 MPH, milliseconds matter. | | |
| ▲ | losvedir 5 days ago | parent [-] | | Just to put numbers on it, 10ms at 60mph is just under a foot. I don't think that matters too much, but if we're talking 200ms, that's about 18 ft, which is substantial. I have no idea how long the ML pipeline is, though. |
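The arithmetic, as a throwaway sketch (60 mph is exactly 88 ft/s):

```python
def latency_distance_ft(speed_mph, latency_ms):
    """Distance traveled during a perception latency, in feet."""
    speed_fps = speed_mph * 5280 / 3600   # mph -> feet per second
    return speed_fps * latency_ms / 1000

print(latency_distance_ft(60, 10))    # 0.88 ft
print(latency_distance_ft(60, 200))   # 17.6 ft
```

So a 200 ms perception pipeline costs roughly a car length of reaction distance at highway speed before braking even begins.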
|
| |
| ▲ | HPsquared 5 days ago | parent | prev | next [-] | | It's an extra sensor you'd add into the mix, you'd still have cameras. Like the radar sensors. I think the reason Teslas don't have it, is because the sensor hardware was expensive a few years back. I assume they are much cheaper now. | | |
| ▲ | ndsipa_pomu 5 days ago | parent [-] | | Tesla have also backed themselves into a corner by declaring that older models are hardware capable of FSD, so they can't easily add LIDAR to new models without being liable for upgrading/refunding previously sold models. | | |
| ▲ | gizajob 5 days ago | parent | next [-] | | New for 2027 - ABSOLUTE Self Driving Pro Max! | |
| ▲ | bbarnett 5 days ago | parent | prev [-] | | I thought they had it on some models already, then removed it on models after? edit: no, it was ultrasonic sensors. But this was likely object detection, and now it's gone. |
|
| |
| ▲ | 5 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | mirsadm 5 days ago | parent | prev [-] | | I don't know about you but one of my primary goals when driving is not hitting into things |
|
|
| ▲ | alex1138 5 days ago | parent | prev | next [-] |
I've defended some of Musk because I think what he did for Twitter was completely necessary (showing Jay Bhattacharya that the old regime had put him on a trends blacklist, and all the other people who got banned for no reason), but things like this are alarming (vision-only as opposed to multiple telemetry, and Tesla has already been accused of killing people through crashes), and it's kind of amazing he's in charge of something like SpaceX (are we about to witness a fatal incident in space?) |
| |
| ▲ | oblio 5 days ago | parent [-] | | > showing Jay Bhattacharya that the old regime had put him on a trends blacklist, and all the other people who got banned for no reason He's doing the exact same thing and worse to people he doesn't like. | | |
| ▲ | alex1138 3 days ago | parent [-] | | I think it's amazing how I can get downvoted for the smallest of things. What you say is plausible; I haven't directly seen the evidence for it, but I'm not inclined to completely doubt you. But I think back to the time when everything surrounding covid was 'misinformation', and Musk (even if a broken clock is right twice a day) genuinely gave people a place to speak. Old Twitter would shut people down so fast (even experts). | |
|
|
|
| ▲ | gcanyon 5 days ago | parent | prev | next [-] |
| The wiper system has nothing to do with self-driving -- it's based on total internal reflection in the glass: https://www.youtube.com/watch?v=TLm7Q92xMjQ |
| |
| ▲ | sean_bright 5 days ago | parent | next [-] | | Teslas do not use the rain sensors discussed in this video, they use cameras to detect rain. | | |
| ▲ | gcanyon 5 days ago | parent [-] | | Oh good lord, why? This is a solved problem, why would they waste their time on it? Wait, I think I know the answer: Elon's famous (at SpaceX) for saying the most reliable part is no part. So maybe this is a consequence of that. In any case, thanks, TIL!
| |
| ▲ | vel0city 5 days ago | parent | prev [-] | | That's how every non-Tesla works. Teslas don't use this method, which is why their auto wipers have always been so bad compared to everyone else's. | |
|
|
| ▲ | torginus 5 days ago | parent | prev | next [-] |
| The mistakes you describe are the issues of the AI system controlling the car, not of the cameras themselves. If you were watching the camera feed and teleoperating the vehicle, no way you'd phantom brake at a sudden bit of glare. |
| |
| ▲ | petee 5 days ago | parent | next [-] | | Going from cameras to the human model: every morning on my way to work, I see people suddenly slam on their brakes because of the sun in their eyes. If you can't see, you can't see. I think it's another good example of why cameras alone are not enough. | |
| ▲ | nosianu 5 days ago | parent | prev [-] | | OP says nothing else??? > this tells me volumes about what's going on in the computer vision system Emphasis: > computer vision system |
|
|
| ▲ | chippiewill 5 days ago | parent | prev | next [-] |
As someone who worked in this space, you are absolutely right, but also kind of wrong - at least in my opinion. The cold hard truth is that LIDARs are a crutch; they're not strictly necessary. We know this because humans can drive without a LIDAR. However, they are a super useful crutch: they give you super high positional accuracy (something that's not always easy to estimate in a vision-only system). Radars are also a super useful crutch because they give really good radial velocity. (Little anecdote: when we finally got the radars working properly at work, it made a massive difference to our car's ability to follow other cars (ACC) comfortably.) Yes, machine learning vision systems hallucinate, but so do humans. The trick for Tesla would be to get it good enough that it hallucinates less than humans do (they're nowhere near yet - humans don't hallucinate very often). It's also worth adding that, last I checked, the state of the art for object detection is early fusion, where you chuck the LIDAR and radar point clouds into a neural net along with the camera input, so it's not like you'd necessarily have the classical-method guardrails with the LIDAR anyway. Anyway, I don't think Tesla were wrong to not use LIDAR - they had good reasons not to go down that route. LIDARs were excessively expensive, and the old-style spinning units were not robust. You could not have sold them on a production car in 2018. Vision systems were improving a lot back then, so the idea that you could have FSD on vision alone was plausible. |
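The early-fusion step described above can be sketched minimally. This is a toy illustration, not any production pipeline: assuming a calibrated pinhole camera, project the LIDAR points into the image and stack a sparse depth channel onto the RGB input before a network ever sees it (the intrinsics `K` and all shapes here are made-up values).

```python
import numpy as np

def early_fusion_input(rgb, lidar_points, K):
    """Stack a sparse LIDAR depth channel onto an RGB image.

    rgb:          (H, W, 3) image
    lidar_points: (N, 3) points in the camera frame (x right, y down, z forward)
    K:            (3, 3) pinhole camera intrinsics
    Returns an (H, W, 4) array a network could consume directly.
    """
    H, W, _ = rgb.shape
    depth = np.zeros((H, W), dtype=np.float32)
    pts = lidar_points[lidar_points[:, 2] > 0]    # keep points in front of the camera
    uvw = (K @ pts.T).T                           # project into the image plane
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)       # pixel column
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)       # pixel row
    ok = (u >= 0) & (u < W) & (v >= 0) & (v < H)  # drop points outside the frame
    depth[v[ok], u[ok]] = pts[ok, 2]              # sparse depth, in metres
    return np.dstack([rgb.astype(np.float32), depth])

# Toy example: one LIDAR return 10 m straight ahead lands in the image centre.
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 32.0],
              [0.0, 0.0, 1.0]])
fused = early_fusion_input(np.zeros((64, 64, 3)), np.array([[0.0, 0.0, 10.0]]), K)
```

The point is architectural: the network gets geometry and appearance in one tensor, so there is no separate classical LIDAR pipeline left over to act as a guardrail.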
| |
| ▲ | raincole 5 days ago | parent | next [-] | | > The cold hard truth is that LIDARs are a crutch The hard truth is there is no reason to limit machines to only the tools humans are biologically born with. Cars always have crutches that humans don't possess. For example, wheels. | | |
| ▲ | dcchambers 5 days ago | parent | next [-] | | Exactly. In a true self-driving utopia, all of the cars are using multiple methods to observe the road and drive (vision, lidar, GPS, etc) AND they are all communicating with each other silently, constantly, about their intentions and status. Why limit cars to what humans can do? | |
| ▲ | mensetmanusman 5 days ago | parent | prev | next [-] | | The hard truth is you are balancing cost benefit curves. | |
| ▲ | daveguy 5 days ago | parent | prev | next [-] | | The "lidar is a crutch" excuse is such a fraud. Musk is doing it so he can make more money, because it's cheaper. That's it. Just another sociopath billionaire cutting corners at the expense of safety. The reason this is clear is that, except for a brief period in late 2022, Teslas have included some combination of radar and ultrasonic sensors. [0] [0] https://en.m.wikipedia.org/wiki/Tesla_Autopilot_hardware | |
| ▲ | profunctor 5 days ago | parent | prev [-] | | The reason is cost, LIDAR is expensive. | | |
| ▲ | kibwen 5 days ago | parent | next [-] | | This information is out of date. LIDAR costs are 10x less than they were a decade ago, and still falling. Turns out, when there's demand for LIDAR in this form factor, people invest in R&D to drive costs down and set up manufacturing facilities to achieve economies of scale. Wow, who could have predicted this‽ | |
| ▲ | throwaway31131 5 days ago | parent | prev | next [-] | | Cost is relative. LIDAR may be expensive relative to a camera or two, but it’s very inexpensive compared to hiring a full-time driver. Crashes aren’t particularly cheap either. Neither are insurance premiums. | |
| ▲ | DennisP 5 days ago | parent | prev | next [-] | | Huawei has a self-driving system that uses three lidars, which cost $250 each (plus vision, radar, and ultrasound). It appears to work about as well as FSD. Here's the Out of Spec guys riding around on it in China for an hour: https://www.youtube.com/watch?v=VuDSz06BT2g | | |
| ▲ | mensetmanusman 5 days ago | parent [-] | | Huawei received over $1 billion in grants from the Chinese government in 2023. Western countries might not be smart enough to keep R&D because Wall Street sees it as a cost center. | | |
| |
| ▲ | ModernMech 5 days ago | parent | prev | next [-] | | You know what used to be expensive? Cameras. Then people started manufacturing them for the mass market and costs went down. You know what else used to be expensive? Structured light sensors. They cost $$$$ in 2009. Then Microsoft started manufacturing the Kinect for a mass market, and in 2010 the price went down to $150. You know what's happened to LIDAR in the past decade? You guessed it, costs have come massively down because car manufacturers started buying more, and costs will continue to come down as they reach mass market adoption. The prohibitive cost of LIDAR coming down was always just a matter of time. A "visionary" like Musk should have been able to see that. Instead he thought he could outsmart everyone by using a technology that was not suited for the job, but he made the wrong bet. | |
| ▲ | jqpabc123 5 days ago | parent [-] | | but he made the wrong bet. This should be expected when someone who is *not* an experienced engineer starts making engineering decisions. |
| |
| ▲ | zbrozek 5 days ago | parent | prev | next [-] | | It's not 2010 anymore. They will asymptotically reach approximately twice the price of a camera, since they need both a transmit and a receive optical path. Right now the cheapest of the good LiDARs are around 3-4x that. So we're getting close, and we're already within the realm of large-scale commercial viability. | |
| ▲ | uoaei 5 days ago | parent | prev [-] | | That's ok, they're supposed to be. That's no excuse to rush a bad job. | | |
| ▲ | revnode 5 days ago | parent [-] | | The point of engineering is to make something that’s economically viable, not to slap together something that works. Making something that works is easy, making something that works and can be sold at scale is hard. | | |
| ▲ | uoaei 5 days ago | parent | next [-] | | That's not engineering, that's industry. It's important to distinguish the two. | | |
| ▲ | revnode 5 days ago | parent [-] | | Engineering only exists within industry. Everything else is a hobby. | | |
| ▲ | uoaei 5 days ago | parent [-] | | That's simply not true. Engineering can exist outside industry. "Stuff costs money" is not a governing aspect of these kinds of things. FOSS is the obvious counterexample to your absurdly firm stance, but so are many artistic pursuits that use engineering techniques and principles, etc. | | |
| ▲ | revnode 4 days ago | parent [-] | | Industry includes FOSS and artistic endeavors, anything that’s done professionally. My intent was to exclude research efforts, which is fundamentally different from engineering, which is a practical concern and not a “get it to just work” concern. | | |
| ▲ | uoaei 4 days ago | parent [-] | | That's an interesting question, the question of whether engineering per se is strictly pragmatic. I personally think drawing a hard line between research and engineering is a misstep and relies too heavily on a bureaucratic kind of metaphysics. |
|
|
|
| |
| ▲ | waldarbeiter 5 days ago | parent | prev [-] | | If it were easy, there would already be a car that costs a few million, that few can afford, but that has solved AD. But there isn't. | |
| ▲ | revnode 5 days ago | parent [-] | | There is no market for such a thing. At that price point, you get a personal chauffeur. That’s what rich people do, and a chauffeur can do things that a self-driving system never can. | |
| ▲ | tialaramex 5 days ago | parent [-] | | And the rich people who don't want a chauffeur like driving the car. They will buy a $10M car no problem, but they want driving that car to be fun because that's what they were paying for. They don't want you to make the driving more automatic and less interesting. |
|
|
|
|
|
| |
| ▲ | hudon 5 days ago | parent | prev | next [-] | | > they're not strictly necessary. We know this because humans can drive without a LIDAR and propellers on a plane are not strictly necessary because birds can fly without them? The history of machines show that while nature can sometimes inspire the _what_ of the machine, it is a very bad source of inspiration for the _how_. | | |
| ▲ | ethbr1 5 days ago | parent [-] | | Turns out intelligent design is quicker than evolutionary algorithms. ;) |
| |
| ▲ | goalieca 5 days ago | parent | prev | next [-] | | > The cold hard truth is that LIDARs are a crutch, they're not strictly necessary. We know this because humans can drive without a LIDAR, however they are a super useful crutch. Crutch for what? AI does not have human intelligence yet and let’s stop pretending it does. There is no shame in that as the word crutch implies. | | |
| ▲ | spot5010 5 days ago | parent | next [-] | | I've never understood the argument against lidars (except cost, but even that you can argue can come down). If a sensor provides additional data, why not use it? Sure, humans can drive without lidars, but why limit the AI to using human-like sensors? Why even call it a crutch? IMO it's an advantage over human sensors. | |
| ▲ | bayindirh 5 days ago | parent | next [-] | | > Sure, humans can drive without LIDARs... That's because our stereoscopic vision has infinitely more dynamic range, focusing speed and processing power w.r.t. a computer vision system. Periphery vision is very good at detecting movement, and central view can process tremendous amount of visual data without even trying. Even a state of the art professional action camera system can't rival our eyes in any of these categories. LIDARs and RADARs are useful and shall be present in any car. This is the top reason I'm not considering a Tesla. Brain dead insistence on cameras with small sensors only. | | |
| ▲ | iknowstuff 5 days ago | parent [-] | | Their cams have better dynamic range than your eyes, given they can just run multi-exposure while you have to squint in sunlight. Focal point is effectively infinite for driving. You’re not considering them even though they have the best ADAS on the market, lmao, suit yourself https://m.youtube.com/watch?v=2V5Oqg15VpQ |
| |
| ▲ | IgorPartola 5 days ago | parent | prev [-] | | I don’t work in this field so take the grain of salt first. Quality of additional data matters. How often does a particular sensor give you false positives and false negatives? What do you do when sensor A contradicts sensor B? “3.6 roentgen, not great, not terrible.” | | |
| ▲ | giveita 5 days ago | parent [-] | | You can say that about human hearing and balance. What if they conflict with visual? We are good at figuring it out. | | |
| ▲ | ben_w 5 days ago | parent | next [-] | | We throw up, an evolved response because that conflict is a symptom of poisonous plants messing with us. | |
| ▲ | IgorPartola 5 days ago | parent | prev [-] | | Humans can be confused in a number of ways. So can AI. The difference is that we know pretty well how humans get confused. AI gets confused in novel and interesting ways. | | |
| ▲ | giveita 5 days ago | parent [-] | | Does removing a sense help in that regard (for car driving?). Probably comes down to lidar (and Ai) failure modes. | | |
| ▲ | IgorPartola 5 days ago | parent [-] | | I suspect it helps engineering the system. If you have 30 different sensors, how do you design a system that accounts for seemingly random combinations of them disagreeing with an observation in real time, if a priori you don’t know the weight of each observation in that particular situation? For humans, for example, you know that in most cases seeing something in a car is more important than smelling something. But what if one of your eyes sees a pedestrian and the other sees the shadow of a bird? Also don’t forget that as a human you can move your head any which way, and also draw on your past experience driving in that area. “There is always an old man crossing the road at this intersection. There is a school nearby so there might be kids here at 3pm.” That stuff is not as accessible to a LIDAR.
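One textbook answer to the weighting question is to have each sensor report an uncertainty along with its value and fuse by inverse variance; a minimal sketch, with made-up noise figures purely for illustration:

```python
def fuse(measurements):
    """Inverse-variance weighted fusion of scalar estimates.

    measurements: list of (value, variance) pairs, one per sensor.
    Returns (fused_value, fused_variance); a noisier sensor
    (larger variance) automatically gets a smaller weight.
    """
    weights = [1.0 / var for _, var in measurements]
    fused_value = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    return fused_value, 1.0 / sum(weights)

# A noisy camera estimate says the obstacle is at 21 m; a precise
# LIDAR estimate says 20 m. Variances (in m^2) are invented numbers.
dist, var = fuse([(21.0, 4.0), (20.0, 0.25)])
```

The fused estimate lands near the LIDAR value, and the combined variance is lower than either sensor's alone, which is one reason a second modality can help even when the first is usually right.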
|
|
|
|
| |
| ▲ | lazide 5 days ago | parent | prev [-] | | I think they meant crutch for the AI so they could pretend for investors that AGI is right around the corner haha |
| |
| ▲ | jfim 5 days ago | parent | prev | next [-] | | LIDARs have the advantage that they allow detecting solid objects that have not been detected by a vision-only system. For example, some time ago, a Tesla crashed into an overturned truck, likely because it didn't detect it as an obstacle. A system that's only based on cameras is only as good as its ability to recognize all road hazards, with no fall back if that fails. With LIDAR, the vehicle might not know what's the solid object in front of the vehicle using cameras, but it knows that it's there and should avoid running into it. | | |
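That fallback can be nearly classification-free. A toy sketch of the idea (illustrative thresholds, points assumed already transformed into the vehicle frame): if enough returns sit above the road surface inside the driving corridor, something solid is there, whatever the vision stack thinks it is.

```python
import numpy as np

def obstacle_ahead(points, corridor_halfwidth=1.5, max_range=40.0,
                   min_height=0.3, min_hits=5):
    """Flag any solid cluster of returns in the driving corridor.

    points: (N, 3) LIDAR returns in the vehicle frame
            (x forward, y left, z up, ground near z = 0).
    All thresholds are illustrative, not tuned values.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    hits = (x > 0) & (x < max_range) & (np.abs(y) < corridor_halfwidth) & (z > min_height)
    return int(hits.sum()) >= min_hits

# An overturned truck appears as a wall of returns ~25 m ahead,
# even if no classifier has ever seen an overturned truck.
truck = np.column_stack([np.full(50, 25.0), np.linspace(-1, 1, 50), np.full(50, 1.0)])
```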
| ▲ | sandworm101 5 days ago | parent [-] | | Solid objects that aren't too dark or too shiny. LIDAR is very bad at detecting mirrored surfaces or non-reflective structures that absorb the particular frequency in use. The back ends of trucks hauling liquid are particularly bad. Block out the bumper/wheels, say by a slight hill, and that polished cone is invisible to LIDAR. | |
| ▲ | bayindirh 5 days ago | parent | next [-] | | Add one or a couple of RADAR(s), too. European cars use this one weird trick to enable tons of features without harming people or cars. | |
| ▲ | UltraSane 5 days ago | parent | prev [-] | | LIDAR works by measuring the time it takes for light to return, so I don't understand how an object can be too reflective. Objects that absorb the specific wavelength the LIDAR uses are an obvious problem. | |
| ▲ | sandworm101 5 days ago | parent [-] | | Too reflective, like a flat mirror, will send the light off in a random direction rather than back at the detector. Worse yet, things like double reflections can result in timing errors as some of the signal follows a longer path. You want a target that is nicely reflective but not so shiny that you get any double reflections. The ideal is a matte surface painted the same color as the laser. | |
| ▲ | UltraSane 5 days ago | parent [-] | | Ah it relies on diffuse reflections to guarantee some light returns to the sensor but specular reflections mean none is returned. This is a good example of why sensor fusion is good. |
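The round-trip arithmetic behind this exchange is simple; a sketch in which a specular (mirror-like) surface is modeled as a return that never comes back:

```python
C = 299_792_458.0  # speed of light, m/s

def lidar_range(return_time_s):
    """Range from a pulse's round-trip time.

    None models a lost return, e.g. a mirror-like surface that
    reflected the pulse away from the detector entirely.
    """
    if return_time_s is None:
        return None                    # specular surface: no echo, no range
    return C * return_time_s / 2.0     # divide by 2: light travels out and back

# A diffuse target 30 m away echoes after roughly 200 nanoseconds.
t = 2 * 30.0 / C
```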
|
|
|
| |
| ▲ | lazide 5 days ago | parent | prev | next [-] | | The big promise of autonomous self-driving was that it would be done safer than humans. The assumption was that with similar sensors (or practically worse - digital cameras score worse than eyeballs in many concrete metrics), ‘AI’ could be dramatically better than humans. At least with Tesla’s experience (and with some fudging based on things like actual fatal accident data) it isn’t clear that is actually what is possible. In fact, the systems seem to be prone to similar types of issues that human drivers are in many situations - and are incredibly, repeatedly, dumb in some situations many humans aren’t. Waymo has gone full LiDAR/RADAR/Visual, and has had a much better track record. But their systems cost so much (or at least used to), that it isn’t clear the ‘replace every driver’ vision would ever make sense. And that is before the downward pressure on the labor market started to happen post-COVID, which hurts the economics even more. The current niche of Taxis kinda makes sense - centrally maintained and capitalized Taxis with outsourced labor has been a viable model for a long time, it lets them control/restrict the operating environment (important to avoid those bad edge cases!), and lets them continue to gather more and more data to identify and address the statistical outliers. They are still targeting areas with good climates and relatively sane driving environments because even with all their models and sensors, heavy snow/rain, icy roads, etc. are still a real problem. | | |
| ▲ | tialaramex 5 days ago | parent [-] | | This whole "But Waymo can't work in bad climates" thing is very dubious. At some point it is too dangerous to be driving an automobile. "But Waymo should also be dangerous" is the wrong lesson. When the argument was Phoenix is too pleasant I could buy that. Most places aren't Phoenix. But SF and LA are both much more like a reasonable place other humans live. It rains, but not always, it's misty, but not always. Snow I do accept as a thing, lots of places humans live have some snow, these cities don't really have snow. However for ice when I watch one of those "ha, most drivers can make this turn in the ice" videos I'm not thinking "I bet Waymo wouldn't be able to do this" I'm thinking "That's a terrible idea, nobody should be attempting it". There's a big difference between "Can it drive on a road with some laying snow?" and "Can it drive on ice?". | | |
| ▲ | lazide 5 days ago | parent [-] | | You know how I can tell you haven’t actually lived in a bad climate? Both SF and LA climates are super cushy compared to say, Northern Michigan. Or most of the eastern seaboard. Or even Kansas, Wyoming, etc. in the winter. In those climates, if you don’t drive in what you’re calling ‘nobody should be attempting it’ weather, you - starve to death in your house over the winter. Because many months are just like that. Self driving has a very similar issue with the vast majority of, say, Asia. Because similarly “this is crazy, no one should be driving like this conditions” is the norm. So if it can’t keep up, it’s useless. Eastern and far Northern Europe has a lot of kinda similar stuff going on. Self driving cars are easy if you ignore the hard parts. In India, I’ve had to deal with Random Camel, missing (entire) road section that was there yesterday, 5 different cars in 3 lanes (plus 3 motorcycles) all at once, many cattle (and people) wandering in the road at day and night, and the so common it’s boring ‘people randomly going the wrong way on the road’. If you aren’t comfortable bullying other drivers sometimes to make progress or avoid a dangerous situation, you’re not getting anywhere anytime soon. All in a random mix of flooding, monsoon rain, super hot temperatures, construction zones, fog, super heavy fireworks smoke, etc. etc. Hell, even in the US I’ve had to drive through wildfires and people setting off fireworks on the road (long story, safety reasons). The last thing I would have wanted was the car freezing or refusing. Is that super safe? Not really. But life is not super safe. And a car that won’t help me live my life is useless to me. Such an AI would of course be a dangerous asshole on, say, LA roads, of course. Even more than the existing locals. | | |
| ▲ | tialaramex 5 days ago | parent [-] | | This idea that they're somehow ignoring the hard parts is very silly. The existing human drivers in San Francisco manage to kill maybe 20 or so people per year so apparently it's not so "easy" that the human drivers can do it without killing anybody. I live in the middle of a city, so, no, in terrible weather just like great weather I walk to the store, no need to "starve to death" even if conditions are too treacherous for people to sensibly drive cars. Because I'm an old man, and I used to live somewhere far from a city, I have had situations where you can't use a car to go fetch groceries because even if you don't care about safety the car can't go up an icy hill, it loses traction, gravity takes over, you slide back down (and maybe wreck the car). | | |
| ▲ | lazide 5 days ago | parent [-] | | So why do you think they’re only those cities? Because I’m hearing nothing from you that goes beyond ‘nuh uh’ so far. Because as an old man who has actually lived in all these places - and also has ridden in Waymos before and has had friends on the Waymo team in the past, your comments seem pretty ridiculous. | | |
| ▲ | tialaramex 5 days ago | parent [-] | | Unlike Phoenix the choice of SF and LA seems to me like a PR choice. SF is where lots of tech nerds live and work, LA is one half of the country's media. I'd imagine that today if you're at all interested in this stuff and live in LA or SF you have ridden Waymo whereas when it was in a Phoenix suburb that's a very niche thing to go do unless you happened to live there. A lot of the large population centres in the US are in these what you're calling "super cushy" zones where there's not much snow let alone ice. More launches in cities in Florida, Texas, California will address millions more people but won't mean more ice AFAIK. So I guess for you the most interesting announcement is probably New York, since New York certainly does have real snow. 2026 isn't that long, although I can imagine that maybe a President who thinks he's entitled to choose the Mayor of New York could mess that up. As to the "But people in some places are crazy drivers" I saw that objection from San Francisco before it was announced. "Oh they'll never try here, nobody here drives properly. Can you imagine a Waymo trying to move anywhere in the Mission?". So I don't have much time for that. |
|
|
|
|
| |
| ▲ | davidhs 5 days ago | parent | prev | next [-] | | > Yes machine learning vision systems hallucinate, but so do humans. When was the last time you had full attention on the road and a reflection of light made you super confused and suddenly drive crazy? When was the last time you experienced objects behaving erratically around you, jumping in and out of place, and perhaps morphing? | | |
| ▲ | hodgesrm 5 days ago | parent | next [-] | | Well there is strong anecdotal evidence of exactly this happening. We were somewhere around Barstow on the edge of the desert when the drugs began to take hold. I remember saying something like, “I feel a bit lightheaded; maybe you should drive . . .”And suddenly there was a terrible roar all around us and the sky was full of what looked like huge bats, all swooping and screeching and diving around the car, which was going about 100 miles an hour with the top down to Las Vegas. And a voice was screaming: “Holy Jesus! What are these goddamn animals?” [0]
[0] Thompson, Hunter S., "Fear and Loathing in Las Vegas" | |
| ▲ | fipar 5 days ago | parent [-] | | Hopefully we can expect FSD systems not to act like humans on hallucinogens though, right? :) | | |
| ▲ | hodgesrm 4 days ago | parent [-] | | One hopes so. Many of the comments assume an ideal human driver, whereas real human drivers are frequently tired, distracted, intoxicated, or just crazy. |
|
| |
| ▲ | ben_w 5 days ago | parent | prev [-] | | Many accidents are caused by low-angle light dazzle. It's part of why high beams aren't meant to be used off a dual carriageway. When was the last time you saw a paper bag blown across the street and mistook it for a cat or a fox? (Did you even notice your mistake, or do you still think it was an animal?) Do you naturally drive faster on wide streets, slower on narrow streets, because the distance to the side of the road changes your subconscious feeling of how fast you're going? Do you even know, or are you limited to your memories rather than a dashcam whose footage can be reviewed later? etc. Now don't get me wrong, AI today is, I think, worse than humans at safe driving; but I'm not sure how much of that is that AI is more hallucinate-y than us vs. how much of it is that human vision system failures are a thing we compensate for (or even actively make use of) in the design of our roads, and the AI just makes different mistakes. | |
| ▲ | davidhs 5 days ago | parent [-] | | If the internal representation of Tesla Autopilot is similar to what the UI displays, i.e. the location of the car w.r.t. everything else, and we had a human whose internal representation was similar, with everything jumping around in consciousness, we’d be insane to allow him to drive. Self-driving is probably “AI-hard” as you’d need extensive “world knowledge” and be able to reason about your environment and tolerate faulty sensors (the human eyes are super crappy, with all kinds of things that obscure them, such as veins and floaters). Also, if the Waymo UI accurately represents what it thinks is going on “out there”, it is surprisingly crappy. If your conscious experience were like that while driving, you’d think you had been drugged. | |
| ▲ | ben_w 5 days ago | parent [-] | | I agree that if Tesla's representation of what their system is seeing is accurate, it's a bad system. The human brain's vision system makes pretty much the exact opposite mistake, which is a fun trick that is often exploited by stage magicians: https://www.youtube.com/watch?v=v3iPrBrGSJM&pp And is also emphasised by driving safety awareness videos: https://www.youtube.com/watch?v=LRFMuGBP15U I wonder what we'd seem like to each other, if we could look at each other's perception as directly as we can look at an AI's perception? Most of us don't realise how much we misperceive because it doesn't feel different in the moment to perceive incorrectly; it can't feel different in the moment, because if it did, we'd notice we were misperceiving.
|
|
| |
| ▲ | ethbr1 5 days ago | parent | prev | next [-] | | > Anyway, I don't think Tesla were wrong to not use LIDAR - they had good reasons to not go down that route. They were excessively expensive and the old style spinning LIDARs were not robust. You could not have sold them on a production car in 2018. The correct move for Tesla would have been to split the difference and add LIDAR to some subset of their fleet, ideally targeted in the most difficult to debug environments. Somewhat like Google/Waymo are doing with their Jaguars. Don't LIDAR 100% of Teslas, but add it to >0%. | | |
| ▲ | ACCount37 5 days ago | parent [-] | | Tesla did, in fact, use "ground truth vehicles" - vehicles that were owned and operated by Tesla itself, and had high performance LIDARs installed. They were used to collect the data to train the "vision-only" system and verify its performance. Reportedly, they no longer use this widely - but they still have some LIDAR-equipped "scout vehicles" they send into certain environments to collect extra data. | | |
| ▲ | ethbr1 4 days ago | parent [-] | | It seems like an own goal not to sell these to some interested and targeted customers then. | | |
| ▲ | ACCount37 4 days ago | parent [-] | | Who would buy those and why? They don't use LIDARs for better self-driving somehow. They're just data harvesting units with wheels. And I don't think there's a large and underserved market for LIDARs on wheels. | | |
| ▲ | ethbr1 4 days ago | parent [-] | | > Who would buy those and why? [...] They're just data harvesting units with wheels. Tesla would subsidize them and offer them at the same price as non-LIDAR models, to select customers in target areas. And yes, you answered the second part of your own question. |
|
|
|
| |
| ▲ | marcos100 5 days ago | parent | prev | next [-] | | I want my self-driving car to be a better driver than any human. Sure we can drive without LIDAR, but just look up the amount of accidents caused by humans. | | |
| ▲ | paulryanrogers 5 days ago | parent [-] | | Humans cause one fatal accident per million miles. (They have no backup driver they can disengage to.) Now just look up how many disengagements per million miles Tesla has. | | |
| ▲ | Eisenstein 5 days ago | parent [-] | | Can you make your point without the stat, or provide the stat for us please? |
|
| |
| ▲ | lukeschlather 5 days ago | parent | prev | next [-] | | I had taken for granted that the cameras in the Tesla might be equivalent to human vision, but now I'm realizing that's probably laughable. I'm reading it's 8 cameras at 30fps and it sounds like the car's bus can only process about 36fps (so a total of 36fps, not 8x30 = 240fps theoretically available from the cameras, if they had a better memory bus.) It also seems plausible you would need at least 10,000 FPS to fully match human vision (especially taking into account that humans turn their heads which in a CV situation could be analogous to the CV algorithm having 32x30 = 960 FPS, but typically only processing 140 frames this second from cameras pointing in a specific direction. So maybe LIDAR isn't necessary but also if Tesla were actually investing in cameras with a memory bus that could approximate the speed of human vision I doubt it would be cheaper than LIDAR to get the same result. | | |
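The gap in that comparison is easy to put in numbers. All figures below are the parent comment's estimates, not published specifications:

```python
cams, cam_fps = 8, 30
available = cams * cam_fps        # frames/s the cameras produce in total
processed = 36                    # frames/s the comment estimates the computer keeps
fraction = processed / available  # share of camera output actually processed

human_equiv = 10_000              # the comment's speculative FPS figure for human vision
shortfall = human_equiv / processed
```

By these (speculative) numbers the system processes 15% of what its own cameras capture, and roughly 1/278th of the proposed human-equivalent rate, which is the commenter's point about the memory bus rather than a measured fact.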
| ▲ | tialaramex 5 days ago | parent | next [-] | | Mostly human vision is just violently different from a camera, but you could interpret that as a mix of better and worse. One of the ways it's better is that humans can sense individual photons. Not 100% reliably, but pretty well, which is why humans can see faint stars on a dark night without any special tools even though the star is thousands of light years away. On the other hand, our resolution for most of our field of vision is pretty bad - this is compensated for by changing what we're looking at: when we care about details, we can just look directly at them, and the resolution is better right in the centre of the picture. | |
| ▲ | asats 5 days ago | parent | prev [-] | | Also, human vision is backed by general intelligence, which those cameras very much are not.
| |
| ▲ | DennisP 5 days ago | parent | prev | next [-] | | You might not have the classical guardrails, but you are providing the neural net with a lot more information. Even humans are starting to find it useful to get inputs from other sensor types in their cars. I agree that Tesla may have made the right hardware decision when they started with this. It was probably a bad idea to lock themselves into that path by over-promising. | |
| ▲ | phinnaeus 5 days ago | parent | prev | next [-] | | Humans have the most sophisticated processing unit in the known universe to handle the data from the eyes. Is the brain a crutch? | | |
| ▲ | bayindirh 5 days ago | parent [-] | | At least for one marine creature, whose name I forget, the answer is yes. Said creature dissolves its brain the moment it can find a place to attach and call home. | |
| ▲ | shagie 4 days ago | parent | next [-] | | Sea squirt. One of the simplest members of Chordata. https://en.wikipedia.org/wiki/Ascidiacea | |
| ▲ | chronogamous 5 days ago | parent | prev [-] | | Can't think of the name atm either, but I'm pretty sure it only does so because it would be pointless to make any further decisions after attaching itself - it simply has no means to act on anything after that... attaching is the only thing it 'does' in its life... after that, its only job, and only ability, is to be. Chose the wrong spot to attach and call home? Brains wouldn't make a bit of difference (unless regretting its one life choice is somehow useful during this stage of just being, stuck on the spot).
|
| |
| ▲ | uoaei 5 days ago | parent | prev | next [-] | | This impulse to limit robots to the capacities, and especially the form factors, of humans has severely limited our path to progress and a more convenient life. Robots are supposed to make up for our limitations by doing things we can't do, not do the things we can already do, but differently. The latter only serves to replace humans, not augment them. | |
| ▲ | DonHopkins 5 days ago | parent | prev | next [-] | | I'd rather cars have crutches than the people they run over. | |
| ▲ | fluidcruft 5 days ago | parent | prev | next [-] | | Musk's argument "Humans don't have LIDAR, therefore LIDAR is useless" has always seemed pretty dumb to me. It ignores the possibility that LIDAR is a superhuman sensor that enables superhuman performance. And we also know you can get superhuman performance on certain tasks with insect-scale brains. Musk's just spewing stoner marketing crap that stoners think is deep, not actual engineering savvy. (and that's not even addressing that human vision is fundamentally a weird sensory mess full of strange evolutionary baggage that doesn't even make sense except for genetic legacy) | | |
| ▲ | mixedbit 5 days ago | parent [-] | | Musk's argument also ignores intelligence of humans. The worst case upper bound for reaching human level driving performance without LIDAR is for AI to reach human level intelligence. Perhaps it is not required, but until we see self-driving Teslas performing as well as humans, we won't know this. Worst case scenario is that Tesla unsupervised self-driving is as far away as AGI. |
| |
| ▲ | maxerickson 5 days ago | parent | prev | next [-] | | You could write a rant like this about 4 vs 3 wheels. | |
| ▲ | inciampati 5 days ago | parent | prev | next [-] | | I wish I had radar eyes | | |
| ▲ | UltraSane 5 days ago | parent [-] | | I want to see gamma rays, I want to hear X-rays, and I want to smell dark matter. | | |
| ▲ | RaftPeople 5 days ago | parent [-] | | "I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate". |
|
| |
| ▲ | ModernMech 5 days ago | parent | prev [-] | | > Vision systems were improving a lot back then so the idea you could have a FSD on vision alone was plausible. This was only plausible to people who had no experience in robotics, autonomy, and vision systems. Everyone knew LIDAR was the enabling technology thanks to the 2007 DARPA Urban Challenge. But the ignoramus Elon Musk decided he knew better and spent the last decade+ trashing the robotics industry. He set us back as far as safety protocols in research and development, caused the first death due to robotic cars, deployed them on public roads without the consent of the public by throwing around his massive wealth, lied consistently for a DECADE about the capabilities of these machines, defrauded customers and shareholders while becoming richer and richer, all to finally admit defeat while he still maintains that Tesla's future growth story remains in robotics. The nerve of this fucking guy. |
|
|
| ▲ | zpeti 5 days ago | parent | prev [-] |
| If a human brain can tell the difference between sun glare and an object, machine learning certainly can. It’s already better at X-rays and radiology in many cases. Everything you are talking about is just a matter of sufficient learning data and training. |
| |
| ▲ | audunw 5 days ago | parent | next [-] | | 1. A human has a lot more options to deal with things like sun glare. We can move our head, use shade, etc. And when it comes to certain aspects of dynamic range the human eye is still better than cameras. And most of all, if we lose nearly all vision we are intelligent enough to simulate the behaviour of most objects around us and react safely for the next few seconds.
2. Human intelligence is much deeper than machine vision. We can predict a lot of things that machine visions have no hope to achieve without some kind of incredibly advanced multi-modal model which is probably many years out. The most important thing is that Tesla/Elon absolutely had no way to know, and no reason to believe (other than as a way to rationalise a dangerously risky bet) that machine vision would be able to solve all these issues in time to make good on their promise. | | |
| ▲ | mcv 5 days ago | parent [-] | | Not only do we have options to deal with it, we understand that it's a vision artefact, and not something real. We understand objects don't vanish or appear out of nowhere. We understand the glare isn't reality but is obstructing our view of reality. We immediately understand we're therefore dealing with incomplete information and compensate for that, including looking for other ways to see around the obstruction or fill in the gaps. Without even thinking about it, often. |
| |
| ▲ | tsimionescu 5 days ago | parent | prev | next [-] | | The human brain is the result of literal billions of years of evolution, across trillions of organisms. The "just" in your "just a matter of sufficient learning data and training" is doing a lot of work. | | |
| ▲ | RaftPeople 5 days ago | parent [-] | | And the techniques our brain uses to generalize during learning appear to be orders of magnitude better than current ML methods. |
| |
| ▲ | jihadjihad 5 days ago | parent | prev | next [-] | | This comment is a perfect illustration of the hubris of this technology in general. | |
| ▲ | threatofrain 5 days ago | parent | prev | next [-] | | If you have cheat codes then why not just use it instead of insisting on principle that our eyes are good enough? We see Waymo use the cheat codes, oh no. We also only have binocular vision, so I guess Tesla is already okay with superhuman cheat codes. | |
| ▲ | tomasphan 5 days ago | parent | prev | next [-] | | We not only use our vision when driving but also our other senses. We can tell the sun is shining at us because it warms our skin. This all happens subconsciously.
Humans are vastly superior drivers in general, it’s just that 50% of humans are bad drivers. | |
| ▲ | stevage 5 days ago | parent | prev | next [-] | | It's a big if, no? Humans do struggle with sun glare. It'd be great if cars were much better. | |
| ▲ | 5 days ago | parent | prev [-] | | [deleted] |
|