Rover222 (4 days ago):

Waymo operates on guardrails, with far more remote human-in-the-loop help than most people seem aware of. Teslas already have similar capabilities, on a much wider range of roads, in vehicles that cost 80% less to manufacture. Both are achieving impressive results. But if you read beyond the headlines, Tesla is set up for much more success than Waymo in the next 1-2 years.
|
JumpCrisscross (4 days ago):

> Tesla is set up for much more success than Waymo in the next 1-2 years

Iff cameras-only works. The threshold for "works" is being set by Waymo, since a robotaxi that would have been acceptable on its own may not be if it's statistically less safe than an existing solution.

Waymo also sets the timeline. If cameras-only would eventually work, but Waymo scales before it does, Tesla may be forced by regulators to integrate radar and lidar. That nukes their cost advantage, at least in part, though Tesla keeps its manufacturing lead and vertical integration.

Tesla has a good hand. But Rivian's play makes sense. If cameras-only fails, they win on licensing and a temporary monopoly. If cameras-only works, they are a less threatening partner for other car companies than Waymo.

simondotau (3 days ago):

In the increasingly rare instances where Tesla's solution makes mistakes, it's almost never a failure of spatial awareness (sensing) but rather a failure of path planning (decision-making). The only thing LIDAR can do is sense depth, and if sensing depth with cameras turns out to be a solved problem, adding LIDAR doesn't help. It can't read road signs. It can't read lane lines. It can't tell whether a traffic light is red or green. And it certainly doesn't improve predictions of what human drivers will do.
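
To make the "depth from cameras" claim concrete: the classical recipe is stereo triangulation, depth = focal length × baseline / disparity. A minimal sketch with OpenCV follows; the focal length, baseline, and image paths are illustrative assumptions, and production systems reportedly use learned depth networks rather than classical block matching:

    import numpy as np
    import cv2

    # Classical stereo depth: depth = focal_px * baseline_m / disparity_px.
    # Focal length, baseline, and image paths are made-up placeholders.
    FOCAL_PX = 800.0      # focal length in pixels (assumed)
    BASELINE_M = 0.12     # distance between the two cameras in meters (assumed)

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block matching finds how far each small patch shifted between the two views.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> px

    valid = disparity > 0
    depth_m = np.zeros_like(disparity)
    depth_m[valid] = FOCAL_PX * BASELINE_M / disparity[valid]  # nearer objects shift more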

dzhiurgis (3 days ago):

Which raises the question: why did it take Tesla so long to get here? It's only since v12 that it started to look bearable for supervised use. The only answer I see is their goal of creating a global model that works in every part of the world, versus a single city, which is vastly more difficult. After all, most drivers only really know how to drive well in their own town and make a lot of mistakes when driving somewhere else.

Rover222 (3 days ago):

It was only about 2 years ago that they switched from hard-coded logic to end-to-end machine learning (video in, car controls out), and that was the beginning of the final path they're committed to now. (Building out manufacturing for the Cybercab while still finalizing the FSD software is a pretty insane risk that no other company would take.)

dzhiurgis (3 days ago):

That was the switch for controls; the machine vision was a neural net from the start.

simondotau (3 days ago):

Path planning (decision-making) is by far the most complicated part of self-driving. Waymo vehicles were making plenty of comically stupid mistakes early on, because having sufficient spatial accuracy was never the truly hard part.

KeplerBoy (3 days ago):

Sensing depth is pretty important, though, especially in scenarios where vision fails. Radar, for example, works perfectly fine in the thickest of fog.

simondotau (3 days ago):

In "scenarios where vision fails" the car should not be driving. Period. End of story. It doesn't matter how good radar is in fog, because radar alone is not enough.

KeplerBoy (3 days ago):

Too bad conditions can change instantly. You can't stop the car at an alpine tunnel exit just because there's heavy fog on the other side of the mountain.

simondotau (3 days ago):

If the fog is thick enough that you literally can't see the road, you absolutely can and should stop. Most of the time there's still some visibility through fog, so your speed should be appropriate to the conditions. As the saying goes, "don't drive faster than your headlights."
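
For the curious, "don't outdrive your sight distance" is easy to compute: reaction distance plus braking distance must fit within the visible range. A back-of-envelope sketch, with assumed reaction time and deceleration (illustration, not safety guidance):

    import math

    # "Don't drive faster than your headlights": the largest speed v such that
    # v * t_react + v^2 / (2 * decel) <= sight distance.
    T_REACT = 1.5  # s, assumed reaction time
    DECEL = 5.0    # m/s^2, assumed hard braking on wet asphalt

    def max_safe_speed(sight_m: float) -> float:
        """Max speed (m/s) whose total stopping distance fits in sight_m."""
        # Solve v^2/(2a) + v*t - s = 0 for v > 0.
        return DECEL * (-T_REACT + math.sqrt(T_REACT**2 + 2 * sight_m / DECEL))

    for sight in (20, 50, 100):  # meters of visibility
        print(f"{sight:>3} m visibility -> {max_safe_speed(sight) * 3.6:.0f} km/h max")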

ra7 (3 days ago):

> The only thing LIDAR can do is sense depth

This is absolutely false. LiDAR is used heavily in object detection; there's plenty of literature on this. Here are a few papers from Waymo:

https://waymo.com/research/streaming-object-detection-for-3-...
https://waymo.com/research/lef-late-to-early-temporal-fusion...
https://waymo.com/research/3d-human-keypoints-estimation-fro...

In fact, LiDAR is a key component for detecting pedestrian keypoints and pose estimation. See https://waymo.com/blog/2022/02/utilizing-key-point-and-pose-...

Here's an actual example of LiDAR picking up people in the dark well before the cameras do: https://www.reddit.com/r/waymo/s/U8eq8BEaGA

Not to mention LiDAR is also highly critical for simulation.

> It can't read road signs. It can't read road lines.

Also false. Here are Waymo's 5th-gen LiDAR raw point clouds, which can even read a logo on a semi truck: https://youtube.com/watch?v=COgEQuqTAug&t=11600s

It seems you're misinformed about how this sensor is used. The point clouds (plus camera and radar data) are all fed to the models for detection, which makes the detectors much more robust across lighting and weather conditions than cameras alone.
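
To illustrate what "fed to the models" means in practice, here's a generic first step of camera-LiDAR fusion (a sketch, not Waymo's actual pipeline): project the 3D points into the image plane so a detector can see appearance and metric depth together. The intrinsics and extrinsics below are placeholder values:

    import numpy as np

    # Generic camera-LiDAR fusion prep (illustrative, not Waymo's code): project
    # LiDAR points into the camera image so pixels carry both color and depth.
    K = np.array([[800.0, 0.0, 640.0],
                  [0.0, 800.0, 360.0],
                  [0.0, 0.0, 1.0]])   # placeholder camera intrinsics
    T_cam_lidar = np.eye(4)           # placeholder lidar-to-camera extrinsics

    def project_lidar_to_image(points_xyz: np.ndarray) -> np.ndarray:
        """Return (u, v, depth) rows for points landing inside a 1280x720 image."""
        homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
        cam = (T_cam_lidar @ homo.T).T[:, :3]      # lidar frame -> camera frame
        cam = cam[cam[:, 2] > 0.5]                 # keep points in front of camera
        uv = (K @ cam.T).T
        uv = uv[:, :2] / uv[:, 2:3]                # perspective divide
        in_img = (uv[:, 0] >= 0) & (uv[:, 0] < 1280) & (uv[:, 1] >= 0) & (uv[:, 1] < 720)
        return np.hstack([uv[in_img], cam[in_img, 2:3]])  # pixel coords + metric depth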

Rover222 (3 days ago):

I think "sensing depth" and "object detection" are the same thing in this debate, though.

ra7 (3 days ago):

It's just "sensing depth" the same way cameras provide just "pixels". A fused camera+radar+lidar input provides more robust coverage in a variety of conditions.
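
As a toy illustration of that robustness (hypothetical data, not any real stack): with late fusion, each sensor's detector proposes objects and overlapping proposals are merged, so a pedestrian the camera barely sees at night survives on the strength of the LiDAR return:

    from dataclasses import dataclass

    # Toy late-fusion sketch with hypothetical detections: each sensor's detector
    # votes, and overlapping votes merge so one blind modality doesn't drop an object.
    @dataclass
    class Detection:
        x: float      # meters forward
        y: float      # meters left
        score: float  # detector confidence in [0, 1]
        sensor: str

    def fuse(dets: list[Detection], radius_m: float = 1.0) -> list[Detection]:
        """Greedily cluster detections within radius_m; agreement raises confidence."""
        fused: list[Detection] = []
        for d in sorted(dets, key=lambda d: -d.score):
            for f in fused:
                if (d.x - f.x) ** 2 + (d.y - f.y) ** 2 < radius_m ** 2:
                    f.score = 1 - (1 - f.score) * (1 - d.score)  # combine independent evidence
                    break
            else:
                fused.append(Detection(d.x, d.y, d.score, d.sensor))
        return fused

    # A pedestrian at night: weak camera hit + strong LiDAR hit -> one confident object.
    print(fuse([Detection(20.1, 1.0, 0.30, "camera"), Detection(20.0, 1.1, 0.85, "lidar")]))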

simondotau (2 days ago):

You know what would be even more robust under even more conditions? Putting 80 cameras and 20 LIDAR sensors on the car. Also a dozen infrared heat sensors, a spectrophotometer, and a Doppler radar. More is surely always better. Waymo should do that.

ra7 (2 days ago):

Maybe Tesla should reduce their camera count from 8 to 2 and put them on a swivel like human eyes. Less is surely always better. I can also make "clever" arguments that are useless.

simondotau (a day ago):

Remarkable. You managed to both misunderstand my point and, in drafting your witty riposte, accidentally understand it and adopt it as your own. More isn't objectively better, and less isn't objectively better. There are only different strategies and actual real-world outcomes.

ra7 (a day ago):

> More isn't objectively better, and less isn't objectively better.

Great, you finally got there. All it took was one round of correcting misinformation about LiDAR and another round of completely useless back-and-forth about sensor count. The words you're looking for are "necessary" and "sufficient". Cameras are necessary, but not sufficient.

> There are only different strategies and actual real-world outcomes.

Thanks for making my point. Actual real-world outcomes are exactly what matter: 125M+ fully autonomous miles versus 0 fully autonomous miles.

simondotau (a day ago):

Oh, I'm sorry, I didn't realise you think you're in a battle of fanboy talking points. Never mind. Not interested.

ra7 (a day ago):

Highly ironic, considering you started this comment chain with a bunch of fanboy talking points and misinformation. Clearly, you're not interested in being factual. Bye.

ra7 (4 days ago):

Tesla literally has a human in the driver's seat for each and every mile. Their robotaxi, which operates within geofenced "guardrails", has a human in the driver's seat or passenger seat depending on its area of operation, plus active remote supervision. That's direct supervision 100% of the time. It is in no way similar in capability to Waymo.

We've been hearing that Tesla will "surpass Waymo in the next 1-2 years" for the past 8 years, yet they are nowhere close. It's always future tense with Tesla, never the current state.