benjiro | 3 hours ago
Cameras are not the issue; they are dirt cheap. The issue is the amount of processing power needed to combine their output. You can put 360-degree cameras on your car like BYD does, and also have lidar. Then you use the lidar for the heavy lifting, and a much lighter model for basic image recognition: lane lines, speed signs, etc.

The problem for Tesla is that they need to combine the outputs of those cameras into a 3D view, which takes a LOT more processing power to judge distances. That means heavier models, more GPU power, more memory, and so on. And it still has failure modes like a low-hanging sun plus a white truck = let's ram into it because we don't see it. The more edge cases you try to filter out with a camera-only setup, the more your GPU needs grow. As a programmer, you can make something darn efficient, but it's those edge cases that really hurt your program's efficiency, and 5-10x performance drops are not uncommon. Now imagine that with large image-recognition models.

Tesla's camera-only approach works great ... under ideal conditions. The issue is the edge cases and the non-ideal conditions. Lidar handles a ton of those edge cases and removes a lot of the processing needed even in the ideal ones.
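To make the asymmetry concrete, here's a toy sketch (not any vendor's actual pipeline; the focal length, baseline, and numbers are made-up illustrations): lidar hands you range almost for free from time-of-flight, while a camera setup has to *infer* depth from pixel disparity, and computing that disparity (matching pixels between images) is where the GPU cost and the edge cases live.

```python
# Illustrative only: why camera-derived depth costs more than lidar depth.
# FOCAL_PX and BASELINE_M are assumed example values, not real sensor specs.

FOCAL_PX = 1000.0   # assumed camera focal length, in pixels
BASELINE_M = 0.5    # assumed distance between the stereo cameras, meters

def lidar_range(time_of_flight_s: float) -> float:
    """Lidar: depth falls straight out of the time-of-flight measurement."""
    SPEED_OF_LIGHT = 299_792_458.0  # m/s
    return time_of_flight_s * SPEED_OF_LIGHT / 2.0  # out and back

def stereo_depth(disparity_px: float) -> float:
    """Cameras: depth = f * B / disparity. The cheap part is this formula;
    the expensive part (not shown) is computing the disparity by matching
    every pixel across images. A white truck against a bright sky gives
    near-zero texture to match, i.e. the edge case in the comment above."""
    if disparity_px <= 0:
        raise ValueError("no match found -- exactly the failure mode at issue")
    return FOCAL_PX * BASELINE_M / disparity_px

print(round(lidar_range(2e-7), 2))   # ~30 m from one multiplication
print(round(stereo_depth(20.0), 2))  # 25.0 m, but only *after* pixel matching
```

The point of the sketch: the lidar path is a single arithmetic step per return, while the camera path hides a dense correspondence problem behind `disparity_px`, and that hidden step is what scales the compute and memory requirements.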