ActorNightly 5 days ago

>What if he’s not an idiot?

Let's evaluate that claim by first defining what an idiot is, and then looking at his history: all the things he has said and all the things he has done.

I'll save you the trouble of going through that process: he is very much an idiot.

Don't confuse the ability to throw money at something and make it work through sheer cash burn with actual intelligence.

terminalshort 5 days ago | parent [-]

You don't look for smart people by looking for people who don't do dumb things. Everybody does dumb things. You look for people who have done smart things. Idiots don't do smart things.

ActorNightly 5 days ago | parent [-]

No, you evaluate someone's intelligence by comparing what they claim to reality.

For example: https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2F1...

This statement alone disqualifies him from talking about anything self-driving.

natch 4 days ago | parent [-]

I’ll just leave this here for others to look at since I assume you’ll find some excuse to dismiss it.

https://x.com/niccruzpatane/status/1960865882240115052?s=46&...

ActorNightly 4 days ago | parent [-]

I won't dismiss it, because I'm not a conservative and my arguments aren't ideological. I'll just tell you why it's wrong at a technical level.

>Extra sensors add cost to the system, and more importantly complexity. They make the software task harder, and increase the cost of all the data pipelines. They add risk and complexity to the supply chain and manufacturing.

Yeah, if you treat them as inputs to a neural network and train on them. But you don't actually have to. It's like nobody bothered to open a book and read about Kalman filtering.
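For anyone who hasn't opened that book: the scalar Kalman measurement update is a few lines. This is a toy sketch with made-up numbers, not anyone's production stack; it just shows that fusing a noisy camera depth estimate with a lidar return requires no training at all, only the measurement variances.

```python
def kalman_update(x, P, z, R):
    """One scalar Kalman measurement update: fuse estimate (x, P) with reading z of variance R."""
    K = P / (P + R)          # Kalman gain: how much to trust the new reading
    x = x + K * (z - x)      # corrected estimate
    P = (1 - K) * P          # uncertainty shrinks after every fusion step
    return x, P

# Vague prior on the distance to an obstacle, in meters (illustrative numbers).
x, P = 20.0, 100.0

# Fuse a noisy camera depth estimate (high variance) and a lidar return (low variance).
x, P = kalman_update(x, P, z=17.5, R=4.0)   # camera
x, P = kalman_update(x, P, z=15.1, R=0.04)  # lidar

print(round(x, 2), round(P, 3))  # estimate lands near the more trustworthy lidar reading
```

The point is that the filter weighs each sensor by its noise automatically; nothing here needs labeled data.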

Meanwhile, they literally have data labelers marking what a cone is in images. Sure seems efficient.

> Vision is necessary to the task (which almost all agree on) and it should also be sufficient as well.

Of course, except nobody in the space actually knows how to do vision-based self-driving correctly. Hint: humans drive well not because of vision alone, but because we map vision onto a 3D representation of the world, on which we can run simulations and figure out an optimal path. The reason we don't care what the obstacle on the road is before deciding not to drive over it is that we intrinsically understand the space that object takes up, and we compute that a physical collision would occur, which is bad.
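To make the "space an object takes up" point concrete, here's a toy occupancy check (hypothetical names, made-up geometry): you never classify the obstacle, you only test whether a planned path sweeps the vehicle's box through the volume the obstacle occupies.

```python
def boxes_overlap(a, b):
    """Axis-aligned 3D box overlap test; each box is ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
    (a0, a1), (b0, b1) = a, b
    return all(a0[i] <= b1[i] and b0[i] <= a1[i] for i in range(3))

def path_collides(path, half_extent, obstacle):
    """Sweep the vehicle's footprint box along planned positions; flag any overlap."""
    hx, hy, hz = half_extent
    for cx, cy, cz in path:
        car = ((cx - hx, cy - hy, cz - hz), (cx + hx, cy + hy, cz + hz))
        if boxes_overlap(car, obstacle):
            return True
    return False

# Unknown object occupying a volume in the lane ahead -- we never need to know *what* it is.
obstacle = ((4.0, -0.5, 0.0), (5.0, 0.5, 1.2))
straight = [(float(i), 0.0, 0.0) for i in range(10)]                        # drives through it
swerve   = [(float(i), 2.0 if 3 <= i <= 6 else 0.0, 0.0) for i in range(10)]  # goes around it

print(path_collides(straight, (1.0, 0.9, 0.8), obstacle))  # True
print(path_collides(swerve,   (1.0, 0.9, 0.8), obstacle))  # False
```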

So you are never going to make self-driving work with forward-only passes from images to trajectory planning, unless you have a massive model running on four 5090s in the car that has seen so much data that it has almost all scenarios built in.

>Sensors change as parts change or become available and unavailable. They must be maintained and software adapted to these changes.

Lmao, what a pathetic statement. If you change out a lidar sensor and actually have anything that resembles sensor fusion, it will still be way better than going without. All you have to maintain is the little itty-bitty piece of code that takes that sensor's data and maps it to 3D space; all your existing sensor fusion software will just treat it as another input and use its contribution to generate a more accurate picture.
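As a sketch of that "itty-bitty piece of code" (all names, units, and noise figures here are hypothetical): swapping a lidar model only changes a small adapter that normalizes its raw output into a common (value, variance) measurement; the fusion update itself never changes.

```python
# Hypothetical adapter layer: each sensor model needs only a tiny function that
# converts its raw output into a common (value, variance) measurement.
def old_lidar_adapter(raw):
    return raw["range_m"], 0.04             # old unit reports meters directly

def new_lidar_adapter(raw):
    return raw["range_mm"] / 1000.0, 0.01   # newer unit reports millimeters, less noise

def fuse(x, P, measurements):
    """Same scalar Kalman update regardless of which adapters produced the measurements."""
    for z, R in measurements:
        K = P / (P + R)
        x, P = x + K * (z - x), (1 - K) * P
    return x, P

x, P = fuse(20.0, 100.0, [old_lidar_adapter({"range_m": 15.2})])
# Swap the hardware: only the adapter changes, the fusion code doesn't.
x2, P2 = fuse(20.0, 100.0, [new_lidar_adapter({"range_mm": 15150})])
print(round(x, 2), round(x2, 2))
```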

I'm starting to think that these guys really just have no fucking idea how to do anything except run PyTorch.

Meanwhile, Waymo is using lidar+cameras, winning market share, and is safer and more reliable.

> Having a fleet gathering more data is more important than having more sensors.

In theory, this is true. In practice, if you were to actually try to do forward-only self-driving, you would need a complete set of data that represents crashes, and the thing is, not enough people crash IRL. It really bothers me how all these figures in the self-driving space obviously know what overfitting is (or maybe they don't, lol) but fail to recognize that they are doing exactly that.

>Having to process LIDAR and radar produces a lot of bloat in the code and data pipelines.

Same as point one, and a stupid one. Sensor fusion with statistical methods like Kalman filtering is nothing new. You integrate the sensors, and even if they are off by a bit, the Kalman filtering will take care of any noise or bias. It's been proven so many times over in the control theory literature, yet these assholes think they are smarter than everyone because they do ML. Lol.

>Andrej predicts other companies will also drop these sensors in time.

At this point, given how Tesla self-driving is doing after having a head start on everyone, I'd say you are pretty dumb if you think he knows the space.

>Mapping the world and keeping it up to date is much too expensive.

How is it expensive when you can literally do the same thing Tesla is doing and just gather data from fleet activity? The number of changes to actual roads is going to be incredibly small. The point of the map isn't to plan routes assuming the map corresponds to reality; it's to increase accuracy, because that map "sensor" also goes into sensor fusion.

There is a reason Karpathy left Tesla, btw. He wanted to get out because he saw that the current way of doing things was never gonna work. You can't do vision-only self-driving with forward-only neural nets; this should be obvious to anyone in the space right now. If Tesla had even a slight chance of winning, he would have stayed.

Of course, he is a very positive-spirited person who is never going to shit on his ex-boss and cause the stock price to drop, so he is gonna come in, do these interviews with Putin's puppet, talk tech, and peace out to do his thing.

Now go ahead and flag this comment, because all of this is probably above your head and I have no idea what I'm talking about, OBVIOUSLY.

natch 2 hours ago | parent [-]

Thanks for the actual reply! I was pleasantly surprised. You are really all in on Kalman filtering! I don’t doubt you, but I think we all should be cheering for Elon, since unlike legacy auto, he’s leading the charge to give less money to the likes of MBS. If his team can figure out FSD (maybe with your help!) that will be a very good thing.