godelski 5 days ago:
I should have taken more care to link an article, but I was trying to link something clearer. Mind you, everything Waymo does is under research, so let's look at something newer to see if it's been incorporated.
They also go on to explain model distillation. Read the whole thing, it's not long: https://waymo.com/blog/2025/12/demonstrably-safe-ai-for-auto... But you could also read the actual research paper, or any of their papers. All of them in the last year are focused on multimodality and a generalist model for a reason, which I think is not hard to figure out since they spell it out.
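For anyone unfamiliar with the term: distillation just means a large, slow "teacher" model's soft predictions are used to train a smaller "student" cheap enough to run onboard. Here's a generic sketch of that idea in PyTorch (my own illustration of the standard recipe, not anything taken from Waymo's code or paper):

```python
# Generic knowledge-distillation sketch: a frozen teacher's tempered softmax
# supervises a small student, blended with the usual hard-label loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Distiller(nn.Module):
    def __init__(self, teacher: nn.Module, student: nn.Module,
                 temperature: float = 2.0, alpha: float = 0.5):
        super().__init__()
        self.teacher, self.student = teacher, student
        self.T, self.alpha = temperature, alpha

    def loss(self, x, labels):
        with torch.no_grad():              # teacher is frozen during distillation
            t_logits = self.teacher(x)
        s_logits = self.student(x)
        # Soft-target term: match the teacher's tempered distribution.
        kd = F.kl_div(
            F.log_softmax(s_logits / self.T, dim=-1),
            F.softmax(t_logits / self.T, dim=-1),
            reduction="batchmean",
        ) * (self.T ** 2)
        # Hard-target term: still learn from ground-truth labels.
        ce = F.cross_entropy(s_logits, labels)
        return self.alpha * kd + (1 - self.alpha) * ce

# Toy usage: think "teacher = big multimodal model, student = small onboard net".
teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
student = nn.Linear(16, 4)
x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
print(Distiller(teacher, student).loss(x, y))
```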
theamk 5 days ago (parent):
Note this is not end-to-end: all that VLM can do is "contribute a semantic signal". So put up a fake "detour" sign, so the vehicle thinks it's a detour and starts to follow it? Possible. But humans can be fooled like this too. Put up a "proceed" sign so the car runs over a pedestrian, like that article proposes? Get the car to hit a wall? Not going to happen.
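To make the point concrete, here is a toy sketch of that split (all names invented by me, not Waymo's actual architecture): the VLM's semantic signal only re-ranks trajectories that have already passed hard safety checks from geometric perception, so a spoofed sign can bias the route but can't unlock a colliding plan.

```python
# Hypothetical planner sketch: hard constraints first (not influenced by the
# VLM), then the semantic signal only biases the choice among safe options.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SemanticSignal:              # what the VLM might report
    detour_ahead: bool = False
    sign_says_proceed: bool = False  # could be a spoofed sign

@dataclass
class Trajectory:
    hits_obstacle: bool            # from lidar/radar occupancy, not the VLM
    off_drivable_area: bool
    follows_detour: bool
    cost: float

def plan(signal: SemanticSignal,
         candidates: list[Trajectory]) -> Optional[Trajectory]:
    # 1) Hard constraints: a "proceed" sign cannot make a colliding
    #    trajectory eligible.
    safe = [t for t in candidates
            if not t.hits_obstacle and not t.off_drivable_area]
    if not safe:
        return None  # no safe option -> stop, regardless of any sign

    # 2) Soft preference: the semantic signal only re-ranks safe options,
    #    e.g. a fake detour sign can send the car on a detour (like a human),
    #    but nothing worse.
    def score(t: Trajectory) -> float:
        bonus = -1.0 if (signal.detour_ahead and t.follows_detour) else 0.0
        return t.cost + bonus

    return min(safe, key=score)

# A spoofed "proceed" sign still never selects the colliding trajectory.
signal = SemanticSignal(sign_says_proceed=True)
candidates = [
    Trajectory(hits_obstacle=True,  off_drivable_area=False, follows_detour=False, cost=0.0),
    Trajectory(hits_obstacle=False, off_drivable_area=False, follows_detour=True,  cost=1.0),
]
print(plan(signal, candidates))
```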