ceejayoz 13 hours ago
https://www.courthousenews.com/in-sexual-assault-trial-uber-...

> When matching drivers with riders, Uber uses an AI-powered safety feature called the safety ride assistant dispatch, or SRAD. SRAD gives potential driver-rider matches a score from 0 to 1 based on potential for sexual assault and aims to make matches with the lowest risk. Risk factors include location and time of day, but SRAD also considers a driver’s weekend and nighttime request rate, scoring them as more risky because they may be more likely to be searching for easy victims.

> The SRAD score for Dean’s trip with Turay was 0.81, which was higher than the late-night average for the Phoenix area. Uber said it never informed Dean of its risk assessment. “We did not, nor would it be practical to provide that information to riders,” Sunny Wong, Uber’s director of applied science, said in a deposition played for the jury earlier in the day.
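SRAD's internals aren't public, but the quoted description ("score each driver-rider match 0 to 1 and prefer the lowest-risk match") is a standard risk-scored dispatch pattern. Here's a minimal sketch of what that could look like; the feature names, weights, and logistic scoring function are all invented for illustration and are not Uber's actual model:

```python
# Hypothetical sketch of risk-scored matching as described in the article:
# each candidate driver-rider pairing gets a 0-1 risk score, and dispatch
# prefers the lowest-risk match. All features and weights are made up.
import math
from dataclasses import dataclass

@dataclass
class Candidate:
    driver_id: str
    is_late_night: bool            # trip requested late at night
    nighttime_request_rate: float  # fraction of driver's requests at night (0-1)
    weekend_request_rate: float    # fraction of driver's requests on weekends (0-1)

def risk_score(c: Candidate) -> float:
    """Map the (invented) risk factors to a 0-1 score via a logistic function."""
    z = (
        -2.0                                # baseline: most matches are low risk
        + 1.5 * c.is_late_night
        + 2.0 * c.nighttime_request_rate
        + 1.0 * c.weekend_request_rate
    )
    return 1.0 / (1.0 + math.exp(-z))

def pick_match(candidates: list[Candidate]) -> Candidate:
    """Dispatch the candidate with the lowest risk score."""
    return min(candidates, key=risk_score)

candidates = [
    Candidate("d1", is_late_night=True, nighttime_request_rate=0.9, weekend_request_rate=0.8),
    Candidate("d2", is_late_night=True, nighttime_request_rate=0.2, weekend_request_rate=0.3),
]
best = pick_match(candidates)
print(best.driver_id, round(risk_score(best), 2))
```

The logistic squash is just one way to keep the score in [0, 1]; the real system presumably uses a trained model over many more features, but the dispatch logic (score every candidate pairing, take the minimum) is the part the article actually describes.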
mc32 10 hours ago
That’s pretty wild. It makes sense that, if you can, you’d try to minimize rider (and driver) risk, and I agree I can’t see this score ever being shown to a rider or driver (that would expose them to other risks). That said, the system is a double-edged sword: it lets you provide safer service to your customers, but it paradoxically also exposes you to a new kind of risk. Even if, on the whole, the system prevents many instances of violence, the cases where it flags a match as risky and violence still occurs can come back to bite you. The implication is that if Uber didn’t have this system, more violence would happen on its services, but because it wouldn’t be measuring a driver risk score, it wouldn’t be as liable.