ceejayoz 13 hours ago
It may surprise you, but a four week jury trial covers a few more bases than a short article can fully detail. That said, this definitely has an answer: https://www.courthousenews.com/in-sexual-assault-trial-uber-...

> When matching drivers with riders, Uber uses an AI-powered safety feature called the safety ride assistant dispatch, or SRAD. SRAD gives potential driver-rider matches a score from 0 to 1 based on potential for sexual assault and aims to make matches with the lowest risk.
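Purely as an illustration of what that quoted description amounts to (every name and detail below is invented; nothing about the real SRAD implementation is public beyond the quote), it's roughly a score-then-pick-the-minimum step:

```python
from dataclasses import dataclass

# Hypothetical sketch of the matching step the article describes: each
# candidate driver-rider pairing carries a risk score between 0 and 1,
# and the dispatcher prefers the lowest-risk pairing. All names made up.

@dataclass
class Match:
    driver_id: str
    rider_id: str
    risk_score: float  # 0.0 = lowest assessed risk, 1.0 = highest

def pick_lowest_risk(candidates: list[Match]) -> Match:
    # "aims to make matches with the lowest risk"
    return min(candidates, key=lambda m: m.risk_score)
```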
tpmoney 5 hours ago
The article also says that Uber sets various thresholds around this already, and that their system flagged this ride at a score that was "higher than the late night average". What it doesn't tell us is what the threshold is or was for Phoenix, how that threshold compares to other cities, or even how much higher the score was than that "average". Maybe their threshold for canceling a ride is 0.85 and the late night average is 0.8. A score of 0.81 would then put the driver over the late night average, as the article says, while still being under the threshold for canceling the ride.

Your email provider has systems for detecting spam and removing it from your inbox. If an email comes in and falls under the threshold for being declared spam, but is over the average spam rating for emails in your account, have they done something wrong by letting it through if it turns out to be spam? What if it wasn't spam and they removed it anyway?

Headlines that push a "they knew something, therefore they are liable" framing seem to me more likely to result in companies not building safety measurement systems at all, or at a minimum not building proactive ones, so they can avoid being dragged and blamed for an assault because they chose thresholds that didn't prevent it. And not all measurement systems are granular enough or reliable enough to be exposed to end users. Imagine they built a system that determined that when the driver is from a low income part of town and the passenger lives in a high income part of town, the chance of an assault is "higher than the late night average". How long would it be before we saw a different lawsuit alleging that Uber discriminated against minority drivers by telling affluent white passengers that their low income minority drivers were "more likely than average" to assault them?

I would hope that this verdict was reached on stronger reasoning than "they had an automated number and didn't say anything", but if it was, none of the articles so far have said what that reasoning is.
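To put my made-up numbers in concrete form (again, none of these values are Uber's actual thresholds; this is a sketch of the distinction I'm drawing, not their system):

```python
# Toy model of the "over the average, under the action threshold" scenario.
# Both constants are invented for the sake of argument.
CANCEL_THRESHOLD = 0.85    # hypothetical score at which a ride would be blocked
LATE_NIGHT_AVERAGE = 0.80  # hypothetical average score for late-night rides

def classify(score: float) -> str:
    if score >= CANCEL_THRESHOLD:
        return "cancel ride"
    if score > LATE_NIGHT_AVERAGE:
        return "above the late-night average, below the action threshold"
    return "at or below the late-night average"

print(classify(0.81))  # -> above the late-night average, below the action threshold
```

A score in that middle band is exactly the case where "higher than the late night average" is true and yet no rule was tripped.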