EagnaIonat 12 hours ago

> The whole notion of "AI safety regulations" is so silly and misguided.

Here are a few real-world AI issues that have already happened due to the lack of AI safety:

- In the US, black people were flagged "high risk" for parole, while white people living in farmland areas were flagged "low risk" regardless of their crime.
- Being denied ICU admission because you are diabetic. (Thankfully that one never went into production.)
- Having your resume rejected because you are a woman.
- Having photos of black people classified as "Gorilla". (Google couldn't fix it at the time and just removed the classification.)
- Radicalizing users by promoting extreme content for engagement.
- Denying prestigious scholarships to black people who live in black neighbourhoods.
- Helping someone who is clearly suicidal to commit suicide: explaining how to end their life and writing the suicide note for them.

... and the list is huge!
nradov 6 hours ago

None of those are specifically "AI" issues. The technology used is irrelevant: in most cases you could cause the same bias problems with a simple linear regression model. Suicide techniques and notes are already widely available.
mx7zysuj4xew 9 hours ago

These issues are inherently some of the uglier sides of humanity. No LLM safety program can fix them, since it's holding up a mirror to society.