EagnaIonat 12 hours ago

> The whole notion of "AI safety regulations" is so silly and misguided.

Here are a couple of real-world AI issues that have already happened due to the lack of AI safety.

- In the US, if you were black you were flagged "high risk" for parole. If you were a white person living in a farmland area, you were flagged "low risk" regardless of your crime.

- Being denied an ICU bed because you are diabetic. (Thankfully that one never went into production.)

- Having your resume rejected because you are a woman.

- Having photos of black people classified as "Gorilla". (Google couldn't fix it at the time and just removed the classification.)

- Radicalizing users by promoting extreme content for engagement.

- Denying prestigious scholarships to black people who live in black neighbourhoods.

- Helping someone who is clearly suicidal to commit suicide: explaining how to end their life and writing the suicide note for them.

... and the list is huge!

nradov 6 hours ago

None of those are specifically "AI" issues. The technology used is irrelevant. In most cases you could cause the same bias problems with a simple linear regression model or something. Suicide techniques and notes are already widely available.
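To make that point concrete, here is a minimal sketch (hypothetical data and numbers, plain numpy least squares rather than any specific system): a linear model that is never shown the protected attribute still reproduces historical bias through a correlated proxy feature.

    # Hypothetical scoring task: ordinary least squares picks up
    # historical bias via a proxy, with no "AI" involved at all.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Protected attribute (never given to the model) and a proxy that
    # correlates with it, e.g. a neighbourhood indicator.
    group = rng.integers(0, 2, n)
    proxy = (rng.random(n) < 0.2 + 0.6 * group).astype(float)

    # True merit is identical across groups, but the historical labels
    # the model trains on were biased against group 1.
    merit = rng.normal(size=n)
    historical_score = merit - 0.8 * group + rng.normal(scale=0.1, size=n)

    # Fit least squares on (merit, proxy) only -- no protected attribute.
    X = np.column_stack([merit, proxy, np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, historical_score, rcond=None)
    pred = X @ coef

    print("mean prediction, group 0:", round(pred[group == 0].mean(), 3))
    print("mean prediction, group 1:", round(pred[group == 1].mean(), 3))
    # The gap persists: the model learned the bias via the proxy feature.

Running it shows a persistent gap between the two groups' mean predictions even though merit is drawn identically for both.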

542354234235 2 hours ago

> None of those are specifically "AI" issues. The technology used is irrelevant.

I mean, just because you could kill a million people by hand doesn't mean that a pistol, an automatic weapon, or a nuclear weapon isn't an issue, just an irrelevant technology. Guns in a home make suicide more likely simply because they are a tool that allows for a split-second action. "If someone really wants to do X, they will find a way" just doesn't map onto reality.

EagnaIonat 3 hours ago

All of those are AI issues.

mx7zysuj4xew 9 hours ago

These issues are inherently some of the uglier sides of humanity. No LLM safety program can fix them, since it's holding up a mirror to society.