pixl97 4 hours ago
Here's the thing with AI, especially as it becomes more AGI-like: it will encompass all human behaviors. That will make the bad behaviors especially noticeable, since bad actors quickly realized AI is a force multiplier for them. This is something everyone needs to think about when discussing AI safety. Even ANI (narrow AI) applications carry significant societal risks, and those risks may not be immediately evident. Few expected the information superhighway to turn into a dopamine drip feed for advertising dollars, yet here we are.
ethbr1 an hour ago | parent
> bad actors quickly realized this is a force multiplication factor for them

You'd think we would have learned this lesson when we failed to implement email charges that netted to $0 for balanced send/receive patterns, thereby ushering in a couple of decades of spam that was eventually solved only by centralization (Google). Driving the cost of anything valuable to zero inevitably produces an infinite torrent of volume.
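The netting scheme the comment alludes to can be sketched in a few lines: charge a micro-fee per message sent and credit the same amount per message received. The fee amount, function name, and account model below are invented for illustration; this is only a sketch of the economic idea, not any real proposal's mechanics.

```python
# Hypothetical sketch of "netted email charges": each sent message costs a
# micro-fee and each received message earns the same credit, so balanced
# correspondents net to ~$0 while bulk senders pay real money.
# FEE and net_charge are assumptions, not part of any actual standard.

FEE = 0.01  # assumed per-message charge in dollars


def net_charge(sent: int, received: int, fee: float = FEE) -> float:
    """Net cost: pay `fee` per message sent, earn `fee` per message received."""
    return (sent - received) * fee


# A normal user with balanced traffic pays nothing on net.
print(net_charge(sent=100, received=100))  # 0.0

# A spammer sending a million messages and receiving almost none
# accumulates a substantial real cost.
print(net_charge(sent=1_000_000, received=50))
```

The point is that the charge is invisible to ordinary balanced senders but scales linearly with one-way bulk volume, which is exactly what spam looks like.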