| ▲ | blahgeek 14 hours ago |
| If humans are at, say, 80%, it's still a win to use AI agents to replace human workers, right? Similar to how we agree to use self-driving cars as long as they have a lower incident rate, rather than demanding absolute safety.
|
| ▲ | FatherOfCurses 12 minutes ago | parent | next [-] |
| Oh yeah, it's a blast for the human workers getting replaced. It's also amazing for an economy predicated on consumer spending when no one has disposable income anymore.
|
| ▲ | harry8 14 hours ago | parent | prev | next [-] |
| > we agree to use self-driving cars ...
| Not everyone agrees.

| ▲ | Terr_ 9 hours ago | parent | next [-] |
| I like to point out that the error-rate is not the error-shape. There are many times we can/should prefer a higher error rate with errors we can anticipate, detect, and fix, as opposed to a lower rate with errors that are unpredictable and sneaky and unfixable.
| ▲ | a3w 6 hours ago | parent | prev [-] |
| Yes, let's not have cars. Self-driving ones will just increase availability and might even increase rather than reduce resource expenditure, except on the metric of parking lots needed.
|
|
| ▲ | wellf 13 hours ago | parent | prev | next [-] |
| Hmmm. Depends. Not all unethical behavior is equal. Automated unethical behavior could be a lot more disruptive.

| ▲ | jstummbillig 12 hours ago | parent [-] |
| A large enough corporation or institution is essentially automated: its behavior is whatever the median employee will do. If you have a system to stop bad behavior, that system is itself automated and will also safeguard against bad AI behavior (which seems to work in this example too).
|
|
| ▲ | rzmmm 14 hours ago | parent | prev [-] |
| The bar is higher for AI in most cases. |