cjs_ac a day ago:
At some point, a publicly-listed company will go bankrupt due to some catastrophic AI-induced fuck-up. This is a massive reputational risk for AI platforms, because ego-defensive behaviour guarantees that the people involved will make as much noise as they can about how it was all the AI's fault.
meibo a day ago:
That will never happen: AI cannot be allowed to fail, so we'll all be paying for the AI bail-out.
ramon156 a day ago:
Do you really want these kinds of companies to succeed? Let them burn, tbh.
| ||||||||||||||
gosub100 21 hours ago:
I see the inverse of that happening: every critical decision will incorporate AI somehow. If the decision turns out well, leadership takes the credit; if something terrible happens, they blame the AI. I think that's the part no one is saying out loud: AI may not do a damn useful thing, but it can serve as a free insurance policy, a surrogate to throw under the bus when SHTF.
| ||||||||||||||