lazide 10 hours ago
Even ‘true general intelligence’ (if we count humans as that) screws up frequently, sometimes (often?) intentionally for its own benefit - which is why accountability is such a necessary element. If someone won’t be held liable for the end result at some point, then there is no reason for them to ensure an even somewhat reasonable end result. It’s fundamental. Which is also why I suspect so many companies are pushing ‘AI’ so hard - it lets them do unreasonable things while having a smokescreen to avoid being penalized for the consequences.
hypeatei 10 hours ago | parent
> to be able to do unreasonable things while having a smokescreen

Maybe, but I feel like the calculus remains unchanged for professions that already lack accountability (police, military, C-suite, three-letter agencies, etc.); LLMs are yet another tool in their toolbox for obfuscation, but they were going to obfuscate anyway. Peons will continue to face consequences and sanctions if they screw up by using hallucinated output.