▲ ramoz 5 days ago
Safety needs to be integrated into the runtime such that an agent using its arms is incapable of even attempting such a destructive action. If we bet on free will, on the assumption that machines will somehow acquire human morals, and if we think safety means sorting "good" prompts from "bad" ones, we will keep being surprised by these systems, and the harm will grow as their capabilities grow. tldr; we need verifiable governance and behavioral determinism in these systems, as much as, and probably more than, we need solutions for prompt injection.
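A minimal sketch of what runtime-level enforcement could look like (hypothetical names, Python): a deny-by-default capability gate that sits between the model and its tools, so an action outside an audited allowlist, or any irreversible action, simply cannot execute, no matter what the prompt or the model's output says. This is an illustration of the idea, not a definitive design.

  from dataclasses import dataclass
  from typing import Any, Callable, Dict, Tuple

  @dataclass(frozen=True)
  class Capability:
      name: str          # e.g. "fs.read", "fs.delete", "net.post"
      reversible: bool   # destructive actions are marked irreversible

  class ToolRuntime:
      """Deny-by-default gate between the model and its tools (hypothetical API)."""

      def __init__(self, allowed: set) -> None:
          self._allowed = set(allowed)  # audited allowlist, fixed at startup
          self._tools: Dict[str, Tuple[Capability, Callable[..., Any]]] = {}

      def register(self, cap: Capability, fn: Callable[..., Any]) -> None:
          self._tools[cap.name] = (cap, fn)

      def invoke(self, name: str, **kwargs: Any) -> Any:
          if name not in self._tools:
              raise PermissionError(f"unknown tool {name!r}")
          cap, fn = self._tools[name]
          # Enforcement lives here, not in the prompt: anything outside the
          # allowlist, or anything irreversible, cannot execute at all.
          if cap.name not in self._allowed or not cap.reversible:
              raise PermissionError(f"capability {cap.name!r} is not permitted by the runtime")
          return fn(**kwargs)

  # Usage: the agent can read files but can never delete them, regardless of its output.
  runtime = ToolRuntime(allowed={"fs.read"})
  runtime.register(Capability("fs.read", reversible=True), lambda path: open(path).read())
  runtime.register(Capability("fs.delete", reversible=False), lambda path: None)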
▲ bee_rider 4 days ago | parent
The evil behavior of taking all my stuff outside… now we'll have a robot helper that can't help us move to another house.