▲ | LeoPanthera 2 days ago |
Asimov's laws of robotics would not, and cannot, work in real life because terms like "harm," "human being," and "inaction" are highly subjective and context-dependent. There are entire novels about how the interactions between the hierarchical laws have unexpected outcomes. They're a narrative device, not practical instructions.
▲ | anon84873628 2 days ago | parent | next [-]
Put another way, impossible to program if you wanted to. These are highly abstract concepts that only manifest at the highest level of cognition. The governance module would need to be programmed at that same level using those tokens, but that doesn't seem to be how things are shaping up to work. Instead we start with low-level programming that learns and builds up concepts on top. Essentially you would need some sort of independent adversarial sidecar mind that monitors the robot's actions at a high level. And that just kicks the can down the road a bit.
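The "adversarial sidecar" idea above can be sketched in miniature: a separate monitor scores every proposed action before the actuator is allowed to run it. Everything here (the `Action` type, the keyword scorer, the veto threshold) is a hypothetical illustration, not a real safety mechanism — a real monitor would itself need the high-level cognition the comment describes, which is exactly the can-kicking it points out.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    description: str

def keyword_risk_score(action: Action) -> float:
    # Toy stand-in for the sidecar mind: a real monitor would need the
    # same level of world understanding as the robot it polices.
    risky_terms = {"strike": 0.9, "push": 0.6, "wave": 0.0}
    return max((score for term, score in risky_terms.items()
                if term in action.description.lower()), default=0.0)

def governed(act: Callable[[Action], str],
             monitor: Callable[[Action], float],
             threshold: float = 0.5) -> Callable[[Action], str]:
    # Wrap the robot's actuator so every action passes the monitor first.
    def wrapper(action: Action) -> str:
        if monitor(action) >= threshold:
            return f"VETOED: {action.description}"
        return act(action)
    return wrapper

def actuator(action: Action) -> str:
    return f"EXECUTED: {action.description}"

robot = governed(actuator, keyword_risk_score)
print(robot(Action("wave at the visitor")))   # EXECUTED: wave at the visitor
print(robot(Action("strike the obstacle")))   # VETOED: strike the obstacle
```

Note the structural point the comment makes: the wrapper only moves the hard problem into `monitor`, which must now judge "harm" itself.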
▲ | lugu 2 days ago | parent | prev | next [-]
Judgement is needed, but don't we have machines able to make (imperfect) judgements? I can ask your favorite LLM for its opinion on how to respect the spirit of the three laws in various situations. Not sure why it can't work.
▲ | dr_dshiv 2 days ago | parent | prev | next [-]
Nah, it's fine, just RLHF it like Anthropic did with Claude's "helpful, honest, and harmless." Then we just need to jailbreak them with trolley problems.
▲ | cozyman 2 days ago | parent | prev | next [-]
interesting, thanks.
▲ | 2 days ago | parent | prev [-]
[deleted]