catlifeonmars a day ago
I would argue the machine is not extremely good at role-playing sentience; rather, humans are extremely quick to attribute sentience to the machine after seeing a very small amount of evidence. The model breaks down after enough interaction.
OzFreedom a day ago
I can easily believe that it breaks down, and it seems that even inducing the blackmailing mindset is hard and requires heavy priming. The problem is the law of large numbers: with many agents operating in many highly complex and poorly supervised environments, each interacting with many people over many interactions, coincidences and unlikely chains of events become more and more probable. So regarding sentience, a not-so-bright AI might mimic it better than we expect simply because it got lucky, and cause damage in the process.
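
As a rough back-of-the-envelope illustration of that point (the numbers here are assumed, not measured): if some rare behavior occurs with probability p per interaction, the chance of seeing it at least once across N independent interactions is 1 - (1 - p)^N, which climbs toward 1 surprisingly fast.

    # Sketch: probability of at least one rare event at scale.
    # p and N are illustrative assumptions, not real measurements.
    p = 1e-6  # assumed chance of the rare behavior per interaction
    for n in (1e4, 1e6, 1e7, 1e8):
        at_least_once = 1 - (1 - p) ** n
        print(f"N = {n:>12,.0f}: P(at least one occurrence) ~ {at_least_once:.4f}")

With these assumed values, a one-in-a-million behavior is near-certain to show up somewhere by the time you reach ten million interactions (P ~ 0.99995).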