▲ | aziaziazi a day ago |
That’s true. However, I think the story is interesting because it isn’t mimicking how real assistants behave - most would probably not post about the blackmail on the internet - but rather how such an assistant behaves in someone else’s imagination, which is often intentionally biased to hold one’s interest: books, movies, TV shows, or forum comments. As a society we risk being lured twice: first by our own subjectivity, then by an LLM we assume is "objective because it only mimics" confirming that same subjectivity.
▲ | neom a day ago |
Got me thinking about why this is true. I started with "the AI is more brave than the real assistant," went down that path, and landed on: the human assistant is probably just better able to internalize the wide-ranging fallout from an action, whereas the LLM faces no such fallout, and we have no idea how widely it considered the consequences of its actions. Does that seem right somehow?