pyrale an hour ago
No, the point is that saying sorry because you're genuinely sorry is different from saying sorry because you expect it's what the other person wants to hear. Everybody does the latter sometimes, but doing it every time is an issue. In the case of LLMs, they are basically trained to output what they predict a human would say; there is no further meaning to the program outputting "sorry" than that. I don't think the comparison with people with psychopathy should be pushed further than this specific aspect.
BoredPositron an hour ago | parent
You provided the logical explanation for why the model acts the way it does. At the moment, it's nothing more and nothing less: expected behavior.
| ||||||||