I think the key point of disagreement here is whether the output of an appropriately prompted LLM can ever be considered an “act of kindness”.
At least in this case, it’s indeed quite Orwellian.