lmeyerov 3 hours ago
We have been getting increasingly hit by this. We do defense, not offense, and AI refusals to run defense prompts have been going up noticeably. Historically, tasks only got randomly rejected when we were doing disaster-management AI, so this is a surprising shift toward refusing to function reliably for basic IT. Relatedly, they outsourced TAP verification to a terrible vendor and their internal support process to AI, so we are now in fairly busted support email threads with both, with no humans in sight. This all feels like an unserious cybersecurity partner.
intended 3 hours ago | parent
They are selling an impossible product. If you make an LLM safer, you shift the weights against defensive actions as well. There is no way to assign weights that suppress offensive use without also suppressing defensive use.