sanitycheck 5 hours ago
It's both, really. The companies selling us the service aren't saying "you should treat this LLM as a potentially hostile user on your machine and set up a new restricted account for it accordingly"; they're just saying "download our app! connect it to all your stuff!" We can't really blame ordinary users for doing that and getting into trouble.
perching_aix 5 hours ago | parent
There's a growing ecosystem of guardrailing methods, and these companies are contributing. Anthropic specifically puts a lot of effort into steering and characterizing their models, AFAIK. I primarily use Claude via VS Code, and it defaults to asking before taking any action. It's simply not the wild west out here that you make it out to be, nor does it need to be. These are statistical systems, so issues cannot be fully eliminated, but they can be materially mitigated, and if these systems are to provide any value, they should be. I can appreciate being upset with marketing practices, but I don't see the value in pretending to have taken them at face value when you didn't, and when you think people shouldn't.