▲ anon373839 | 2 days ago
This is a really interesting and well-written case update/critique. I agree with the author that the judge's reliance on Anthropic's fine-print privacy policy does not satisfy the actual legal standard governing privilege. And if it did, it would raise extremely thorny issues around all of the cloud-based technology products that lawyers and clients use every day. That said, I note that the court's opinion specifically calls out Anthropic's practice of *training models on user data* as a reason the defendant could not have expected confidentiality. I do not use these cloud models for anything important precisely because they are operated by companies, like Anthropic, that are completely untrustworthy.
▲ quietsegfault | 2 days ago | parent
That was my first thought. If the test is "talking to a lawyer," and all tools not directly controlled by the lawyer fall outside the safe harbor, then no cloud legal tool is safe. What a stupid ruling.