tfehring | 6 hours ago
(Disclosure: I'm a former OpenAI employee and current shareholder.)

I have two qualms with this deal. First, Sam's tweet [0] reads as if the deal does not disallow autonomous weapons, but rather requires "human responsibility" for them. I don't think that's much of an assurance at all - obviously at some level a human must be responsible, but the wording is vague enough that I worry the responsible human could be very far out of the loop.

Second, Jeremy Lewin's tweet [1] indicates that the definitions of these guardrails are now maintained by DoW, not OpenAI. I'm currently unclear on those definitions and on the process for changing them. I worry that e.g. "mass surveillance" may be defined too narrowly for that limitation to be compatible with democratic values, or that DoW could unilaterally make it that narrow in the future. Evidently Anthropic insisted on defining these limits itself, and that was a sticking point.

Of course, it's possible that OpenAI leadership thoughtfully considered both of these points and that there are reasonable explanations for each. That's not clear from anything I've seen so far, but things are moving quickly, so that may change in the coming days.

[0] https://x.com/sama/status/2027578652477821175

[1] https://x.com/UnderSecretaryF/status/2027594072811098230
syllogism | an hour ago
I don't understand how any sort of deal is defensible in the circumstances.

Government: "Anthropic, let us do whatever we want."

Anthropic: "We have some minimal conditions."

Government: "OpenAI, if we blast Anthropic into the sun, what sort of deal can we get?"

OpenAI: "Uh, well, I guess I should ask for those conditions."

Government: *blasts Anthropic into the sun* "Sure, whatever, those conditions are okay... for now."

By taking the deal with the DoW, OpenAI accepts that they can be treated the same way the government just treated Anthropic. Does it really matter what they've agreed to?
spondyl | 4 hours ago
Jeremy Lewin's tweet indicated that "all lawful use" is the term that seems to be the particular sticking point.

While I don't live in the US, I could imagine the US government arguing that the third-party doctrine [0] means that aggregation and bulk analysis of, say, phone record metadata is "lawful use" in that it isn't /technically/ unlawful, even though it would be unethical.

Another avenue might be purchasing data from ad brokers for mass analysis with LLMs, which Byron Tau wrote about in Means of Control [1].

[0] https://en.wikipedia.org/wiki/Third-party_doctrine

[1] https://www.penguinrandomhouse.com/books/706321/means-of-con...