aeternum 11 hours ago

Remember this one of OpenAI's principles?

> We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”

What do people think is the probability that OpenAI would ever actually do this?

margalabargala 8 hours ago | parent | next [-]

> value-aligned

If the other project were equally aligned with the value OpenAI places on consolidating power and wealth onto Sam Altman, I don't see why OpenAI wouldn't do what they say.

tanseydavid 5 hours ago | parent [-]

I think you hit the bullseye here, but they were actually hoping that you would infer this to mean: "value-aligned with humankind."

thimabi 9 hours ago | parent | prev | next [-]

Extremely high!

All those caveats — “value-aligned”, “safety-conscious”, “case-by-case agreements” — probably mean that no project will ever be “worthy” of OpenAI’s assistance.

In the unlikely event that a qualifying project does appear, then yeah, sure, it’s very probable that OpenAI would assist it :)

vessenes 8 hours ago | parent | prev | next [-]

Nearly 100%.

Let me reframe this for you — “If we find a team substantially closer to AGI than us, we would seek to merge with them.”

dbgrman 9 hours ago | parent | prev [-]

Knowing sama, that's exactly what he would do. Except the story wouldn't end with OpenAI collaborating with a competitor that's better than them: OpenAI would collaborate with them to ensure they're destroyed from the inside out, so that only OpenAI can dominate eventually. "Eventual dominance" architecture, you know.