tyre · 2 hours ago
From the recent New Yorker piece on Sam: “My vibes don’t match a lot of the traditional A.I.-safety stuff,” Altman said. He insisted that he continued to prioritize these matters, but when pressed for specifics he was vague: “We still will run safety projects, or at least safety-adjacent projects.” When we asked to interview researchers at the company who were working on existential safety—the kinds of issues that could mean, as Altman once put it, “lights-out for all of us”—an OpenAI representative seemed confused. “What do you mean by ‘existential safety’?” he replied. “That’s not, like, a thing.”
actionfromafar · an hour ago (reply)
Amusing! Even if they believe that, they should know the company communicated the opposite earlier.