nsoonhui 3 hours ago

One thing I’m curious about is this: Ilya Sutskever wants to build Safe Superintelligence, yet he keeps his company and its research highly secretive.

Given that building Safe Superintelligence is extraordinarily difficult — and no single person’s ideas or talents could ever be enough — how does secrecy serve that goal?

NateEag 3 hours ago | parent | next [-]

If he (or his employees) is actually exploring genuinely new, promising approaches to AGI, keeping them secret helps avoid a breakneck arms race like the one LLM vendors are currently engaged in.

Situations like that do not make the participants any more cautious.

4b11b4 3 hours ago | parent | prev [-]

It doesn't sound like you listened to the interview. He addresses this directly: he says he may make releases that would otherwise be held back, because he believes it's important for these developments to be seen by the public.

giardini 2 hours ago | parent [-]

No reasonable person would do that! That is, if you had the key to AI, you wouldn't share it, and you would do everything possible to prevent its dissemination. Meanwhile you would use it to conquer the world! Bwahahahaaaah!