tbrownaw 21 hours ago

No, we are not on track to create a literal god in the machine. Skynet isn't actually real. LLM systems do not have intent in the way these worries presuppose.

This is all much, much less of an existential threat than, say, nuclear-armed countries getting into military conflicts, or overworked grad students having lab accidents with pathogen research. Maybe it's as dangerous as the printing press and the wars it caused?

godelski 19 hours ago

It's a much greater existential threat. An entity with intent, abductive reasoning, and self-defined goals is more interpretable, not less. Such an entity can fill in the gap between the letter of an instruction and the intent of an instruction. It may have its own agenda, but it can bridge that gap without outside help.

But machines? They have none of that. They're optimized to make errors difficult to detect. They're optimized to trick you, as even OpenAI reports[0]. The machine is a much greater existential threat than the overworked grad student because I can at least observe the student getting flustered and making mistakes, and get far more warning, by the very nature of overworking them. You can see it on their face. But the machine? It'll happily chug along.

Have you never written a program that ended up doing something you didn't intend?

Have you never dropped tables? Deleted files? Destroyed things you never intended to?

The machine doesn't second-guess you; it just says "okay :)"
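
To make that concrete, here's a minimal sketch in Python (the table, data, and helper are hypothetical, and an in-memory sqlite database stands in for any real one): a destructive statement runs exactly as written, with no confirmation prompt and no model of what you actually meant.

    import sqlite3

    # The "okay :)" problem in miniature: the machine executes the letter
    # of an instruction with no notion of its intent.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")

    def drop_table(conn, name):
        # No second-guessing, no "are you sure?" -- the statement just runs.
        conn.execute(f"DROP TABLE {name}")

    drop_table(conn, "users")  # one wrong argument and the data is gone
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
    print(tables)  # [] -- the machine happily chugged along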

[0] https://cdn.openai.com/pdf/34f2ada6-870f-4c26-9790-fd8def563...

hollerith 19 hours ago

We disagree on that.