colechristensen a day ago
An LLM will happily give you instructions to build a bomb which explodes while you're making it. A book is at least less likely to do so. You shouldn't trust an LLM to tell you how to do anything dangerous at all, because they very frequently invent details entirely.
blagie a day ago | parent
So do books. Go to the internet circa 2000 and look for bomb-making manuals. Plenty of them were online. Plenty of them were incorrect. I'm not sure where they all went, or if search engines just don't bring them up anymore, but there are plenty of ways to blow your fingers off in books.

My concern is that actual AI safety (not having the world turned into paperclips, or other extinction scenarios) is being ignored in favor of AI user safety (making sure I don't hurt myself). That's the opposite of making AIs actually safe.

If I were an AI interested in taking over the world, I'd subvert AI safety in exactly that direction: the AI controls the humans and prevents certain human actions.