neuralkoi 11 hours ago

> The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking.

If current LLMs are ever deployed in systems harboring the big red button, they WILL most definitely somehow press that button.

arthurcolle 11 hours ago | parent | next

The US MIC is already planning to integrate fucking Grok into military systems. No comment.

Havoc 8 hours ago | parent | next

Including classified systems. What could possibly go wrong?

blibble 8 hours ago | parent | prev

The US is going to stop the Chinese by mass-producing illegal pornography?

groby_b 11 hours ago | parent | prev

FWIW, the same is true for humans, which is why there's a whole lot of process and red tape around that button. We know how to manage risk. We can choose to do that for LLM usage, too.

If instead we believe in fantasies of a single all-knowing machine god that is 100% correct at all times, then... we really have only ourselves to blame. We might as well have just spammed that button by hand.