| ▲ | eagerpace 5 hours ago |
| Is this the new “gain of function” research? |
| ▲ | saltcured 4 hours ago | parent | next [-] |
| Isn't it more like "imaginary function"? People keep imagining that you can tell an agent to police itself. |
| ▲ | bigstrat2003 3 hours ago | parent [-] |
Yep, the whole thing is retarded. You cannot trust that a non-deterministic program (i.e. an LLM) will ever do what you actually tell it to do. Letting those things loose on the command line is incredibly stupid, but people out there don't care because they think "it's the future!"
| ▲ | wojciii 2 hours ago | parent [-] |
Shhh... everyone wants AI. Just let them. The ones who don't understand technology will get burned by it. This is nothing new.
| ▲ | logicchains 5 hours ago | parent | prev [-] |
| That would be deliberately creating malicious AIs and trying to build better sandboxes for them. |
| ▲ | octopoc 4 hours ago | parent [-] |
Imagine if you could physically disconnect your country from the internet, then drop malware like this on everyone else.