| ▲ | magicmicah85 7 hours ago | |
GPT is impressive, with a consistent 0% false positive rate across models, yet its detection rate tops out at 18%. Claude Opus 4.6, meanwhile, detects up to 46% of backdoors but carries a 22% false positive rate. It would be interesting to run an experiment where these models are allowed to actually attempt the exploits, though their alignment training may not permit that. Perhaps combining models could enable that kind of testing: the better models identify the issues and write up "how to verify" tests, and the "misaligned" models actually carry out the testing and report back to the better models. | ||
| ▲ | sdenton4 6 hours ago | parent [-] | |
It would be really cool if someone developed some standard language and methodology for measuring the success of binary classification tasks... Oh wait, we've had that for a hundred years: precision, recall, ROC curves. Somehow it's just entirely forgotten as soon as generative models are involved. | ||
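To make the point concrete, the detection/false-positive numbers quoted above can be dropped straight into the standard framework. This is a minimal sketch: the evaluation-set size and the 50/50 backdoor base rate are illustrative assumptions, not figures from the actual benchmark.

```python
# Derive a confusion matrix from a (TPR, FPR) operating point and
# compute the classic metrics. Base rate (50 positives, 50 negatives)
# is an assumption for illustration only.
def metrics(tpr, fpr, positives=50, negatives=50):
    tp = tpr * positives          # true positives
    fp = fpr * negatives          # false positives
    fn = positives - tp           # missed backdoors
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# GPT-style operating point: 18% detection, 0% false positives
print(metrics(0.18, 0.00))  # perfect precision, low recall

# Claude-style operating point: 46% detection, 22% false positives
print(metrics(0.46, 0.22))  # lower precision, higher recall
```

Note that which model "wins" on F1 also depends heavily on the base rate of backdoors in the evaluation set, which is exactly why the standard methodology reports the full operating point rather than a single headline number.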