|
| ▲ | convnet 3 hours ago | parent | next [-] |
| > its cyber capabilities are not as advanced as those of Mythos Preview (indeed, during its training we experimented with efforts to differentially reduce these capabilities) |
| I wonder if this means that it will simply refuse to answer certain types of questions, or if they actually trained it to have less knowledge about cyber security. If it's the latter, then it would also be worse at finding vulnerabilities in your own code, assuming it is willing to do that at all. |
|
| ▲ | nicce 3 hours ago | parent | prev | next [-] |
| There is no way a model can know the origin of the code. |
|
| ▲ | xlbuttplug2 4 hours ago | parent | prev | next [-] |
| May not be very effective if so. I'm assuming finding vulnerabilities in open source projects is the hard part and what you need the frontier models for. Writing an exploit given a vulnerability can probably be delegated to less scrupulous models. |
|
| ▲ | whatisthiseven 4 hours ago | parent | prev [-] |
| Currently 4.7 is suspicious of literally every line of code. It may be a bug, but it shows how much they care about end users that something with this massive an impact shipped and no one caught it before release. Good luck trying to do anything about securing your own codebase with 4.7. |