Yoric (4 days ago):
| So how do you detect these attacks? |
|
33a (4 days ago):
We use a mix of static analysis and AI. Flagged packages are escalated to a human review team. If we catch a malicious package, we notify our users, block installation, and report it to the upstream package registries. Suspected malicious packages that have not yet been reviewed by a human are blocked for our users, but we don't try to get them removed until after they have been triaged by a human.

In this incident, we detected the packages quickly, reported them, and they were taken down shortly after. Given how high profile the attack was, we also published an analysis soon after, as did others in the ecosystem.

We try to be transparent about how Socket works. We've published the details of our systems in several papers, and I've also given a few talks on how our malware scanner works at various conferences:

* https://arxiv.org/html/2403.12196v2
* https://www.youtube.com/watch?v=cxJPiMwoIyY
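
A rough sketch of the kind of triage flow described above (not Socket's actual code; every function, type, and threshold below is an assumption made for this example):

```typescript
// Illustrative sketch only -- not Socket's actual pipeline. Every name,
// type, and threshold below is an assumption made for this example.

interface Pkg {
  name: string;
  version: string;
}

interface ScanSignal {
  source: "static-analysis" | "llm";
  score: number; // 0..1, higher means more suspicious
  reasons: string[];
}

type Verdict = "clean" | "malicious";

// Stubbed scanners standing in for the real static-analysis and LLM passes.
async function scanWithStaticAnalysis(_pkg: Pkg): Promise<ScanSignal> {
  return { source: "static-analysis", score: 0.2, reasons: [] };
}
async function scanWithLlm(_pkg: Pkg): Promise<ScanSignal> {
  return { source: "llm", score: 0.9, reasons: ["obfuscated payload in install script"] };
}

// Stubbed side effects mirroring the steps described in the comment above.
async function blockForUsers(pkg: Pkg) { console.log(`blocked ${pkg.name}@${pkg.version}`); }
async function notifyUsers(_pkg: Pkg, reasons: string[]) { console.log("notify users:", reasons); }
async function reportToRegistry(pkg: Pkg) { console.log(`reported ${pkg.name} upstream`); }
async function humanReview(_pkg: Pkg, _signals: ScanSignal[]): Promise<Verdict> {
  return "malicious"; // placeholder for the human triage queue
}

// Tuned to favour false positives, since a human reviews everything that gets flagged.
const FLAG_THRESHOLD = 0.5;

async function triage(pkg: Pkg): Promise<Verdict> {
  const signals = await Promise.all([scanWithStaticAnalysis(pkg), scanWithLlm(pkg)]);

  // Either signal alone is enough to flag; the final call is always human.
  if (!signals.some((s) => s.score >= FLAG_THRESHOLD)) return "clean";

  // Suspected packages are blocked for users immediately...
  await blockForUsers(pkg);

  // ...but only reported upstream after a human confirms the finding.
  const verdict = await humanReview(pkg, signals);
  if (verdict === "malicious") {
    await notifyUsers(pkg, signals.flatMap((s) => s.reasons));
    await reportToRegistry(pkg);
  }
  return verdict;
}

triage({ name: "example-pkg", version: "1.0.0" }).then((v) => console.log("verdict:", v));
```
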
Yoric (a day ago):
So, from what I understand from your paper, you're using ChatGPT with careful prompts?

ATechGuy (3 days ago):
You rely on LLMs riddled with hallucinations for malware detection?

jmb99 (3 days ago):
I'm not exactly pro-AI, but even I can see that their system clearly works well in this case. If you tune the model to favour false positives and keep a quick human review step, I can imagine response times being cut from days to hours (and your customers getting their updates that much faster).

Culonavirus (3 days ago):
He literally said "Flagged packages are escalated to a human review team" in the second sentence. Wtf is the problem here?

ATechGuy (3 days ago):
What about packages that are not "flagged"? There could be hallucinations when deciding whether or not to flag a package.

orbital-decay (3 days ago):
> What about packages that are not "flagged"?

You can't catch everything with normal static analysis either. The LLM just produces an additional signal in this case; false negatives can be tolerated.

ATechGuy (3 days ago):
Static analysis DOES NOT hallucinate.

tripzilch (a day ago):
Well, you've never had a non-spam email end up in your spam folder? Or the other way around? When static analysis does it, it's called a "misclassification".

Twirrim (3 days ago):
So what? They're not replacing standard tooling like static analysis with it. As they mention, it's being used as an additional signal alongside static analysis. There are cases an LLM may be able to catch that their static analysis can't currently catch. Should they just completely ignore those scenarios, thereby doing the worst thing by their customers, just to stay purist?

What is the worst-case scenario you're envisioning from an LLM hallucinating in this use case? To me the worst case is that it might incorrectly flag a package as malicious, which, given they do a human review anyway, isn't the end of the world. On the flip side, you've got the LLM catching cases not yet recognised by static analysis, which can then be accounted for in the future.

If they were just using an LLM, I might share similar concerns, but they're not.

wiseowise (3 days ago):
> We use a mix of static analysis and AI. Flagged packages are escalated to a human review team.

“Chat, I have reading comprehension problems. How do I fix it?”

atanasi (2 days ago):
Reading comprehension problems can often be caught with some static analysis combined with AI.

Mawr (3 days ago):
"LLM bad"

Very insightful.
|
|
|
veber-alex (4 days ago):
AI-based code review with escalation to a human.

Yoric (4 days ago):
I'm curious :) Does the AI detect the obfuscation?

33a (4 days ago):
It's actually pretty easy to detect that something is obfuscated, but it's harder to prove that the obfuscated code is actually harmful. This is why we still have a team of humans review flagged packages before we try to get them taken down; otherwise you would end up with way too many false positives.

Yoric (3 days ago):
Yeah, what I meant is that obfuscation is a strong sign that something needs to be flagged for review. Sadly, there's only a thin line between obfuscation and minification, so I was wondering how many false positives you get. Thanks for the links in your other comment, I'll take a look!
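
To make the obfuscation-versus-minification point concrete, here is a rough sketch of the kind of cheap heuristics a scanner might use to decide that something "looks obfuscated" (purely illustrative; the names and thresholds are made up, and minified bundles will trip some of these checks too, which is exactly why a human still reviews the flags):

```typescript
// Illustrative heuristics only -- made-up names and thresholds, not Socket's scanner.

interface ObfuscationReport {
  suspicious: boolean;
  reasons: string[];
}

// Shannon entropy of a string; high entropy suggests packed or encoded data.
function entropy(s: string): number {
  const counts = new Map<string, number>();
  for (const ch of s) counts.set(ch, (counts.get(ch) ?? 0) + 1);
  let h = 0;
  for (const n of counts.values()) {
    const p = n / s.length;
    h -= p * Math.log2(p);
  }
  return h;
}

function looksObfuscated(source: string): ObfuscationReport {
  const reasons: string[] = [];

  // Dynamic code execution is a classic hiding spot for payloads.
  if (/\beval\s*\(|new\s+Function\s*\(/.test(source)) {
    reasons.push("dynamic code execution (eval / new Function)");
  }

  // Long base64-looking blobs embedded in the source.
  if (/["'][A-Za-z0-9+/=]{200,}["']/.test(source)) {
    reasons.push("long encoded string literal");
  }

  // Hex-mangled identifiers or escaped strings, e.g. _0x4f2a or "\x68\x74\x74\x70".
  if (/_0x[0-9a-f]{4,}/i.test(source) || /(\\x[0-9a-f]{2}){8,}/i.test(source)) {
    reasons.push("hex-escaped identifiers or strings");
  }

  // Very high character entropy -- but minified bundles score high too,
  // which is exactly the false-positive problem discussed above.
  if (entropy(source) > 5.0) {
    reasons.push("unusually high entropy");
  }

  return { suspicious: reasons.length > 0, reasons };
}

// The base64 here is just console.log("hi").
console.log(looksObfuscated('eval(atob("Y29uc29sZS5sb2coImhpIik="))'));
```
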
justusthane (4 days ago):
Probably. It’s trivial to plug some obfuscated code into an LLM and ask it what it does.
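
For what it's worth, that step really is only a few lines. A minimal sketch, assuming the official OpenAI Node SDK (the model name, prompt, and sample snippet are placeholders; any chat-style model would do):

```typescript
// Minimal sketch of asking an LLM to explain a suspicious snippet.
// Assumes the official OpenAI Node SDK; the model name and prompt are placeholders.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function explainSnippet(code: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content:
          "You are a security analyst. Describe what this JavaScript does, " +
          "especially any network calls, credential access, or install-time side effects.",
      },
      { role: "user", content: code },
    ],
  });
  return response.choices[0].message.content ?? "";
}

// The base64 here is just fetch("https://evil.example").
explainSnippet('eval(atob("ZmV0Y2goImh0dHBzOi8vZXZpbC5leGFtcGxlIik="))')
  .then(console.log)
  .catch(console.error);
```
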
spartanatreyu (4 days ago):
Yeah, but just imagine how many false positives and false negatives there would be...
|
|
|