▲ catoc 3 hours ago

It does if the person making the statement has a track record and proven expertise on the topic - and in this case it actually may mean something to other people.

▲ shimman 3 hours ago

Yes, as we all know, unsourced, unsubstantiated statements are the best way to verify claims about engineering practices. Especially when said person has a financial stake in the outcomes of said claims. No conflict of interest here at all!

▲ tptacek 3 hours ago

I have zero financial stake in Anthropic, and more broadly my career is more threatened by LLM-assisted vulnerability research (something I do not personally do serious work on) than it is aided by it, but I understand that the first principal component of casual skepticism on HN is "must be a conflict of interest".

▲ godelski an hour ago

> but I understand that the first principal component of casual skepticism on HN is "must be a conflict of interest".

I think the first principle should be "don't trust a random person on the internet". (But if you think Tom is random, look at his profile. First link, not second.)

▲ malfist 2 hours ago

You still haven't answered why I should care that you, a stranger on the internet, believe some unsubstantiated hearsay.

▲ wtallis 2 hours ago

Take a look at https://news.ycombinator.com/leaders. The user you're suspicious of is pretty well-known in this community.

▲ godelski an hour ago

Someone's credibility cannot be determined by their point counts. Holy fuck is that not a way to evaluate someone in the slightest. Points don't matter. Instead, look at their profile... Points != creds. Creds == creds. Don't be fucking lazy and rely on points, especially when they link their identity.

▲ wtallis an hour ago

I wasn't at all saying that points = credibility. I was saying that points = not unknown. Enough people around here know who he is, and if he didn't have credibility on this topic he'd be getting downvoted instead of voted to the top.

▲ godelski an hour ago

Is that meaningfully different? If you read malfist's point as "tptacek's point isn't valuable because it's from some random person on the internet", then the problem is "random person on the internet" = "unknown credentials". In-group, out-group, notoriety, points, whatever are not the issue.

I'll put it this way: I don't give a shit about Robert Downey Jr.'s opinion on AI technology. His notoriety "means nothing to anybody". But I sure do care about Hinton's (even if I disagree with him).

malfist asked why they should care. You said points. You should have said "tptacek is known to do security work, see his profile". Done. Much more direct. Answers the actual question. Instead you pointed to points, which at best makes him "not a stranger" but still doesn't answer the question. Intended or not, "you should believe tptacek because he has a lot of points" is a reasonable interpretation of what you said.

▲ drekipus 2 hours ago [flagged]

▲ delusional 2 hours ago [flagged]

▲ dinunnob 2 hours ago [flagged]

▲ catoc 3 hours ago

A security researcher claiming that they're not skeptical about LLMs being able to do part of their job - where is the financial stake in that?

▲ dvfjsdhgfv an hour ago

It doesn't mean we have to agree: https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-arti...

▲ tptacek 11 minutes ago

Here's a fun exercise: go email the author of that blog (he's very nice) and ask how much of it he still stands by.

▲ pchristensen 3 hours ago

Nobody is right about everything, but tptacek's takes on software security are a good place to start.

▲ tptacek 3 hours ago

I'm interested in whether there's a well-known vulnerability researcher/exploit developer beating the drum that LLMs are overblown for this application. All I see is the opposite thing.

A year or so ago I arrived at the conclusion that if I was going to stay in software security, I was going to have to bring myself up to speed with LLMs. At the time I thought that was a distinctive insight, but, no, if anything, I was 6-9 months behind everybody else in my field about it. There are a lot of vuln researchers out there. Someone's gotta be making the case against. Where are they?

From what I can see, vulnerability research combines many of the attributes that make problems especially amenable to LLM loop solutions: a huge corpus of operationalizable prior art, heavy pattern dependence, simple closed loops, forward progress with dumb stimulus/response tooling, lots of search problems. Of course it works. Why would anybody think otherwise?

You can tell you're in trouble on this thread when everybody starts bringing up the curl bug bounty. I don't know if this is surprising news for people who don't keep up with vuln research, but Daniel Stenberg's curl bug bounty has never been where all the action has been at in vuln research. What, a public bug bounty attracted an overwhelming amount of slop? Quelle surprise! Bug bounties attracted slop for so long before mainstream LLMs existed that they might well have been the inspiration for slop itself.

Also, a very useful component of a mental model about vulnerability research that a lot of people seem to lack (not just about AI, but in all sorts of other settings): money buys vulnerability research outcomes. Anthropic has eighteen squijillion dollars. Obviously, they have serious vuln researchers. Vuln research outcomes are in the model cards for OpenAI and Anthropic.

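The "simple closed loops, forward progress with dumb stimulus/response tooling" framing can be sketched in miniature. Everything below is hypothetical and illustrative only: `propose_input` is a trivial stand-in for a model call, and the "target" is a toy that misbehaves on long inputs; a real harness would run an actual binary and a real agent would propose inputs from the accumulated feedback.

```python
# Minimal sketch of a closed stimulus/response search loop:
# a proposer suggests inputs, dumb tooling reports outcomes,
# and outcomes feed straight back into the next proposal.

def propose_input(history):
    """Stand-in for a model call: propose the next input from feedback."""
    if not history:
        return "A"
    last_input, _outcome = history[-1]
    # Trivial search policy: grow the input until the harness complains.
    return last_input + "A"

def harness(candidate):
    """Dumb stimulus/response tooling: run the target, report what happened."""
    # Toy target: "crashes" on any input longer than 4 bytes.
    return "crash" if len(candidate) > 4 else "ok"

def search_loop(max_steps=16):
    """Close the loop: propose, run, record, repeat until a finding."""
    history = []
    for _ in range(max_steps):
        candidate = propose_input(history)
        outcome = harness(candidate)
        history.append((candidate, outcome))
        if outcome == "crash":
            return candidate  # a finding to triage, not a confirmed vuln
    return None

print(search_loop())  # finds the crashing input "AAAAA" within a few steps
```

The point of the sketch is that nothing in the loop itself is clever; all the leverage is in the proposer, which is exactly where a model slots in.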
▲ NitpickLawyer 3 hours ago

> Daniel Stenberg's curl bug bounty has never been where all the action has been at in vuln research. What, a public bug bounty attracted an overwhelming amount of slop? Quelle surprise!

Yeah, that's just media reporting for you. As anyone who has ever administered a bug bounty programme on regular sites (h1, bugcrowd, etc.) can tell you, there was an absolute deluge of slop for years before LLMs came on the scene. It was just manual slop (by manual I mean running wapiti and copy/pasting the reports to h1).

▲ steveklabnik 3 hours ago

I used to answer security vulnerability emails for Rust. We'd regularly get reports where someone ran an automated tool and reported something that wasn't real. Like complaints about CORS settings on rust-lang.org that would let people steal cookies. The website does not use cookies.

I wonder if it's gotten actively worse these days. But the newness would be the scale, not the quality itself.

▲ tptacek 3 hours ago

I did some triage work for clients at Latacora, and I would rather deal with LLM slop than argue with another person 10 time zones away trying to convince me that something they're doing in the Chrome Inspector constitutes a zero-day. At least there's a possibility that LLM slop might contain some information. You spent tokens on it!

▲ wrs 2 hours ago

The new slop can be much harder to recognize and reject than the old "I ran XYZ web scanner on your site" slop.

▲ tptacek 2 hours ago

POCs are now so cheap that "POC||GTFO" is a perfectly reasonable bar to set on a bounty program.

▲ JumpCrisscross 2 hours ago

> I was going to have to bring myself up to speed with LLMs

What did you do beyond playing around with them?

> Of course it works. Why would anybody think otherwise?

Sam Altman is a liar. The folks pitching AI as an investment were previously flinging SPACs and crypto. (And can usually speak to anything technical about AI as competently as battery chemistry or Merkle trees.) Copilot and Siri overpromised and underdelivered. Vibe coders are mostly idiots.

The bar for believability in AI is about as high as its frontier's actual achievements.

▲ tptacek an hour ago

I still haven't worked out for myself where my career is going with respect to this stuff. I have like 30% of a prototype/POC active testing agent (basically, Burp Suite but as an agent), but I haven't had time to move it forward over the last couple months.

In the intervening time, one of the beliefs I've acquired is that the gap between effective use of models and marginal use is asking for ambitious enough tasks, and that I'm generally hamstrung by knowing just enough about anything they'd build to slow everything down. In that light, I think doing an agent to automate the kind of bugfinding Burp Suite does is probably smallball.

Many years ago, a former collaborator of mine found a bunch of video driver vulnerabilities by using QEMU as a testing and fault-injection harness. That kind of thing is more interesting to me now. I once did a project evaluating an embedded OS where the modality was "port all the interesting code from the kernel into Linux userland processes and test them directly". That kind of thing seems especially interesting to me now too.

▲ azakai 2 hours ago

Plenty of reasons to be skeptical, but we have also known that LLMs can find security vulnerabilities since at least 2024: https://projectzero.google/2024/10/from-naptime-to-big-sleep...

Some followup findings from 2025 are reported in point 1 here: https://blog.google/innovation-and-ai/technology/safety-secu...

So what Anthropic is reporting here is not unprecedented. The main thing they are claiming is an improvement in the volume of findings. I don't see a reason to be overly skeptical.

▲ jsnell an hour ago

I'm not sure the volume here is particularly different from past examples. I think the main difference is that there was no custom harness, tooling, or fine-tuning. It's just the out-of-the-box capabilities of a generally available model and a generic agent.

▲ JumpCrisscross 2 hours ago

> that means nothing to anybody else

Someone else here! Ptacek saying anything about security means a lot to this nobody. To the point that I'm now going to take this seriously, where before I couldn't see through the fluff.

▲ Uehreka an hour ago

How have you been here 12 years and not noticed where and how often the username tptacek comes up?

▲ arduanika 2 hours ago

It might mean nothing to you, but tptacek's words mean at least something to many of us here. Also, he's a friend of someone I know and trust IRL. But then again, who am I to you but yet another anon on a web forum?

▲ hiccup_socks an hour ago [dead]