goalieca 2 days ago

> The productivity gains from LLMs are real, but not in the "replace humans" direction.

It might be the beer talking, but every time someone comments on AI they have to say something along the lines of "LLMs do help". If I'm being really honest, the fact that everyone has to mention this in every comment and every blog post and every presentation is because, deep down, everyone isn't buying it.

protocolture 2 days ago | parent | next [-]

"Having the opposing opinion means deep down, you agree with my opinion"

Wow banger of an argument.

rl3 2 days ago | parent [-]

In GP's defense:

>>It might be the beer talking, ...

fragmede 2 days ago | parent | prev | next [-]

Or maybe they do, but they don't want to get drawn into a totally derailing side conversation about the future of humanity and global warming. It's just a tiny acknowledgement that, hey, you can throw an obfuscated blob of minified JavaScript at it and it can take it apart with way less effort from a human, which gets you to the interesting part of the RE question faster than if you had to do it by hand. By all means, don't buy it. I'm not the one getting left behind, however.

jongjong 2 days ago | parent | prev | next [-]

It does help A LOT in the case of security research in particular.

For example, I tended to avoid pen testing freelance work before AI because I didn't enjoy the tedious work of reading tons of documentation about random platforms to try to understand how they worked, and searching all over StackOverflow.

Now with LLMs, I can give it some random-looking error message and it can clearly and instantly tell me what the error means at a deep tech level, what engine was used, what version, what library/module... I can pen test platforms I have 0 familiarity with.

I just know a few platforms, engines, programming languages really well and I can use this existing knowledge to try to find parallels in other platforms I've never explored before.

The other day, on HackerOne, I found a pretty bad DoS vulnerability in a platform I'd never looked into before, using an engine and programming language I never used professionally; I found the issue within 1 hour of starting my search.

saagarjha 2 days ago | parent [-]

Did you spend another hour confirming your understanding?

jongjong 2 days ago | parent [-]

Yes, and at least 30 more minutes to write the report, with the help of an LLM. So it still required my analysis skills, but at least I was able to do it relatively fast, whereas I wouldn't even have considered doing this kind of stuff before due to the hassle associated with research.

There are multiple factors pulling me into cybersecurity.

Firstly, it requires less effort from me. Secondly, the number of vulnerabilities seems to be growing exponentially, possibly in part because of AI.

bawolff 2 days ago | parent | prev | next [-]

The article is literally about whether, and how much, AI helps. There are only two possible opinions someone can have on the subject: either it does or it doesn't.

I'm not really sure what you are expecting here.

tptacek 2 days ago | parent | prev | next [-]

Have you asked anybody who writes exploits full time whether they use LLMs?

big_youth 2 days ago | parent [-]

Yes and yes. I was surprised, walking around the DEF CON CTF room last year, that half the screens were AI chats of some sort.

raesene9 2 days ago | parent | next [-]

Heavy use in CTFs doesn't surprise me at all. CTFs often throw curveballs or weird technologies that contestants might not be familiar with. Now you can get a starting point on what's going on, or how something works, instantly from an LLM, and it's not a major problem if the LLM is wrong; you may just lose a little time.

spydum 2 days ago | parent | prev [-]

In fact: https://wilgibbs.com/blog/defcon-finals-mcp/

Which makes me think: yes, LLMs can solve some of this, but still only some. When you combine tools and agentic workflows, it's more than a research tool. I don't see a reason it should slow down.

wickedsight 2 days ago | parent | prev [-]

I feel like it's more because the detractors are very loudly against it and the promoters are very loudly exaggerating its capabilities. Meanwhile, as a bystander who is realistic and actually using it, you have moments where it's absolutely magnificent and insanely useful, and other moments where it kind of sucks, which leads to the somewhat reluctant conclusion that:

> The productivity gains from LLMs are real, but not in the "replace humans" direction.

Meanwhile the people who are explicitly on a side either say that there are no productivity gains or that nobody will have jobs in 6 months.