tananaev 6 hours ago

I have an open source project and started receiving a lot of security vulnerability reports in the last few months. A lot of them are extremely corner cases, but there were some legit ones. They're all fixed now. Closed source software won't receive any reports, but it will be exploited with AI. So I definitely agree with the message of this article.

lelanthran 4 hours ago | parent | next [-]

> Closed source software won't receive any reports, but it will be exploited with AI.

What makes you so sure that closed-source companies won't run those same AI scanners on their own code?

It's closed to the public, it's not closed to them!

440bx 4 hours ago | parent | next [-]

As someone who has worked on closed source software for a couple of decades: most companies won't even know about that, and of those who do, only a fraction give enough of a shit to do anything until they are caught with their pants down.

sdoering 3 hours ago | parent | next [-]

Seconded.

Having worked in quite a few agency/consultancy situations, it is far more productive to smash your head against a wall until it bleeds than to get a client to pay for security. The regular answer: "This is table stakes, we pay you for this." Combined with: "Why has velocity gone down? We don't pay you for that security or documentation crap."

There are unexploited security holes in enterprise software you could drive a boring machine through. There is a well-paid "security" (aka employee surveillance) company running Python 2.7 (no, not patched) on each and every machine their software runs on, at some of the biggest companies in this world. They just don't bother updating it, because, why should they? There is no incentive. None.

valeriozen 2 hours ago | parent [-]

Yeah, it's fundamentally an issue of asymmetric economics.

Running AI scanners internally costs money, dev time, and management buy-in to actually fix the mountain of tech debt the scanners uncover. As you said, there is no incentive for that.

But for bad actors, the cost of pointing an LLM at an exposed endpoint or a reverse-engineered binary has dropped to near zero. The attacker's tooling just got exponentially cheaper and faster, while the enterprise defender's budget remained at zero.

njyx 2 hours ago | parent | next [-]

In theory though, there is now a new way for the community to support open source: running vulnerability scans in white-hat mode, reporting and patching. That way they burn tokens for a project they love, even if they couldn't actually contribute code before.

There should be a way to donate your unused tokens each cycle to open source, like rounding up at the checkout!

lelanthran an hour ago | parent | prev [-]

Hang on, why is it costly for in-house to run AI scanners but near zero for threat actors to do the same?

I've seen multiple proprietary shops now including a routine AI scan of their code because it's so cheap, and they may as well use up unused tokens at the end of the week.

I mean, it's literally zero because they already pay for Claude Code for every developer. You can't get cheaper than that.

sevenzero 3 hours ago | parent | prev [-]

Yup, closed source software is a huge pile of shit with good marketing teams. Always was.

baileypumfleet 4 hours ago | parent | prev | next [-]

As I mentioned above, we actually do run these AI scanners on our code, but the problem is that it's simply not enough. These AI scanners, including STRIX, don't find everything. Each scanning tool finds different results from the others, so it's impossible to establish a benchmark of what's secure and what's not.

ihaveajob 4 hours ago | parent | prev | next [-]

More eyes, more chances that someone will actually use the tools. Also, the tools and how you use them are not all the same.

phendrenad2 4 hours ago | parent [-]

With enough copies of GPT printing out the same bulleted list, all bugs are

1. shallow

2. hollow

3. flat

...

cyanydeez 2 hours ago | parent | prev | next [-]

Because they're a company. Even if the bar to entry is low enough for a normal-sized American to clear, that doesn't mean they will do it, or do it in a systematic way. We know very well that nothing about AI is naturally systematic, so why would you assume it'll happen in a systematic way?

LunicLynx 4 hours ago | parent | prev | next [-]

Came here to say the same. Same tools + private. In security, two different defense mechanisms are always better than one.

bluebarbet 4 hours ago | parent [-]

Same tools A, B and C, but minus tools D, E and F, and with a smaller chance that any tools at all will even be used.

Not claiming that it's a slam dunk for open source, but the inverse does not seem correct either.

lelanthran 3 hours ago | parent | next [-]

> Same tools A, B and C, but minus tools D, E and F,

Why "minus D, E and F"? After all, once you have the harness set up, there's no additional work to add in new models, right?

bluebarbet 2 hours ago | parent [-]

The point being that there are always going to be more eyes, and more knowledge of available tools (i.e. including "D, E and F"), and more experience using them, with open source than with a single in-house dev team.

lelanthran 2 hours ago | parent [-]

There are no more "eyes" though; it's all models, and they are all converging pretty damn fast.

LunicLynx 3 hours ago | parent | prev [-]

Fair enough


Aurornis 6 hours ago | parent | prev | next [-]

> Closed source software won't receive any reports

Not from the automated repo scanners, but bug bounty programs can generate a lot of reports in my experience. AI tools are becoming a problem there, too, because amateurs are drawn to the bounties and will submit anything the AI hallucinates.

Closed source companies can (and should!) also run their own security audits rather than passively waiting for volunteers to spend their tokens on it.

switchbak 6 hours ago | parent | next [-]

Those bug bounty programs now have to compete against the market for 0-days. I suppose they always did, but it seems the economics have changed in favour of the bad actors - at least from my uninformed standpoint.

That still exists in the OSS world too, having your code out there is no panacea. I think we'll see a real swarm of security issues across the board, but I would expect the OSS world to fare better (perhaps after a painful period).

baileypumfleet 4 hours ago | parent | prev | next [-]

That's absolutely our plan. We have bug bounty programs, we have internal AI scanners, we have manual penetration testing, and a number of other things that enable us to push really hard to find this stuff internally rather than relying on either the good people in the open source community or hackers to find our vulnerabilities.

tananaev 6 hours ago | parent | prev | next [-]

Of course everyone should do their own due diligence, but my point is mostly that open source will have many more eyes and more effort put into it, both by owners and by the community.

LunicLynx 4 hours ago | parent [-]

But also tools that might not be so nice: instead of reporting security vulnerabilities, they exploit them.

There is no guarantee that being open means they will be discovered.

0x457 3 hours ago | parent | prev | next [-]

> Not from the automated repo scanners, but bug bounty programs can generate a lot of reports in my experience. AI tools are becoming a problem there, too, because amateurs are drawn to the bounties and will submit anything the AI hallucinates

So, just like pre-AI, or worse?

shakna 2 hours ago | parent [-]

Worse. [0]

[0] https://hackerone.com/reports/3595764

bearsyankees 6 hours ago | parent | prev | next [-]

+1. At this point all companies need to be continuously testing their whole stack. The dumb scanners are now a thing of the past; the second your site goes live it will get slammed by the latest AI hackers.

bmurphy1976 4 hours ago | parent | prev [-]

You don't even need a bug bounty program. In my experience there's an army of individuals running low-quality security tools, spamming every endpoint they can think of (webmaster@, support@, contact@, gdpr@, etc.) with silly non-vulnerabilities and asking for $100. They suck now, but they will get more sophisticated over time.

rd 5 hours ago | parent | prev | next [-]

I don't follow. It seems obvious that there's more to gain for attackers using AI agents to exploit open source repositories than there is for good-Samaritan defenders. In this new closed-source world (for Cal.com), there's nothing stopping them from running their own internal security agent audits, all whilst at least removing the easiest method of finding zero-days - that is, having the source open.

This really just seems like Strix marketing. Which is totally fair, but let's be reasonable here: any open-source business stands to lose way more by continuing to be open-source than it gains from relying on the benevolence of people scanning its code for it.

bigbadfeline 5 hours ago | parent | next [-]

> It seems obvious that there's more to gain for attackers using AI agents to exploit open source repositories, than there is for good samaritan defenders.

Actually the opposite is obvious: the comment you replied to talked about an abundance of good-Samaritan reports. It's strange to speculate about some nebulous "gain" when responding to facts about more than enough reports concerning open source code.

> In this new closed-source world (for Cal.com), there's nothing stopping them from running their own internal security agent audits

That's one good Samaritan for a closed source app vs many for an open source one. Open source wins again.

> any open-source business stands to lose way more

That doesn't make any sense - why would it lose more when it has many more good Samaritans working for it for free?

You seem to forget that the number of vulnerabilities in a given app is finite; an open source app will reach a secure status much faster than a closed source one, in addition to also gaining from shorter time to market.

In fact, open source will soon be much better and more capable due to new and developing technological and organizational advancements which are next to impossible under a closed source regime.

tetha 4 hours ago | parent [-]

The main drawback is that you will need to be able to patch quickly over the next 3-5 years. We are already seeing a few products getting attention from various AI-driven security efforts, and our previous stance of letting fixes "ripen" on the shelf for a while - a minor version or two - is most likely turning problematic. Especially if attackers start exploiting faster and botnets start picking up vulnerabilities faster.

But at that point, "fighting fire with fire" is still a good option. Assuming tokens are available, we could just dump the entire code base (changesets and all), the configuration that depends on it, company-internal domain knowledge, and previous upgrade failures into a folder and tell the AI to figure out upgrade risks. Bonus points if you have decent integration tests or test setups to run all of that through.

It won't be perfect, but combine that with a good tiered rollout and increasing rollout velocity becomes entirely possible.
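A minimal sketch of that "dump everything into a folder" step. Everything here is illustrative, not any specific tool's API: the directory names (`src`, `config`, `docs/upgrade-notes`) and extensions are assumed placeholders for your own repo layout.

```python
from pathlib import Path

# Hypothetical repo layout; adjust to your own project.
SOURCES = ["src", "config", "docs/upgrade-notes"]
EXTS = {".py", ".toml", ".yaml", ".md"}

def build_context(root: Path, out: Path) -> int:
    """Concatenate matching files under `root` into one bundle a coding
    agent can be pointed at; returns the number of files included."""
    count = 0
    with out.open("w", encoding="utf-8") as bundle:
        for top in SOURCES:
            base = root / top
            if not base.is_dir():
                continue
            for path in sorted(base.rglob("*")):
                if path.is_file() and path.suffix in EXTS:
                    # Header line so the model can attribute content to files.
                    bundle.write(f"\n--- {path.relative_to(root)} ---\n")
                    bundle.write(path.read_text(encoding="utf-8", errors="replace"))
                    count += 1
    return count
```

From there the bundle, plus changelogs and internal upgrade-failure notes, goes into the agent's context along with a prompt asking it to flag upgrade risks.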

It's kinda funny to me: a lot of the agentic hype seems to hugely reward good practices - cooperation, documentation, unit testing, integration testing, local test setups.

NaritaAtrox 5 hours ago | parent | prev | next [-]

Some users might be tech-savvy and have the capacity to check the codebase. If a company wants to use your platform, it can run an audit with its own staff. These are people genuinely concerned about the code, not "good Samaritans".

sureMan6 5 hours ago | parent | prev | next [-]

A new user is much more likely to scan the codebase and report vulnerabilities so they can be fixed than to illegally exploit them, since most people aren't criminals.

eddythompson80 5 hours ago | parent [-]

Exactly. Who even hacks stuff? Most people will report the issue to earn xp and level up than actually exploit it.

dgb23 5 hours ago | parent | prev [-]

Isn’t that security by obscurity?

hardsnow 6 hours ago | parent | prev | next [-]

I’ve recently set up nightly automated pentests for my open-source project. I’m considering starting to publish these reports as proof of security posture.

If the cost of security audit becomes marginal, it would seem reasonable to expect projects to publish results of such audits frequently.

There’s probably a quite hefty backlog of medium- and low-severity issues in existing projects for maintainers to suffer through first though.

Johnny_Bonk 5 hours ago | parent [-]

What do you use for the pentests? Any OSS libraries?

hardsnow 4 hours ago | parent [-]

This is a sandbox escape pentest so the only tooling needed is Claude Code and a simple prompt that asks it to follow a workflow: https://github.com/airutorg/airut/blob/main/workflows/sandbo...

giancarlostoro 4 hours ago | parent | prev | next [-]

> Closed source software won't receive any reports, but it will be exploited with AI.

This is what worries me about companies sleeping on using AI to, at a bare minimum, run code audits and evaluate their security routinely. I suspect that as models get better we're going to see companies being hacked at a level never seen before.

Right now we've seen a few different maintainers of open source packages get hacked; who knows how many companies have someone infiltrating their internal systems with the help of AI, because nobody wants to do the due diligence of having a company run security audits on their systems.

baileypumfleet 4 hours ago | parent | prev | next [-]

We actually run AI scanners on our code internally, so we get the benefit of security through obscurity while also layering on AI vulnerability scanning, manual human penetration testing, and a huge array of other defence mechanisms.

advael 3 hours ago | parent [-]

"Security through obscurity" is a term popularized entirely by the long-standing consensus, among security researchers and any expert not being paid to say otherwise, that this is a bad idea that doesn't work

devstatic 5 hours ago | parent | prev | next [-]

I agree with this too,

but with cal.com I don't think this is about security lol.

Open source will always be an advantage; you just need to decide whether it aligns with your business needs.

baq 6 hours ago | parent | prev | next [-]

given what the clankers can do unassisted and what more they can do when you give them ghidra, no software is 'closed source' anymore

embedding-shape 6 hours ago | parent | next [-]

Guess that kind of depends on your definition of "source", I personally wouldn't really agree with you here.

baq 6 hours ago | parent | next [-]

absolutely agree with you if we're talking about clean room reverse engineering; but in the context of finding vulnerabilities it's a completely different story

raddan 2 hours ago | parent | prev [-]

I mean, to an LLM, is there really any difference between the actual source and disassembled source? Informative names and comments probably help it too, but it's not clear that they're necessary.

criddell 6 hours ago | parent | prev [-]

Which models have you had good luck with when working with ghidra?

I analyze crash dumps for a Windows application. I haven't had much luck using Claude, OpenAI, or Google models when working with WinDbg. None of the models are very good at assembly and don't seem to be able to remember the details of different calling conventions or even how some of the registers are typically used. They are all pretty good at helping me navigate WinDbg though.

charcircuit 6 hours ago | parent | prev | next [-]

Assembly is still source code, so really it comes down to whether the copy protection obscures the executable code to the point where the LLM can't retrieve it on its own. And if it can't, someone motivated could give it the extra help it needs to start tracing how outside input gets handled by the application.

kirubakaran 6 hours ago | parent | prev | next [-]

Yes exactly! I'm so glad I took this route with my startup. We can't bury our heads in the sand and think the vulnerabilities don't exist just because we don't know about them.

ofjcihen 5 hours ago | parent [-]

This might be the most painfully obvious advertisement I’ve ever seen on a forum.

kirubakaran 5 hours ago | parent [-]

I didn't mean it as such, but I can see why it would seem so. I've edited the link out now. Thanks for the feedback.

cm2187 5 hours ago | parent | prev [-]

> Closed source software won't receive any reports, but it will be exploited with AI

How so? AI won't have access to the source code. In some cases AI may have access to deployed binaries (if your business deploys binaries), but I am not aware that it has the same capabilities against compiled code as against source code.

But in a SaaS world, all AI has access to is your API. It might still be up to no good, but surely you will be several orders of magnitude less exposed than with access to source code.

geoffschmidt 5 hours ago | parent | next [-]

Claude is already shockingly good at reverse engineering. Try it; it's really a step change. It has infinite patience, which was always the limiting resource in decompiling/deobfuscating most software.

evanelias 3 hours ago | parent [-]

It's SaaS though. You don't have access to the binary to decompile. There's only so much you can reverse-engineer through public URLs and APIs, especially if the SaaS uses any form of automatic detection of bot traffic.

zenmac 3 hours ago | parent [-]

Thank you. This is what the parent post was trying to say; don't know why it is down-voted. AI or not, if the API endpoints are well secured, for example using UUIDv7 identifiers instead of guessable sequential IDs, then there is little that the AI can gain from just those endpoints.
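For context on the UUIDv7 point: RFC 9562 UUIDv7 values are time-sortable but non-sequential, so resource IDs can't be enumerated the way `/users/1`, `/users/2` can. Note that the leading 48 bits are a millisecond timestamp and thus predictable; only the trailing random bits resist guessing (fully random UUIDv4 is stronger if unguessability is the sole goal). A sketch for Python versions before 3.14, which adds `uuid.uuid7()` natively:

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """Minimal RFC 9562 UUIDv7: 48-bit unix-ms timestamp, then random bits."""
    ts_ms = time.time_ns() // 1_000_000
    rand = int.from_bytes(os.urandom(10), "big")  # 80 random bits
    value = (ts_ms & ((1 << 48) - 1)) << 80       # unix_ts_ms field
    value |= 0x7 << 76                            # version = 7
    value |= (rand >> 68) << 64                   # rand_a (12 bits)
    value |= 0b10 << 62                           # variant = RFC 4122/9562
    value |= rand & ((1 << 62) - 1)               # rand_b (62 bits)
    return uuid.UUID(int=value)
```

IDs minted this way still index efficiently (they sort by creation time) while being infeasible to enumerate from outside the API.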

advael 3 hours ago | parent | prev [-]

The opposite is true. Open source barely matters to attackers, especially attacks that can be automated. It mostly enables more people (or agents, or people with agents) to notice and fix your vulnerabilities. Secrecy and other asymmetries in the information landscape disproportionately benefit attackers, and the oft-repeated corporate claim that proprietary software is more secure is summarily discounted by most cybersecurity professionals, whether in industry or academic research. Security is also seldom the real motivation for making products proprietary, but it's more PR-friendly to claim that closing your source code is for security reasons than to say it's for competitive advantage or control over your customers.