| ▲ | lelanthran 8 hours ago |
| > Closed source software won't receive any reports, but it will be exploited with AI.
What makes you so sure that closed-source companies won't run those same AI scanners on their own code? It's closed to the public, it's not closed to them!
|
| ▲ | 440bx 8 hours ago | parent | next [-] |
As someone who has worked on closed source software for a couple of decades: most companies won't even know about that, and of those that do, only a fraction give enough of a shit to do anything until they're caught with their pants down.
| |
| ▲ | sdoering 7 hours ago | parent | next [-] | | Seconded. Having worked in quite a few agency/consultancy situations, it is far more productive to smash your head against a wall until it bleeds than to get a client to pay for security. The regular answer: "This is table stakes, we pay you for this." Combined with: "Why has velocity gone down? We don't pay you for that security or documentation crap." There are unexploited security holes in enterprise software you could drive a boring machine through. There is a well-paid "security" (aka employee surveillance) company running Python 2.7 (no, not patched) on each and every machine their software runs on, at some of the biggest companies in the world. They just don't care about updating it, because why should they? There is no incentive. None. | | |
| ▲ | valeriozen 6 hours ago | parent [-] | | Yeah, it's fundamentally an issue of asymmetric economics. Running AI scanners internally costs money, dev time, and management buy-in to actually fix the mountain of tech debt the scanners uncover. As you said, there is no incentive for that. But for bad actors, the cost of pointing an LLM at an exposed endpoint or a reverse-engineered binary has dropped to near zero. The attackers' tooling just got exponentially cheaper and faster, while the enterprise defenders' budget remained at zero. | | |
| ▲ | njyx 6 hours ago | parent | next [-] | | In theory, though, there is now a new way for the community to support open source: running vulnerability scans in white-hat mode, reporting, and patching. That way they burn tokens for a project they love, even if they couldn't actually contribute code before. There should be a way to donate your unused tokens every cycle to open source, like rounding up at the checkout! | | |
| ▲ | ValentineC 2 hours ago | parent [-] | | That sounds like a great idea. I'd love to be able to contribute the remainder of my monthly AI subscriptions for something like this, especially since some of them bill and refresh their quotas by calendar month. |
| |
| ▲ | lelanthran 5 hours ago | parent | prev [-] | | Hang on, why is it costly for in-house teams to run AI scanners but near zero for threat actors to do the same? I've seen multiple proprietary shops now including a routine AI scan of their code, because it's so cheap they may as well use up unused tokens at the end of the week. I mean, it's literally zero because they already paid for CC for every developer. You can't get cheaper than that. |
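(For illustration only: a minimal sketch of what such a routine scan could look like, assuming the Anthropic Python SDK. The model name, file selection, and prompt are hypothetical placeholders, not the commenter's actual setup.)

    import pathlib
    import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

    client = anthropic.Anthropic()

    PROMPT = (
        "Review the following code for security vulnerabilities "
        "(injection, auth bypass, unsafe deserialization, hardcoded secrets). "
        "List findings briefly; reply 'no findings' if the code looks clean.\n\n"
    )

    # Scan every Python file under src/ and print the model's findings.
    for path in pathlib.Path("src").rglob("*.py"):
        reply = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder; use whatever your plan includes
            max_tokens=1024,
            messages=[{"role": "user", "content": PROMPT + path.read_text(errors="ignore")}],
        )
        print(f"=== {path} ===")
        print(reply.content[0].text)

A real setup would presumably run this as a scheduled CI job and diff findings against a known-issues list, but that bookkeeping is omitted here.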
|
| |
| ▲ | sevenzero 6 hours ago | parent | prev [-] | | Yup, closed source software is a huge pile of shit with good marketing teams. Always was. |
|
|
| ▲ | baileypumfleet 7 hours ago | parent | prev | next [-] |
As I mentioned above, we actually do run these AI scanners on our code, but the problem is that it's simply not enough. These AI scanners, including STRIX, don't find everything. Each scanning tool finds different results from the others, so it's impossible to establish a benchmark of what's secure and what's not.
| |
| ▲ | topopopo 3 hours ago | parent [-] | | I think it makes it all the more apparent that writing EAL4 code with as little design competence as possible was taking advantage of some strange scarcity economics. It's now even easier to build something with endless technical debt and security-versus-backwards-compatibility liability. But is anyone going to keep paying for things that aren't correct and to the point, once some market participants structure their agent usage toward verifiable quality and no longer carry any extra cost for it? |
|
|
| ▲ | ihaveajob 8 hours ago | parent | prev | next [-] |
| More eyes, more chances that someone will actually use the tools. Also, the tools and how you use them are not all the same. |
| |
| ▲ | phendrenad2 8 hours ago | parent [-] | | With enough copies of GPT printing out the same bulleted list, all bugs are 1. shallow 2. hollow 3. flat ... |
|
|
| ▲ | LunicLynx 8 hours ago | parent | prev | next [-] |
Came here to say the same. Same tools + private. In security, two different defense mechanisms are always better than one.
| |
| ▲ | bluebarbet 7 hours ago | parent [-] | | Same tools A, B and C, but minus tools D, E and F, and with a smaller chance that any tools at all will even be used. Not claiming that it's a slam dunk for open source, but the inverse does not seem correct either. | | |
| ▲ | lelanthran 6 hours ago | parent | next [-] | | > Same tools A, B and C, but minus tools D, E and F
Why "minus D, E and F"? After all, once you have the harness set up, there's no additional work to add in new models, right? | | |
| ▲ | bluebarbet 5 hours ago | parent [-] | | The point being that there are always going to be more eyes, and more knowledge of available tools (i.e. including "D, E and F"), and more experience using them, with open source than with a single in-house dev team. | | |
| ▲ | lelanthran 5 hours ago | parent [-] | | There are no more "eyes" though; it's all models, and they're all converging pretty damn fast. | | |
| ▲ | bluebarbet 3 hours ago | parent [-] | | If that's true, then logically it would be sufficient to run this "master model" once before any code release for the level playing field to be restored. After all, even open-source software is private until it is released. |
|
|
| |
| ▲ | LunicLynx 6 hours ago | parent | prev [-] | | Fair enough |
|
|
|
|
| ▲ | cyanydeez 5 hours ago | parent | prev [-] |
Because they're a company. Even if the bar to entry is wide enough for a normal-sized American to fit through, that doesn't mean they will do it, or do it in a systematic way. We know very well that nothing about AI is naturally systematic, so why would you assume it'll happen systematically?