| ▲ | iainctduncan 2 days ago |
| As part of my work in technical diligence, I create medium/long-form content marketing material on topics germane to PE investment in tech. In the last six months I did a series (not yet published) on the state of security in the age of gen-AI. Basically, we are entering the ransomware apocalypse. It is insane what a godsend gen-AI has been to the cybercrime sector. When all you need to do is make something good enough to fool some of the people some of the time, genAI is perfect. Things that used to work reliably - like trusting Google ads or sponsored links not to be malvertising sites - are meaningless now that gangs can trivially spin up networks of thousands of fake interacting sites and linked profiles to sneak by fraud detection. Phishing attacks are ridiculously sophisticated, combining voice, text, and video impersonation. Supply chain attacks are going to mean package managers are hand grenades. Ransomware gangs are running full-on SaaS services giving script kiddies access to big-gun material. Attacks that were previously only in reach of nation-state-sponsored actors are now available for peanuts. And all of this is going to get worse because of everyone and their dog using gen-AI to pump out huge amounts of vulnerable code. And then there is the world of prompt engineering for data exfiltration... If you are young and wanting a promising trade in tech, security would absolutely be a good choice. Shit is going to get CRAZY. |
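[The supply-chain point above (package managers as hand grenades) has a standard partial mitigation: pinning a cryptographic hash for every dependency, so a tampered artifact is rejected even if the registry or a mirror is compromised. A minimal sketch in Python - the function name and data are illustrative, not any specific package manager's API:]

```python
import hashlib

def verify_artifact(data: bytes, pinned_sha256: str) -> bool:
    """Reject any artifact whose SHA-256 digest differs from the pinned value."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256

# Hypothetical lockfile entry: the digest recorded when the release was vetted.
good_release = b"package contents v1.2.3"
pinned = hashlib.sha256(good_release).hexdigest()

print(verify_artifact(good_release, pinned))   # True: untampered artifact
print(verify_artifact(b"backdoored", pinned))  # False: any modification fails
```

[Real-world equivalents include pip's hash-checking mode and lockfile integrity fields; note that pinning only helps if the pinned version itself was vetted in the first place.]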
|
| ▲ | sifar 2 days ago | parent | next [-] |
| It amuses me that people don't realize genAI is an existential threat to the internet and everything that has been built on it. 1) One can no longer trust things out on the web. 2) One no longer needs things out on the web. For 1), I hope the defense mechanisms kick in in time to bake security into our computing culture and pervade the whole stack. |
| |
| ▲ | ellg 2 days ago | parent | next [-] | | You were trusting things on the internet before LLMs? | | |
| ▲ | pizza234 2 days ago | parent | next [-] | | Careful system administration and web browsing were relatively safe; nowadays, even upgrading the local libraries carries risk that must be assessed. | | |
| ▲ | salawat a day ago | parent [-] | | It has always been that way. Literally the only distro that encourages an update process with the requisite effort you should be putting in is Slackware. You should be reading the source code you build. You should be building from source. You should fully understand your toolchains. Binary-only distros have always been the equivalent of wearing a condom to have sex: usually fine, but technically outsourcing the hard work to someone whom, let's be real, 90% of users never get to know well enough to credibly trust to any degree. NPM and language-level package management just doubled the real estate you had to sift through. Being a responsible programmer/sysadmin has always been read-heavy, as long as I've been alive. Write-only code is antithetical to running a trustworthy system. |
| |
| ▲ | nine_k 2 days ago | parent | prev | next [-] | | The Internet is quite fine at delivering packages over encrypted channels which I can trust. (Except where interdicted by governments, like in China, India, Russia, Türkiye, ...) The Web is a rather different beast, but the question is not "can you trust the Internet", but "can you trust a random website", and now even "can you trust a previously trustworthy website". You of course should not trust any pictures or videos as critical evidence; they should be corroborated by other means. But this has been true for several years now. | |
| ▲ | sifar 2 days ago | parent | prev | next [-] | | To clarify, I meant it from a layperson's perspective. I do realize one can argue whether the average person has developed this awareness by now. The difference this time, I feel, is that genAI tools are widely available for normal people to experiment with, which will hopefully help develop this visceral feeling. | |
| ▲ | UncleMeat 2 days ago | parent | prev [-] | | While there genuinely was fake content and astroturfed material on the web prior to LLMs, the cost to produce this stuff has fallen enormously. A major corporation or a state actor might pay a bunch of money for inorganic content but it was hard for some rando in Estonia to spin up a network of fake content to monetize on tiktok or whatever. This leads to way more fake content about a much wider range of topics. |
| |
| ▲ | tim333 a day ago | parent | prev | next [-] | | I can't see an existential threat in the sense of the internet no longer existing. It's busier than ever, although maybe with more junk. | |
| ▲ | dvfjsdhgfv 2 days ago | parent | prev | next [-] | | > 1) One can no longer trust things out on the web. I assume you mean software, because we haven't trusted other things on the web for decades already. As for software, everybody interested knew about the inherent insecurity of the modern software supply chain, but the proposed solutions were too expensive. We need an order of magnitude more money lost before organizations start switching from today's security theater to a model with security built in. | | |
| ▲ | sifar 2 days ago | parent [-] | | In general, and for software in particular too :). For the general case, see my response to ellg. Even though we were aware of the insecurity of the supply chain: 1) In practice we tend to ignore it except for mission-critical cases. We still do. 2) Autonomous vulnerability discovery/exploitation at scale was difficult and reserved for high-value targets. What you said will be accelerated by 2) now. |
| |
| ▲ | classified 12 hours ago | parent | prev | next [-] | | > I hope the defense mechanism kicks in time to bake security into our computing culture and pervades throughout the stack. Not in this world. It would create friction in the money printing machines. | |
| ▲ | zolland 2 days ago | parent | prev [-] | | I can't tell if this is satire or not |
|
|
| ▲ | mmarian 2 days ago | parent | prev | next [-] |
| > If you are young and wanting a promising trade in tech, security would absolutely be a good choice. Shit is going to get CRAZY. I personally would still recommend software engineering. Security in the vast majority of places is still checkbox- and cost-driven. Outrage happens around incidents, but rarely are people willing to invest meaningfully in their people. Security SaaS, on the other hand, is doing great, so anything driving revenue there is good. |
|
| ▲ | strombofulous 2 days ago | parent | prev | next [-] |
| > If you are young and wanting a promising trade in tech, security would absolutely be a good choice. If AI is capable of performing these attacks, what would stop AI from replacing the security engineers? |
| |
| ▲ | lelanthran 2 days ago | parent | next [-] | | > If AI is capable of performing these attacks, what would stop AI from replacing the security engineers? Because the threat model is one-sided - if an AI attack fails, the controller simply moves to the next target. If an AI defense fails, the victim is fucked. Therefore, there is still value in being the human in Cyber Security (however you are supposed to capitalise that!) There are still protections and mitigations that targets can do, but those things require humans. The things that attackers can do require no humans in the loop. | | |
| ▲ | AlecSchueler a day ago | parent | next [-] | | > Therefore, there is still value in being the human in Cyber Security Why? Your logic applies equally well to humans. If the AI attacker fails they move onto the next target, if the human defence fails the victim is fucked. > There are still protections and mitigations that targets can do, but those things require humans. Which things would you point to here? | | |
| ▲ | lelanthran a day ago | parent [-] | | > Why? Your logic applies equally well to humans. If the AI attacker fails they move onto the next target, if the human defence fails the victim is fucked. I didn't claim that the human defence is the only layer. Your analogy is only valid if my claim is that it's AI attackers vs Human defenders. It's not. It's AI attackers vs AI + Human defenders. > Which things would you point to here? If you cannot imagine any value that a human can add to an AI defence, then this conversation is effectively over; I am not in the mood to enumerate the value that a human can add to AI defence. | | |
| ▲ | AlecSchueler a day ago | parent [-] | | > If you cannot imagine any value that a human can add to an AI defence, then this conversation is effectively over I honestly find that a bizarre response in the middle of a discussion but you do you. Maybe someone else could humour me since you're not in the mood to expand on the point that you made? The topic of the thread was that the ability of the AI tooling is outpacing what individuals can handle. Why would a human then be in a position to defend better than an AI when an AI is in a better position to attack than a human? | | |
| ▲ | lelanthran 7 hours ago | parent [-] | | >> It's AI attackers vs AI + Human defenders. > Why would a human then be in a position to defend better than an AI when an AI is in a better position to attack than a human? I did not make the claim that humans are in a better position to defend. |
|
|
| |
| ▲ | integralid a day ago | parent | prev [-] | | > Because the threat model is one-sided - if an AI attack fails, the controller simply moves to the next target. If an AI defense fails, the victim is fucked. This was always the case? Security is asymmetric and the attacker only needs to succeed once. |
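[The asymmetry described above can be put in numbers: if each automated attempt succeeds with probability p, the chance of at least one success over n independent attempts is 1 - (1 - p)^n, which climbs toward certainty as cheap AI-driven attempts scale up. A toy model, purely illustrative:]

```python
def attacker_wins(p_single: float, attempts: int) -> float:
    """Probability of at least one success in n independent attempts."""
    return 1 - (1 - p_single) ** attempts

# A 1% per-target success rate becomes near-certain over 1,000 cheap tries.
print(round(attacker_wins(0.01, 1000), 5))  # 0.99996
```

[The independence assumption is generous to the defender; correlated weaknesses across targets make the attacker's job even easier.]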
| |
| ▲ | _aavaa_ 2 days ago | parent | prev | next [-] | | Red team has to be lucky once; blue team has to be perfect. How many places take red teaming seriously now? Compare how fast real attackers can iterate vs. the defenders. | |
| ▲ | UncleMeat 2 days ago | parent | next [-] | | This is less true than it seems. It is pretty rare to go from vuln to simple exploit for systems that people care about. There are plenty of vulns in chrome or whatever that were difficult to actually weaponize because you need just the right kind of gadgets to create a sandbox escape and the vuln only lets you write to ineffective memory addresses. | |
| ▲ | charcircuit 2 days ago | parent | prev [-] | | Stealing a bitcoin wallet by cracking its private key also only requires red team to be lucky once. Once AI security gets to the point where the probability of causing actual harm to the business is infinitesimal, it will be fine. | |
| ▲ | _aavaa_ 2 days ago | parent [-] | | Yes, and on an infinite time horizon we are all dead. It’s the time between then and now that we’re talking about. | | |
| ▲ | charcircuit 2 days ago | parent [-] | | Existing concepts like defense in depth make it exponentially harder for an AI to build a full exploit chain. Even with a full exploit chain, one mistake will trigger a detection system which can foil your attack. |
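["Exponentially harder" is literal under a simplifying assumption: if defensive layers fail independently with probability p each, a chain through k layers succeeds with probability p^k. A toy model - real layers are rarely fully independent, so treat this as an upper bound on the idea, not a product's math:]

```python
def chain_success(p_per_layer: float, layers: int) -> float:
    """Probability an exploit chain clears every independent defensive layer."""
    return p_per_layer ** layers

# Three layers, each beaten half the time: the full chain succeeds 12.5% of the time.
print(chain_success(0.5, 3))  # 0.125
```

[Each added layer multiplies the attacker's odds down, which is the whole argument for defense in depth.]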
|
|
| |
| ▲ | chucky_z 2 days ago | parent | prev | next [-] | | The more I use AI and my workplace buys into it, the more I’m doing person to person work in a security context. | | | |
| ▲ | weare138 2 days ago | parent | prev | next [-] | | They're not and they won't. I'm from genx and have a background in infosec. I don't agree that AI is the cause of this sudden surge in activity, or that this is even a sudden surge. This stuff was always occurring if you were paying attention; it's just making the mainstream news now. Geopolitics is the cause of the recent uptick in activity. Many of these groups are state-sponsored or just fronts for nation-states themselves. genAI just makes it easier for people further down the chain to go after low-hanging fruit. The most significant impact genAI is having on infosec is creating work for the people in infosec through vibe coding and turning untested AI systems loose on internal networks. genAI just lets developers and admins shoot themselves in the foot faster. genAI is an artificial intern. | |
| ▲ | dvfjsdhgfv 2 days ago | parent | prev [-] | | LLM-based software is just another layer to be hacked. |
|
|
| ▲ | operatingthetan 2 days ago | parent | prev | next [-] |
| This just seems like the result will be people being driven off the internet. It simply will not be safe for the layperson. |
| |
| ▲ | yoyohello13 2 days ago | parent | next [-] | | Literally the Blackwall from Cyberpunk 2077. | |
| ▲ | tokioyoyo 2 days ago | parent | prev | next [-] | | Most people's internet is Instagram + Games from AppStore + TikTok + Netflix + Banking Apps. Everything is within specific walls and guardrails. | |
| ▲ | pesus 2 days ago | parent | prev | next [-] | | Sounds like an ultimately good thing to me. It was an interesting experiment, but the negatives largely outweigh the positives at this point. (I do realize the irony of writing this on HN, but I digress) | |
| ▲ | babycheetahbite 2 days ago | parent | prev | next [-] | | Just in general, the outcome of where technology is going may spur many to reduce their usage in favor of "the real world"; I agree it might be a good thing. | |
| ▲ | PradeetPatel 2 days ago | parent | prev | next [-] | | It might not be a bad thing if we have an Internet for humans, and a segmented Internet for AI. | | | |
| ▲ | idiotsecant 2 days ago | parent | prev [-] | | No man lands between walled gardens |
|
|
| ▲ | alephnerd 2 days ago | parent | prev | next [-] |
| > If you are young and wanting a promising trade in tech, security would absolutely be a good choice. Shit is going to get CRAZY. Yes, but you can't be a CISSP or SOC monkey - that has no future. You need to be an actual Software Engineer who understands development fundamentals, OS internals, web dev fundamentals, algorithms, etc, as well as offensive and defensive concepts. Too many "cybersecurity" graduates in North America aren't even qualified to do L1 IT Helpdesk, which is a shame because the IT-to-security talent pipeline is critical (along with the SRE, SWE, and ML to security pipelines). |
| |
| ▲ | sdevonoes 2 days ago | parent | next [-] | | As an “actual” software engineer, what do you recommend me to read to work in cybersecurity? Assume I have a solid background in OS internals, algos, networking, software engineering. I have never worked in cybersecurity though (I have never reversed engineered anything) | | |
| ▲ | alephnerd 2 days ago | parent [-] | | What do you specialize in as a SWE? Can you identify architectural or implementation bugs and think about how an attacker could exploit them to move laterally across your environment? Cybersecurity is basically a holistic architectural review of software that takes business, engineering, and operational context into account to make a qualified judgment about risk. | |
| ▲ | greenie_beans 2 days ago | parent [-] | | i'm one of these developers who found myself doing a lot of security-oriented devops work. how do i get away from compliance? i hate checking boxes, feels like it creates some pointless work sometimes. compliance alone makes me never want to do cybersecurity but i enjoy the architecture stuff and thinking about threats | | |
| ▲ | alephnerd 2 days ago | parent [-] | | > i hate checking boxes, feels like it creates some pointless work sometimes Everyone does. It doesn't actually help reduce tangible risk, but it helps you understand the operational and liability aspects of cybersecurity, which are critical as well. > compliance alone makes me never want to do cybersecurity Compliance =/= Cybersecurity. If you work in an organization where security actually means compliance, then leave. --- Honestly, it's region- and industry-dependent. If you are east coast, transition into a JPMC or GS tier bank (yes, banks are bleeding-edge security personas). If you are west coast, it shouldn't be difficult for a SRE/DevOps/Cloud type to become a SWE or Solutions Engineer at a cybersecurity company. If you are in Europe, get an H1B and leave. I literally helped sponsor 2 O-1s today from European cybersecurity founders who wanted to leave because of subpar terms and bureaucracy. |
|
|
| |
| ▲ | iainctduncan 2 days ago | parent | prev [-] | | Definitely agree. I guess I should have specified I meant "real programmer who wants a career". ;-) |
|
|
| ▲ | meander_water 2 days ago | parent | prev | next [-] |
| The crazy part is that none of this is unexpected. This was exactly the reason why GPT-2 was restricted for general release in 2019. Check out section 4 - https://cdn.openai.com/GPT_2_August_Report.pdf |
|
| ▲ | RajT88 2 days ago | parent | prev | next [-] |
| Oh, we're back to not being able to trust Google Ads again? I recall there being Malvertising campaign problems ~12-15 years ago or so, and then they seemed to get on top of it. |
| |
|
| ▲ | zakki 2 days ago | parent | prev | next [-] |
| Do you have some pointers to start advancing in security world? |
|
| ▲ | idiotsecant 2 days ago | parent | prev | next [-] |
| How can open source software possibly survive this? |
| |
| ▲ | baq 2 days ago | parent | next [-] | | There’s no closed source software anymore, clankers are mighty good at decompiling. | |
| ▲ | Tepix 2 days ago | parent | prev [-] | | Open source has advantages over closed source: you can demonstrate your SSDLC, whereas with closed source you have to believe the vendor. |
|
|
| ▲ | cyanydeez 2 days ago | parent | prev [-] |
| On the upside, the current Administration is making most of that legit grift, so investing in homegrown fraud should be on every PE's 2026 wishlist. |