| ▲ | giancarlostoro 2 days ago |
| Just commented this elsewhere, but my take on cybersecurity today: it's about to blow up in demand, with so many script kiddies now able to hack anybody with an LLM. We are seeing websites, systems, and companies compromised at an alarming rate. I suspect one of these days we will see a headline of a compromise that shocks and horrifies us all. Anyone sleeping on cybersecurity is a ticking time bomb. Honestly, if you wanted to make a YC company today that targets AI in a meaningful way, I'd say make it focused on cybersecurity analysis. ;) |
|
| ▲ | thewebguyd 2 days ago | parent | next [-] |
| > I suspect one of these days we will see a headline of a compromise that will shock and horrify us all
|
| But we've had the shock headlines already, and nothing changes. We've seen hospitals get hit, with real-life consequences for patients, and the SSNs of essentially all US citizens have been breached multiple times now. Passwords as a concept are basically obsolete. There's even more. That bomb has already been going off. If anything, I'm seeing the opposite: companies are throwing security to the wind to go all in on AgEnTiC AI. If we want change with regard to cybersecurity, there need to start being real consequences for a breach, not just free credit monitoring. Companies that are proven to be negligent should face actual financial and criminal consequences. |
|
| ▲ | debarshri 2 days ago | parent | prev | next [-] |
I am building in the cybersec space. I don't think you even need script kiddies now. Internal employees run dangerously bad ops with AI; that itself is a cybersec nightmare. |
|
| ▲ | evan_a_a 2 days ago | parent | prev | next [-] |
| Whenever I tell people I work in computer security, their first question is "are you worried about AI taking your job"? To which I just laugh and respond "AI is job security" |
▲ | giancarlostoro 2 days ago | parent [-] | It really is! If anything, AI will only help you: you aren't worried about AI giving you bad code, just bad answers, which you would validate anyway. I think the other area where AI could be interesting, and I don't hear much buzz about it, is outages. If it can query all the online systems and logs in your cloud, it could probably triage an incident faster than an entire outage team could, in theory anyway. Surprised nobody's built such a system yet. ;) |
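(The outage-triage idea above could start as something much simpler than an LLM: rank where to look first by error volume. A toy sketch; the log format and service names here are invented for illustration, and a real agent would feed the ranked suspects to a model or a human.)

```python
# Hypothetical sketch of automated outage triage: scan aggregated logs,
# rank services by error volume so responders know where to look first.
# Log format ("service LEVEL message") and service names are made up.
from collections import Counter

def triage(log_lines):
    """Count ERROR entries per service; return suspect services, worst first."""
    errors = Counter()
    for line in log_lines:
        parts = line.split(maxsplit=2)
        if len(parts) >= 2 and parts[1] == "ERROR":
            errors[parts[0]] += 1
    return [svc for svc, _ in errors.most_common()]

logs = [
    "billing ERROR upstream timeout",
    "billing ERROR upstream timeout",
    "auth INFO login ok",
    "billing ERROR db pool exhausted",
    "auth ERROR token expired",
]
print(triage(logs))  # billing first: it has the most ERROR lines
```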
▲ | evan_a_a 2 days ago | parent [-] | I mean it in the sense that AI security hype and the larger geopolitical environment have woken a lot of people up to the reality that they need to consider security. And the ones that haven't woken up yet will get a wake-up call when they are breached. It also increases the demand for real security expertise, which is already scarce. Also, in my niche (hardware and embedded product security), AI doesn't have a functional impact on the work except in code analysis, and even that is difficult given the level of abstraction these systems are built at. |
▲ | giancarlostoro 2 days ago | parent [-] | That's fair, though even that could just be a matter of time as people build tools that interface LLMs with the physical world. I wonder how something like the Bus Pirate could be used with an LLM (maybe a more powerful version of it?) to grok and poke at hardware all over the place. |
▲ | evan_a_a 2 days ago | parent [-] | I foresee issues with really getting use out of any commodity language model in the hardware security context, because hardware systems notoriously lack standardization. Oftentimes the technical knowledge (datasheets, app notes) is locked behind vendor NDAs, or straight up not documented, existing only in the minds of engineers. The implementations of those designs are similarly highly proprietary, with few "real" public systems to train models on. So the issue is two-fold: * The knowledge must be documented and accessible for training. * A bespoke model must be trained on this documentation. It is unlikely that both of these things happen in the general-model context. Perhaps individual chip vendors will eventually pursue this, but I suspect it is just not a priority for them. |
▲ | giancarlostoro 2 days ago | parent [-] | Maybe not for unethical hacking purposes, but I'm wondering about it for reverse engineering: think something like a game console. |
|
|
|
|
|
|
| ▲ | xnx 2 days ago | parent | prev [-] |
| Do you think that AI helps security offense more than defense? It's not obvious to me that it does. |