| ▲ | Buttons840 5 days ago |
| I say this often, and it's quite an unpopular idea, and I'm not sure why. Security researchers, white-hat hackers, and even grey-hat hackers should have strong legal protections so long as they report any security vulnerabilities that they find. The bad guys are allowed to constantly scan and probe for security vulnerabilities, and there is no system to stop them, but if some good guys try to do the same they are charged with serious felony crimes.

Experience has shown we cannot build secure systems. It may be an embarrassing fact, but many, if not all, of our largest companies and organizations are probably completely incapable of building secure systems. I think we try to avoid this fact by not allowing red-team security researchers to be on the lookout.

It's funny how everything has worked out for the benefit of companies and powerful organizations. They say: "No, you can't test the security of our systems. We are responsible for our own security, and you cannot test our security without our permission. And also, if we ever leak data, we aren't responsible." So, in the end, these powerful organizations are both responsible for their own system security, and yet they also are not responsible, depending on whichever is more convenient at the time. Again, it's funny how it works out that way.

Are companies responsible for their own security, or is this all a big team effort that we're all involved in? Pick a lane. It does feel like we're all involved when half the nation's personal data is leaked every other week. And this is literally a matter of national security. Is the nation's power grid secure? Maybe? I don't know. Do independent organizations verify this? Can I verify it myself by trying to hack the power grid (in a responsible, white-hat way)? No, of course not; I would be committing a felony just by trying.

Enabling powerful organizations to hide the security flaws in their systems is the default: they just have to do nothing, and then nobody is allowed to research the security of their systems, and nobody is allowed to blow the whistle. We are literally sacrificing national security for the convenience of companies, and so they can avoid embarrassment. |
|
| ▲ | pojzon 5 days ago | parent | next [-] |
| Did you see Google or Facebook or Microsoft customer databases breached? The issue is that there are too few repercussions for companies making software in shitty ways. Each data breach should hurt the company in proportion to its size: the Equifax breach should have collapsed the company, and fines should be in the tens of billions of dollars. Under such a banhammer, software would be built correctly, security would be cared about, internal audits would be made (real ones), and people would care. As things currently stand, there is ZERO reason to care about security. |
| |
| ▲ | lr1970 4 days ago | parent | next [-] | | > The issue is that there are too few repercussions for companies making software in shitty ways. The penalty should be massive enough to effect changes in the business model itself. If you do not store raw data, it cannot be exfiltrated. | |
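A minimal sketch of the "don't store raw data" idea, for the case where the service only ever needs to match a value (say, an SSN) rather than read it back: store a keyed hash instead of the raw string. The key name and helper below are hypothetical, and in practice the key would live in a KMS or HSM, never next to the database it protects.

```python
import hmac
import hashlib

# Hypothetical server-side secret; kept outside the database it protects.
SERVER_KEY = b"example-key-material"

def tokenize(raw: str) -> str:
    # Deterministic keyed hash: the raw value itself is never persisted.
    return hmac.new(SERVER_KEY, raw.encode(), hashlib.sha256).hexdigest()

# The database stores only the token; a dump leaks tokens, not SSNs,
# and the tokens are useless without the key.
stored = tokenize("123-45-6789")

# Later requests are matched by re-tokenizing the incoming value.
assert hmac.compare_digest(stored, tokenize("123-45-6789"))
```

The trade-off is that the value can only be matched, never displayed or exported, which is exactly the property that makes an exfiltrated table worthless to anyone who doesn't also hold the key.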
| ▲ | slivanes 5 days ago | parent | prev | next [-] | | I'm all for companies not ignoring their responsibility for data management, but I'm concerned that type of punishment could be used as a weapon against competitors. I can imagine that certain classes of useful companies would simply not be able to exist. It's a tricky balance: making companies actually care without crippling insurance. | |
| ▲ | arvinsim 4 days ago | parent | prev | next [-] | | I agree. When it becomes penalized by law, project owners/managers won't be tempted to take shortcuts and will have an incentive to give developers more time to focus on security. | |
| ▲ | Xx_crazy420_xX 4 days ago | parent | prev | next [-] | | There is some incentive to leave 0days in customer software, as it creates a commodity to be sold on gray 0day markets. On the other hand, securing your own garden brings less value than covering up and denying that your 'secure' cloud platform was whacked. | |
| ▲ | conception 5 days ago | parent | prev | next [-] | | Microsoft lost their root keys to Azure. ¯\_(ツ)_/¯ | | | |
| ▲ | reactordev 5 days ago | parent | prev | next [-] | | We need both: laws that allow people to do cybersecurity research, and engineers who don't write shitty software, set lax IAM permissions, expose private keys, or mess up in the myriad other ways they do. | |
| ▲ | bobmcnamara 5 days ago | parent | prev | next [-] | | > Did you see Google or Facebook or Microsoft customer databases breached? Are you being facetious? Yes, yes, yes, they have. | | | |
| ▲ | Den_VR 5 days ago | parent | prev | next [-] | | I’m curious. What do you think about legalizing “hack-back” ? | | |
| ▲ | red-iron-pine 4 days ago | parent | next [-] | | not a solution | |
| ▲ | clown_strike 4 days ago | parent | prev [-] | | Given how many attacks are false flags conducted through proxies this would be disastrous. However, open intermediary victims up to contributory lawsuits and everyone will have to take security more seriously. Think twice before you connect that new piece of shit IoT device. |
| |
| ▲ | GlacierFox 5 days ago | parent | prev | next [-] | | Didn't SharePoint get hacked the other day? :S | | |
| ▲ | jaynate 5 days ago | parent [-] | | Yes, but those were on-prem deployments of SharePoint, not Microsoft's infrastructure. | | |
| ▲ | Spooky23 5 days ago | parent | next [-] | | Many of those deployments were there because Microsoft can’t deliver the required assurance level! | |
| ▲ | samplatt 5 days ago | parent | prev | next [-] | | It was for ALL on-prem deployments. This wasn't due to the user being insecure, this was Microsoft's fault. If anything it's yet another point AGAINST them - if they can't guarantee secure software without the caveat of running on a closed hardware black box then it's not secure software. | |
| ▲ | sugarpimpdorsey 5 days ago | parent | prev [-] | | Is the non-defective software only available in the SaaS version? |
|
| |
| ▲ | tempnew 5 days ago | parent | prev [-] | | Microsoft just compromised the National Nuclear Security Administration last week. Facebook was breached, what, last month? Google is an ad company. They can't sell data that's breached. They basically do email, and with phishing at epidemic levels, they've failed the consumer even at that simple task. All are too big to fail, so there is only Congress to blame. While people like Ro Khanna focus their congressional resources on the Epstein intrigue, citizens are having their savings stolen by Indian scammers, and there is clearly no interest and nothing on the horizon to change that. | |
| ▲ | gruez 5 days ago | parent | next [-] | | > Facebook was breached, what, last month? Source? A quick search suggests the "breach" is a bunch of credentials that got harvested/phished and leaked, not that Facebook itself got breached. > Google is an ad company. They can't sell data that's breached. They basically do email, and with phishing at epidemic levels, they've failed the consumer even at that simple task. In other words, they haven't been breached, but you still think they're bad people. | |
| ▲ | tempnew 5 days ago | parent [-] | | To me, Facebook's entire business model seems like spyware and selling personal info to third parties. Whether people at such companies are good or bad is not at issue. I assume most people everywhere are good people. But are the companies themselves "good"? Microsoft and Google maybe, certainly in the past (Google Wave was very innovative). But Facebook? The context was privacy and people being victimized by Indian scammers. We know those scammers use Facebook to gather info and target victims, all without any actual breach taking place. To me, not having a breach does not make Facebook "good". | |
| ▲ | gruez 4 days ago | parent [-] | | > To me, Facebook's entire business model seems like spyware and selling personal info to third parties. "Seems like" is doing a lot of the heavy lifting here. I'm not aware of instances where Facebook was "selling personal info to third parties". It does use personal info to sell ads to third parties, but characterizing that as "selling personal info" is a stretch. > We know those scammers use Facebook to gather info and target victims, all without any actual breach taking place. This just sounds like "scammers are viewing public Facebook profiles and using Facebook Messenger to communicate with victims"; in that case I'm not sure how Facebook deserves flak here. |
|
| |
| ▲ | reactordev 5 days ago | parent | prev [-] | | Agree. Google is buying the data for ads and ad brokerages. Don’t kid yourself. They may use a 3rd party to distance themselves but they definitely buy the data. |
|
|
|
| ▲ | atmosx 5 days ago | parent | prev | next [-] |
| If companies faced real consequences, like substantial fines from a regulatory body with the authority to assess damage and impose long-term penalties, their stock would take a hit. That alone would compel them to take security seriously. Unfortunately, most still don't. More often than not, they walk away with a slap on the wrist. If that. |
| |
| ▲ | no_wizard 5 days ago | parent [-] | | I've still got time left on the identity theft protection I've been given for free due to breaches | | |
| ▲ | rswail 4 days ago | parent [-] | | That's not a fine, that's a tax deductible (and probably insurable) expense for the company. It's literally part of their COGS. |
|
|
|
| ▲ | kube-system 5 days ago | parent | prev | next [-] |
| Not all security research is the same. There's a lot of room for nuance in this discussion. I think there are a lot of things that many people would agree should be protected. For instance, people who report vulnerabilities they just happen to stumble upon. But on the other end of the spectrum, there are a lot of pen-testing activities that are pretty likely to be disruptive. And some of them would be disruptive, even on otherwise secure systems, if we gave the entire world carte blanche to perform these activities. There are certainly some realms of security where technology alone can solve the problem, like cryptographic algorithms. But at the interface of technology and society, security still relies heavily on the rule of law and on living in a high-trust society. |
|
| ▲ | pengaru 5 days ago | parent | prev | next [-] |
| > I say this often, and it's quite an unpopular idea, and I'm not sure why.
>
> Security researchers, white-hat hackers, and even grey-hat hackers should have
> strong legal protections so long as they report any security vulnerabilities
> that they find.
>
> The bad guys are allowed to constantly scan and probe for security
> vulnerabilities, and there is no system to stop them, but if some good guys
> try to do the same they are charged with serious felony crimes.
So let me get this straight, you want to give unsuccessful bad actors an escape
hatch by claiming white-hat intentions when they get caught probing systems? |
| |
| ▲ | doubled112 5 days ago | parent | next [-] | | What about a white hat hacker license? Not sure what the criteria would be, but could it be done? Then there would be some sort of evidence the guy was a "good guy". Like when a cop shoots your dog and suffers no consequences. | | |
| ▲ | red-iron-pine 4 days ago | parent [-] | | There are things like the OSCP and CISSP, which require a background check and a sponsor; there is a reason they are popular for security roles. That's not the same as a white-hat license, but it shows that you registered, that you made it clear where you're at, and that you've had some minimum ethical and professional training. |
| |
| ▲ | worthless-trash 5 days ago | parent | prev | next [-] | | This is a horrifically bad take. I know you probably see it this way because you can't imagine how easy some of these mistakes are, but I can assure you that there are MANY TIMES I've accidentally found issues with systems. I do work in security. The average person would write these off as "oh, just shitty software" and do nothing about them, but when one knows what the error means and how the software works, errors are easy to turn into exploitable systems.

I once had a bank account that fucked up data validation because I had '; in a transfer description of 120 characters. Immediately abusable SQL injection. After my first time reporting this OBVIOUS flaw to a bank, along with how it could be abused for both database modification and XSS injection, I had to visit local law enforcement with lawyers because they believed 'hacking' had taken place. I now report every vuln behind fake emails, on fake systems in non-extradition countries, accessed via proxy over VPN. Even then I have the legal system attempting to find my real name and location and threaten me with legal action.

Bad actors come from non-extradition countries which wouldn't even TALK to you about the problem; you'd just have to accept you got hacked and that would be the end of the situation. It's people like yourself, who can't see past the end of their nose, failing to realise where the real threats are. You don't have "it straight". | |
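For readers who haven't seen the failure mode described above, here is a minimal sketch of why a stray '; in a free-text field matters, using Python's sqlite3 as a stand-in for whatever the bank actually ran; the schema and values are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transfers (id INTEGER PRIMARY KEY, description TEXT)")

# Attacker-controlled transfer description.
description = "rent'; DROP TABLE transfers; --"

# Broken pattern: string interpolation. The quote in the field closes the
# SQL string literal, so the rest of the field is parsed as SQL:
#   conn.executescript(f"INSERT INTO transfers (description) VALUES ('{description}')")

# Fix: bound parameters, so the driver treats the field purely as data.
conn.execute("INSERT INTO transfers (description) VALUES (?)", (description,))
conn.commit()
```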
| ▲ | wahern 4 days ago | parent | next [-] | | > This is a horrifically bad take I took it as a take on the face of the proposal: "hackers should have strong legal protections so long as they report any security vulnerabilities that they find." As stated, it's ripe for abuse. Perhaps they could have been more charitable and assumed some additional implicit qualifiers. But defining those qualifiers is precisely the difficult part, perhaps intractably difficult. In the US private investigators often require a license to work, but AFAIU that license doesn't actually exempt them from any substantive laws. Rather, it's more a mechanism to make it easier for authorities and citizens to excuse (outside the legal process) otherwise suspicious behavior. Rather than give special protections to a certain class of people, why not define the crimes to not encompass normal investigative behaviors typical in the industry. In particular, return to stronger mens rea elements rather than creeping in the direction of strict liability. Adding technical carveouts could end up making for a harsher system; for example, failing to report in an acceptable manner (when, what, where, how?) might end up sealing the fate of an otherwise innocent tech-adept person poking around. | | |
| ▲ | worthless-trash 4 days ago | parent [-] | | > Rather than give special protections to a certain class of people, why not define the crimes to not encompass normal investigative behaviors typical in the industry. This would be an acceptable alternative, and may even be workable. > failing to report in an acceptable manner (when, what, where, how?) might end
> up sealing the fate of an otherwise innocent tech-adept person poking around. You've hit exactly the problem, I feel like you too might be working in this area. Not many people come to this kind of logical conclusion. |
| |
| ▲ | pengaru 4 days ago | parent | prev [-] | | Companies will care about securing their systems and paying for these services if it costs them Real Money when they neglect to do so. Until then, they'll continue to not care. The solution is not a legal framework presuming good Samaritans will secure the networks and systems of the world. | |
| |
| ▲ | Buttons840 5 days ago | parent | prev [-] | | If we did give bad actors an escape hatch, what harm would it do in a world already filled with untouchable bad actors? |
|
|
| ▲ | gettingoverit 5 days ago | parent | prev | next [-] |
| Probably this wouldn't be a problem if the Web were somewhat anonymous, so that merely stumbling upon a security issue, or using a website in a regular way, would not constitute a crime for lack of a person to pin that crime on. Also, if the things stored in those databases weren't plain strings but tokens (in the asymmetric-cryptography sense), so that only the service owns them and, in case of a leak, the user can use them to get a payout from the service, this problem would be solved. But no business is interested in provably making their users secure; it would be self-sabotage. It's always just security theater. |
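One possible reading of the "tokens" idea, sketched with the third-party pyca/cryptography package: the user holds a private key, the service stores only the public key, and login is a signed challenge, so a leaked database contains nothing an attacker can replay. All names and values here are illustrative.

```python
import os
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# User side: the private key never leaves the user's device.
user_key = ed25519.Ed25519PrivateKey.generate()

# Service side: the database stores only the public key.
stored_public_key = user_key.public_key()

# Login: the service issues a random challenge, the user signs it.
challenge = os.urandom(32)
signature = user_key.sign(challenge)

# The service verifies the signature. A leaked table of public keys lets
# an attacker verify signatures, but never forge them.
try:
    stored_public_key.verify(signature, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")
```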
|
| ▲ | jama211 4 days ago | parent | prev | next [-] |
| It’s an interesting point, but doesn’t that open up an easy defense so black hat hackers can hack anything they want in advance and as long as they say they were just “looking for an opening” they’d be legally safe under this scenario? They could plausibly claim they just never found a vulnerability to report, but they could note down anything they notice and then attack who or when they feel like it - or pretend they’re white hat their while career but secretly sell the methods to someone who will. Under the current system, they’re discouraged from doing that. |
|
| ▲ | saurik 5 days ago | parent | prev | next [-] |
| > ...these powerful organizations are both responsible for _____, and yet they also are not responsible, depending on whichever is more convenient at the time... This pattern comes up constantly, and it is extremely demoralizing. |
|
| ▲ | thatguy0900 5 days ago | parent | prev | next [-] |
| I mean, the problem is people will break things. How do you responsibly hack your local electric grid? What if you accidentally mess with something you don't understand, and knock a neighborhood out? How do we prove you just responsibly hacked into a system full of private information then didn't actually look at a bunch of it? |
| |
| ▲ | Buttons840 5 days ago | parent | next [-] | | If a security researcher knocks out the power grid of a town, we should consider ourselves lucky that the vulnerability was found before an opposing nation used it to knock out the power of many towns. | | |
| ▲ | kube-system 5 days ago | parent [-] | | The ideal scenario is that a responsible security engineer finds the problem and doesn't cause a power outage. Power outages aren't necessarily just an inconvenience; they can cause serious economic damage and kill people. I think there's a better solution somewhere in between doing nothing and letting bumbling idiots recklessly fool with things they shouldn't be messing with. | |
| ▲ | Buttons840 5 days ago | parent [-] | | Sure, we should avoid people purposely doing harmful things, but they should be given the benefit of the doubt unless it can be proven they were intentionally doing harm beyond just testing the security. One thing that is not a good option is the status quo we're discussing here, in which a "bumbling idiot" can take down a city power grid. If that's how things are, then we shouldn't cower and hope we remain safe from every idiot out there; we need to shake things up and find the problems now. Hopefully without actually taking out any power grid. | |
| ▲ | kube-system 4 days ago | parent [-] | | People accidentally doing harm can cause significant problems too -- that's why many professions require licensing and we don't let random people practice medicine, even if they have good intentions. The problem here is that most security testing is not just the hollywood narrative of "some people running nmap and finding critical vulnerabilities that take down the power grid". Plenty of the real-world security vulnerabilities in large-scale systems that do exist are at the interface between technology and humans, and those are the vulnerabilities that computer science often can't reasonably fix: social engineering, trust systems, physical-layer exploits, etc. In securing any large system, there are going to be many low-impact issues that do exist but aren't necessarily important (or even desirable) to fix because the impact to fix them is too high, and the likelihood of exploit is low because it is impractical as an attack vector. But legalizing the exploit of these edge cases would guarantee you'd see issues, because you're creating a financial opportunity where there was previously not one. For example: we don't need to incentivize a wave of thousands of script kiddies fiddling with their power meters, trying to social engineer support staff, running DoS scripts against the public website, etc. Those things aren't helpful in improving critical infrastructure, they're just going to cause a nuisance and make things difficult for people. | | |
| ▲ | Buttons840 4 days ago | parent [-] | | DDoS is not valid security research, it's just destruction. Also, we need to clarify the scenario because you said: > the likelihood of exploit is low but you also mention the need to stop people "accidentally" exploiting the system, so which is it? A system that can be accidentally broken by bumbling idiots does not deserve protection IMO. | | |
| ▲ | 4 days ago | parent | next [-] | | [deleted] | |
| ▲ | kube-system 4 days ago | parent | prev [-] | | > DDoS is not valid security research, it's just destruction. I didn't say anything about DDoS in my comment. DoS is a term referring to a loss of availability. Availability is one of the three fundamental parts of the CIA triad, so yes, it is absolutely something security researchers evaluate.

> Also, we need to clarify the scenario because you said: > the likelihood of exploit is low > but you also mention the need to stop people "accidentally" exploiting the system, so which is it?

I said "accidentally doing harm". For a real-world exploit to happen, a few different things have to align. First, you need a vulnerability. Second, you need some way that somebody could exploit that vulnerability. Third, you need a reason somebody's going to do it. A vulnerability simply existing isn't enough to make it a problem.

Now, in an academic lab environment, most people don't really care about the likelihood of exploit or the motivations of an attacker, because the point is academic computer science. But the people who secure systems in the real world have to care about the likelihood of exploitation and the motivations of their attackers, because it's not possible to secure everything in a production environment, where you also have to ensure the availability and usability of the system for your stakeholders. You always have to make a compromise between the two. So, in the real world, the locality of the attacker, the legal environment, and the impact of the exploit all play very significant roles in how someone might weigh the significance of an exploit.

To make up a contrived example: let's say that all I have to do to cancel electricity service is create an online account using the information from a power bill and press the cancel button. There's an obvious exploit here. I could dig through my neighbor's trash, get a copy of their bill, create an account, and shut off their power. Do we wanna legalize this activity? No, I don't think so. Are we at risk of a nation state exploiting this? No, probably not, because they don't have access to everyone's trash everywhere; you couldn't really do this at scale, and the abuse would obviously not be intended use. Should we require more authentication just to say we've plugged the hole? Also probably not. Electricity service has to be accessible to people; we can't require onerous authentication when many of the customers may be elderly, disabled, etc. Instead, we as a society solve this problem by making this activity a crime. And this works just fine, because anyone who has physical access is already in that legal jurisdiction as well.

I'm sure you can imagine dozens of other similar scenarios. The point is that information security is a lot more complicated than just adding authentication to a webpage. Information security isn't a technology problem; it's a people-using-technology problem. I don't think we want to legalize activity similar to my scenario above. That's the kind of situation where people may accidentally cause harm they wouldn't be causing now, because now they would go to jail. But if you legalize it, people are going to do it in an attempt to monetize it. |
|
|
|
|
| |
| ▲ | sunrunner 5 days ago | parent | prev | next [-] | | > How do we prove you just responsibly hacked into a system full of private information then didn't actually look at a bunch of it? Pinky promise? | |
| ▲ | sublinear 5 days ago | parent | prev [-] | | If we're strictly talking about software there should be some way to test in a staging environment. Production software that cannot be run this way should be made illegal. |
|
|
| ▲ | AbstractH24 4 days ago | parent | prev | next [-] |
| At what point do we need to treat this data like one's health data? The risks associated with medical malpractice certainly slow the pace of innovation in healthcare, but maybe that's ok. |
|
| ▲ | msgodel 5 days ago | parent | prev | next [-] |
| The internet is really a lot like the ocean: things left unmaintained on it are swallowed by waves and sea life. We need something like salvage law. |
|
| ▲ | Ylpertnodi 5 days ago | parent | prev | next [-] |
| > I say this often, and it's quite an unpopular idea, and I'm not sure why.
> Etc...etc...etc.... Me, neither, if that helps. |
|
| ▲ | bongodongobob 5 days ago | parent | prev | next [-] |
| No. You cannot come to my home or business while I'm away and try to break in to protect me unless I ask, full stop. Same goes for my servers and network. It's my responsibility, not anyone else's. We already have laws in place for burglars and hackers. Just because they continue to do it doesn't give anyone else the right to do it for the children or whatever reasoning you come up with. |
| |
| ▲ | krior 5 days ago | parent | next [-] | | But you would like to be notified by your neighbours if you have left your window open while away, right? Or are you going to sue them for attempted break-in? The issue is not that it's illegal to put on a white hat, break into the user database, and steal 125 million accounts as proof of a security issue. The problem is people getting sued for saying "Hey, I stumbled upon the fact that you can log into any account by appending the account number to the URL of your website." There certainly is a line separating ethical hacking (if you can even call it hacking in some cases) from prodding and probing at random targets in the name of mischief and chaos. | |
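The URL flaw described here is classic broken access control (an insecure direct object reference, or IDOR): the server trusts an identifier in the URL instead of checking who is asking. A minimal sketch of the fix, using a hypothetical Flask route and in-memory account store:

```python
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "example-only"  # hypothetical; required for sessions

# Hypothetical account store.
accounts = {"1001": {"owner": "alice"}, "1002": {"owner": "bob"}}

@app.route("/account/<account_id>")
def show_account(account_id):
    account = accounts.get(account_id)
    if account is None:
        abort(404)
    # The fix: verify the logged-in user owns the account,
    # instead of serving whatever ID appears in the URL.
    if session.get("user") != account["owner"]:
        abort(403)
    return account
```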
| ▲ | wahern 4 days ago | parent [-] | | Analogy with the physical world falls apart here. Few people would want to enshrine an exemption from trespassing someone walking house-to-house jiggling door handles and pushing on windows to see what's unlocked. If anything you may want to make it an explicit crime to do it systematically, as opposed to "targeting" a neighbor's house. In fact, I think this constitutes prowling, which is a crime in many places. But for white-hat hacking you want prowling. And it's very difficult to create technical definitions that productively distinguish "good" prowlers from "bad" prowlers. So why even try to draw a distinction between types of prowlers? Maybe prowling information systems online shouldn't be a crime at all, given the nature of information systems. |
| |
| ▲ | tjwebbnorfolk 5 days ago | parent | prev | next [-] | | Adding "full stop" doesn't strengthen your case, it just makes it sound like you are boiling the world down to be simple enough for your case to make any sense. There are a lot of shades of grey that you are ignoring. | |
| ▲ | Buttons840 5 days ago | parent | prev | next [-] | | You claim sole responsibility. Do you accept sole legal and financial liability? I think allowing red-teams to run wild is a better solution, but I can agree with other solutions too. If those who claim sole responsibility want to be responsible, I'm okay with that too. I really just want us to pick a lane. So again, are you willing to accept sole legal and financial liability? | |
| ▲ | cmiles74 5 days ago | parent | prev | next [-] | | It seems like passing legislation that imposes harsher penalties for data breaches is the way to go. | |
| ▲ | fancyswimtime 5 days ago | parent | prev [-] | | username checks out |
|
|
| ▲ | valianteffort 5 days ago | parent | prev | next [-] |
| > Experience has shown we cannot build secure systems It's an unpopular idea because it's bullshit. Building secure systems is trivial and within the skill level of a junior engineer. Most of these "hacks" are not elaborate attacks utilizing esoteric knowledge to discover new vectors. They are the same exploit chains targeting bad programming practices, out-of-date libraries, etc. Lousy code monkeys and mediocre programmers are the ones introducing vulnerabilities. We all know who they are. We all have to deal with them thanks to some brilliant middle manager figuring out how to cut costs for the org. |
| |
| ▲ | 9dev 5 days ago | parent | next [-] | | That sounds like a perspective from deep in the trenches. A software system has SO many parts, spanning your code, other people’s code, open source software, hardware appliances, SaaS tools, office software, email servers, and also humans reachable via social engineering. If someone makes a project manager click a link leading to a fake Jira login, and the attacker uses the credentials to issue a Jira access token, and uses that to impersonate the manager to create an innocuous ticket, and a low-tier developer introduces a subtle change in functionality that opens up a hole… then you have an insecure system. This story spans a lot of different concerns, only few of which are related to coding skills. Building secure software means defending in breadth, always, not fucking up once, against an armada of bots and creative hackers that only need to get lucky once. | |
| ▲ | darzu 5 days ago | parent | prev | next [-] | | Take a broader view of what "building secure systems" means. It's not just about the code being written by ICs but about the business incentives, tech choices of leadership, the individual ways execs are rewarded, legacy realities, interactions with other companies, and a million other things. Our institutions are a complex result of all of these forces. Taken as a whole, and looking at the empirical evidence of companies and agencies frequently leaking data, the conclusion "we cannot build secure systems" is well founded. | | |
| ▲ | wonderwonder 5 days ago | parent [-] | | This is accurate. Especially in shops that implement firm shipping dates for Product Increments. You have X weeks to build Y features consisting of Z tickets.
At the end of those X weeks you better have all your tickets done. So more often than not, the tickets are done and the features are implemented. Shops like this build incredible ticket closing machines. They are implemented to pass user acceptance testing not to hold back hackers or bad actors. When leadership incentivizes delivering features and a developers job or raise depends on delivering those features, you get what you incentivize. |
| |
| ▲ | KaiserPro 5 days ago | parent | prev | next [-] | | > Building secure systems is trivial I'd suggest you try and build a secure system for > 150k employees before you make sweeping statements like that. | |
| ▲ | tdrz 5 days ago | parent | prev | next [-] | | Sometimes it is the management that doesn't understand anything. From their perspective, security doesn't improve the bottom line. I worked for an SME that dealt with some sensitive customer data. I mentioned to the CEO that we should invest some time in improving our security. What I got back was: "What's the big deal? If anyone wants to look, they can just look..." | |
| ▲ | plst 5 days ago | parent | prev | next [-] | | Looking at the number of already discovered vulnerabilities in popular applications, I would say it's actually impossible to build secure systems right now. Even companies that are trying are failing.
IMO it's still way too easy to introduce a vulnerability and then miss it in both review and pentests.
We need big changes in all parts of the software building and maintenance process. Probably no one will like that, because we are still in the "move fast and break things" age of software development. | |
| ▲ | sublinear 5 days ago | parent | prev | next [-] | | This is true, but what's even more interesting is all the things that had to fail long before you had a shop full of monkeys. | |
| ▲ | bloqs 5 days ago | parent | prev | next [-] | | I used to agree with you, but I feel it's naive. Incompetence is always guaranteed. | |
| ▲ | Buttons840 5 days ago | parent | prev [-] | | You're saying that creating secure systems is easy. I'm not sure which is worse: 1) Creating secure systems is hard, and we often fail at it. 2) Creating secure systems is easy, and we often fail at it. I don't know which is worse, but I know for sure we often fail at it. |
|
|
| ▲ | sugarpimpdorsey 5 days ago | parent | prev [-] |
| Do you think we should have strong legal protections for people who go around your neighborhood trying unlocked car doors and opening front doors (with a backpack full of burglary tools) and when confronted claim they're uh doing it for your security? |
| |
| ▲ | xboxnolifes 5 days ago | parent | next [-] | | The great thing about analogies is that they're just analogies. We can have different laws for different things. Cybersecurity vs physical security. | | |
| ▲ | sugarpimpdorsey 5 days ago | parent [-] | | Hey, your front door was unlocked, where is my bug bounty? Some people still live in places where you can leave your doors unlocked and not worry. Leave it to the tech industry to bring Internet of Shit locks to your doorstep. Would you be upset if, in the course of their unsolicited work, these white/grey hats found your wife's nudes in the digital equivalent of kicking over a rock? Full legal protection, of course. Never mind if they kept a copy for themselves for later use; they promised to delete them <wink>. | |
| ▲ | speff 5 days ago | parent | next [-] | | If everyone in the world is able to check if my door is locked and enter if not, yes I will give a bounty to someone who politely tells me that it's unlocked. Cybersecurity vulns are in a different class of exploitability from physical vulns. | |
| ▲ | user_7832 5 days ago | parent | prev | next [-] | | > Hey your front door was unlocked where is my bug bounty? If you own a property where a million people live, that might not be a bad idea at all. | |
| ▲ | 5 days ago | parent | prev [-] | | [deleted] |
|
| |
| ▲ | Buttons840 5 days ago | parent | prev | next [-] | | That's a failed analogy I won't entertain. You're trying to say companies should have sole responsibility over their systems. I say, let them have sole legal and financial liability as well then. | |
| ▲ | tempnew 4 days ago | parent | prev | next [-] | | We do? People can go into any neighborhood they want. They can't break laws, but the law allows them to walk around and look for open windows, knock on front doors, take photos, scan WiFi BSSIDs, note cars and license plate info, etc… The crime here is the tech. The companies aren't to blame. Programmers and tech companies are. If there were no internet or "tech industry" we'd all be so much better off it's painful to even contemplate. | |
| ▲ | Sytten 5 days ago | parent | prev [-] | | That comment came straight from 2001. Seriously, the world has moved on from hackers == bad, but the legislation has not, and it is time it changed. | |
| ▲ | sugarpimpdorsey 5 days ago | parent [-] | | Yes, the world has changed, infosec has morphed from a niche industry with few experts, to a full-on grift where anyone that jiggles your door handle feels they are owed something in return. "Get your degree in cybersecurity" has become the 2025 equivalent of TV/VCR repair. |
|
|