▲ | msm_ a day ago | parent | next [-]

> EDR/AV is basically unnecessary, when you only mount things either writable or executable

Sounds good, except:

* Scripting languages exist. The situation is even worse on Linux than on Windows (because of the sysadmin focus). You need at least /bin/sh installed and runnable on any POSIX system. In practice bash, python, perl and many more are also always available.

* Exploits exist. Just opening a PDF file may execute arbitrary code on a machine. There is no way to avoid that by just configuring your system, and it will happen sooner or later, especially if nation states are involved.

The idea that your systems are somehow unhackable because you... mount everything W^X is... not based in reality. Of course it's a great idea, but in practice you need defense in depth, and you need a way to Detect and Respond to inevitable Endpoint breaches. I don't love EDR/AVs, but they mitigate real attacks happening in the real world.

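(Editor's sketch, not part of the original comment: a minimal illustration of the first bullet. It assumes a tmpfs mounted noexec at /mnt/scratch; the paths are hypothetical.)

    # Mount a scratch filesystem with "noexec" (plus nodev/nosuid for good measure).
    sudo mkdir -p /mnt/scratch
    sudo mount -t tmpfs -o noexec,nodev,nosuid tmpfs /mnt/scratch

    # A native binary on that mount is blocked, as intended:
    cp /bin/ls /mnt/scratch/ls
    /mnt/scratch/ls                     # fails: "Permission denied"

    # ...but an interpreter that itself lives on an exec-able mount happily runs
    # a script stored on the noexec mount, because the kernel only enforces the
    # mount flag on the program being exec'd (the interpreter), not on files it reads:
    printf 'print("hello from a noexec mount")\n' > /mnt/scratch/x.py
    python3 /mnt/scratch/x.py           # runs fine
    printf 'echo same for shell\n' > /mnt/scratch/x.sh
    sh /mnt/scratch/x.sh                # runs fine
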
▲ | mapontosevenths a day ago | parent | prev | next [-]

> the primary problem was accessing it with a Windows mindset.

The early Unix systems you're talking about were mainframe based. Modern client-server or p2p apps need an entirely different mindset and a different set of tools that Linux just didn't have the last time I looked.

When they audit the company for SOX, PCI-DSS, etc., we can't just shrug and say "Nah, we decided we don't need that stuff." That's actually a good thing, though, because if it were optional, well-meaning folks like you just wouldn't bother and the company would wind up on the evening news.

▲ | 1718627440 a day ago | parent [-]

> When they audit the company for SOX, PCI-DSS,

Maybe I am missing something, but that seems orthogonal to ensuring host integrity? I didn't argue against logging access and making things auditable; by all means do that. I argued against working against the OS.

It is not like integrity-protection software doesn't exist for Linux (e.g. Tripwire), it is just different from Windows, since on Windows you have a system where the default way is to let the user control the software and install random things, and you need to patch that ability away first. On Linux, software installation is typically controlled by the admin and tracked in a single package database (which makes it less suitable for home users), but this is exactly what you want on an admin-controlled system. Sure, computing paradigms have changed, but it is still a good idea to use OS isolation, such as not running every program with the user's full rights.

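(Editor's sketch of what that package database buys you for integrity checking; which command applies depends on the distro.)

    # Debian/Ubuntu: compare files on disk against the checksums recorded
    # in the dpkg database. No output means nothing has drifted.
    dpkg --verify

    # RHEL/Fedora: the same idea against the rpm database; a "5" in a result
    # line marks a digest mismatch.
    rpm -Va

    # Tools like Tripwire or AIDE build on the same principle with their
    # own baseline database.
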
▲ | mmooss 18 hours ago | parent | next [-]

> on Windows you have a system where the default way is to let the user control the software and install random things, and you need to patch that ability away first.

That's certainly not the default in a managed corporate environment. Even for home users, Microsoft restricts what you can install more and more. And restrictions are not implemented via patch, but via management capabilities native to the OS, accessed via checkboxes in Group Policy.

▲ | mapontosevenths a day ago | parent | prev [-]

I just mean to say that while you absolutely should work to configure the OS to a reasonable baseline of security, you also still need a real EDR product on top of it. Even if security were "solved" in Linux (it's not), it would still often be illegal not to have an EDR, and that's probably a good thing.

▲ | 1718627440 a day ago | parent [-]

> you also still need a real EDR product on top of it.

Well, that's my point. You don't need third-party software messing with the OS internals when the same thing can be provided by the OS directly. The real EDR product is the OS.

▲ | GoblinSlayer a day ago | parent | prev | next [-]

> And you don't want the users to start random software

    python ~/my.py
    wget -qO- <url> | bash

▲ | 1718627440 a day ago | parent [-]

I guess you wouldn't install wget in that installation, and you would patch programming languages to follow the executable bit or remove them as well. Also, you can't make it physically impossible for employees to e.g. screenshot things and take them home. You can forbid it and try to enforce it, but some amount of trust is needed. Willful action needs to be taken for what it is: a deliberate action by that user. If that user is allowed to access that data, then I don't see what is wrong with them doing it in an automated way.

▲ | mapontosevenths a day ago | parent | prev [-]

> EDR/AV is basically unnecessary,

No, it's not and never will be. Even if it were technically unnecessary (in some hypothetical future where privilege escalation became impossible?), legal, compliance, and insurance requirements would still be there.

▲ | 1718627440 a day ago | parent [-]

The problem is that EDR is basically a rootkit; by using it you open up a huge attack surface instead of being able to keep things e.g. immutable. That tradeoff only makes sense when you don't trust and control the OS itself. This is more of a problem with proprietary OSes like Windows. Otherwise you would rather integrate this into the OS itself.

▲ | mapontosevenths a day ago | parent [-]

> That tradeoff only makes sense when you don't trust and control the OS itself.

That's totally accurate, but you're missing the fact that we fundamentally don't (and can never) trust the OS or any other part of a general-purpose computer. In general-purpose computing you have a version of Descartes' brain-in-a-vat problem (or maybe Plato's allegory of the cave, if you want to go even further back).

https://iep.utm.edu/brain-in-a-vat-argument/

To summarize: we can't trust the inputs even if the OS is trusted; even if the OS is trusted we can't trust the compiler; even if we trust the compiler we can't trust the firmware; even if we trust the firmware we can't trust the chips it runs on; even if we trust those chips we can't trust the supply chain; and so on. "Trust" is fundamentally unsolvable for any Turing machine, because all trust does is move the issue further down the supply chain.

I know this all sounds a bit hypothetical, but it's not. I can show you a real-world example of every one of those things having been compromised in the past. When there is money or lives at stake people will find a way, and both things are definitely at stake here.

So what we have to do is trust, but verify, or at the very least log everything that happens, and that's largely what those EDR products exist to do. Maybe we can't stop every attack, even in theory, but we take a crack at it, and while we're at it we can log every attack to ensure that we can at least catch it later.

There just isn't any version of this world in which general-purpose computers don't require monitoring, logging, and exploit prevention.

▲ | 1718627440 a day ago | parent [-]

Sure, that is why you trust black-box software from some random company, running as a rootkit, whose concrete version you do not even control because it is remotely updated by them.

If you think the hardware works against you, then you are screwed.

▲ | mapontosevenths a day ago | parent [-]

> Sure, that is why you trust black-box software from some random company, running as a rootkit, whose concrete version you do not even control because it is remotely updated by them.

It doesn't have to be "a random company". Microsoft, for example, now ships EDR as part of the operating system. Many companies prefer other vendors for their own reasons. Sometimes one concern is the exact issue you're describing: by using another vendor outside of MS they can layer the security rather than putting all their eggs in a Microsoft-designed basket. We sometimes call that a "security onion" in cyber.

I have no idea what the Linux version of that would even look like, though. I imagine you'd just choose one of the many 3rd-party EDRs from "random companies." It's another reason I asked the original question about how sysadmins cope with Linux these days. MS has an entire suite of products designed to meet these security, regulatory, and compliance problems. Linux has... file permissions, I guess?

▲ | 1718627440 a day ago | parent [-]

If you are thinking of running some EDR software in kernel mode, then my point is indeed: don't do that. That just sounds like less security. Use the OS and run the reporting in userspace.

If you want integrity, first make everything executable immutable; the system is explicitly designed to work that way. That's what the FHS exists for. Then use something like Tripwire to monitor it. To log access, use auditd (https://www.baeldung.com/linux/auditd-monitor-file-access). What else do you need to do?

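(Editor's sketch of the setup described above: executables on a read-only mount plus auditd watch rules. The mount device, paths, and rule keys are illustrative.)

    # fstab entry keeping /usr read-only (remount rw only for upgrades):
    #   /dev/mapper/vg-usr  /usr  ext4  ro,nodev  0  2
    sudo mount -o remount,rw /usr   # before an upgrade
    sudo mount -o remount,ro /usr   # and back afterwards

    # auditd watch rules: log writes and attribute changes to binaries and config.
    sudo auditctl -w /usr/bin -p wa -k bin-change
    sudo auditctl -w /etc -p wa -k etc-change

    # Review what happened and who did it:
    sudo ausearch -k bin-change --interpret
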
▲ | mapontosevenths a day ago | parent [-]

> make everything executable immutable

How, though? Presumably you mean we should trust the OS to do that?

Edit: to be clear, auditd has the same issue. We're trusting it to audit itself. However, we know that we can't trust it, because rootkits are a thing. So now what?... I guess we need a tool that's designed to be tamper-proof to monitor it. We do that by introducing external validation: a second, external system can vouch that hashes are what we expect, etc.

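(Editor's sketch of that external-validation idea, under loudly stated assumptions: "monitored" is a hypothetical hostname, known-good.sha256 is a hypothetical baseline captured at install time, and SSH access from the verifier is already set up.)

    # On the monitored host: hash everything on the executable mounts.
    find /usr/bin /usr/sbin -type f -exec sha256sum {} + | sort -k 2 > /tmp/host.sha256

    # On a separate verifier machine: fetch the list and diff it against the baseline.
    scp monitored:/tmp/host.sha256 .
    diff known-good.sha256 host.sha256 && echo "no drift detected"

    # Caveat, and the point being made above: a compromised kernel can lie to
    # sha256sum, so the strongest form of this runs from a trusted boot image
    # rather than from the live system.
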
▲ | 1718627440 a day ago | parent | next [-]

So you have an OS of which you have the source, which is binary reproducible, and which you can compile yourself if you want to. You want to make that more trustworthy by injecting a random blob that you cannot inspect and that updates itself over the network, controlled by a third party. I do not understand your threat model.

* If you think your OS doesn't give you the correct answer to a read, then you need to run a second OS side by side and compare.

* If you think your OS is touching data you haven't told it to, you need a layer running below it so you can check, e.g. virtualization, the BIOS, or hardware.

* If you think your OS is making network calls you haven't told it to, then you need to connect it via an intermediate host that acts as a firewall.

I don't see what injecting a random blob into the OS gives you other than box ticking. Now you need to trust the OS and that other thing. When your attacker gains control of your OS (so actually below root), then you are screwed anyway. Only an independent layer will help you in that case. Having more code in your OS won't help you at all; it will just add more attack surface.

▲ | mapontosevenths a day ago | parent [-]

> If you think your OS doesn't give you the correct answer to a read, then you need to run a second OS side by side and compare.

I mean, that's mostly right. If the OS is already rootkit-infected, then installing an EDR won't fix it, as it mostly won't be able to tell that the answers it gets from the OS are incorrect. That's why you'll sometimes see bootable EDR tools used on machines that are suspected of already being compromised. It's a second OS to verify the first, exactly as you describe.

In practice that's not typically required, because the EDR is usually loaded shortly after the OS is installed, and EDRs are typically built with anti-tamper measures now. So we can mostly assume that the EDR will be running when the malware is loaded. That allows us to do things like kernel-level monitoring for driver loads, module loads, and security-relevant events (e.g., LSM/eBPF hooks on Linux, kernel callbacks/ETW on Windows). By then layering on some behavioral analysis we can typically prevent the rootkit from installing at all, or at the very least get some logs and alerts sent before it can disable the EDR.

It's also one reason these things don't just run in userland as you suggested above. They need kernel-mode access to detect kernel-mode malware, and they need low-level IO access to independently verify that the OS is doing what it says it is when we call an API.

Your suggestion reminds me of the old 'chkrootkit' command on Linux. It's a great tool, if you don't already have a rootkit; in that case it just doesn't work. A modern EDR would (ideally) have prevented the rootkit from installing an API hook in the first place.

> Only an independent layer will help you in that case.

Sometimes it's more about detection and sometimes it's more about prevention, but both are valuable. I would one day love to see a REAL solution, but for now I think EDRs are the least-worst answer we have. A better answer would be a modern OS built to avoid the weaknesses that make these bolt-on afterthought solutions necessary, but neither Windows nor Linux comes anywhere close to being that. They both have too much history and have to preserve compatibility.

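(Editor's sketch of the kind of kernel-level exec/file telemetry mentioned above, using eBPF via bpftrace. It assumes bpftrace is installed and run as root; it is an illustration of the monitoring idea, not a description of any particular EDR product.)

    # Log every process execution system-wide; exec telemetry is the bread
    # and butter of EDR-style behavioral monitoring. Runs until interrupted.
    sudo bpftrace -e 'tracepoint:syscalls:sys_enter_execve
      { printf("%d %s exec %s\n", pid, comm, str(args->filename)); }'

    # Log which files processes open, another staple signal:
    sudo bpftrace -e 'tracepoint:syscalls:sys_enter_openat
      { printf("%d %s open %s\n", pid, comm, str(args->filename)); }'
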
▲ | 1718627440 15 hours ago | parent [-]

> A better answer would be a modern OS built to avoid the weaknesses that make these bolt-on afterthought solutions necessary

That's basically my point. Plugging EDR into an OS gets you a different OS, one containing a part of which you only have a binary blob and which is changed by a third party over the network. This means you need to be able to change parts of the OS over the network, which opens new attack surfaces, and you now also have the possibility of incompatibilities between the core OS and your blob, since they are developed by different vendors.

When you have software of which you have the source, whose version you control, and whose vendor you trust, and you run it in the kernel and still want to call that EDR, that is fine, but that doesn't seem to be what EDR companies like CrowdStrike are doing.

If all you do is use kernel hooks, then you are still trusting the kernel. If your low-level IO still queries things in the kernel, then you still trust the kernel. If low-level IO means below the kernel, then you are not modifying the OS; your "EDR" is the OS, and you run another untrusted OS on top.

▲ | 1718627440 a day ago | parent | prev [-]

>> make everything executable immutable
> How, though? Presumably you mean we should trust the OS to do that?

If you don't trust the layer controlling the hardware (a.k.a. the OS), then you need to do that in hardware.