mapontosevenths a day ago

It's been a very long time since I was a sysadmin, but I'm curious: what is managing a fleet of Linux desktops like today? Has it vastly improved?

When I last tried in a small pilot program, it was incredibly primitive. Linux desktops were janky and manual compared to Active Directory and Group Policy, and an alternative to Intune/AAD didn't even seem to exist. Heck, even things like WSUS and WDS didn't seem to have an open version, or only had versions that required expensive expert-level SMEs to perform constant fiddling. Meanwhile the Windows tools could be managed by 20-year-old admins with basic certifications.

Also, GRC and security seemed to be impossible back then. There was an utter lack of decent DLP tools, proper legal hold was difficult, EDR/AV solutions were primitive and the options were limited, etc.

Back then it was like nobody who had ever actually been a sysadmin had ever taken an honest crack at Linux and all the hype was coming from home users who had no idea what herding boxen was actually like.

Nextgrid a day ago | parent | next [-]

This is my concern with all those "success" stories about Linux as an enterprise desktop OS. Run it for 10 years and show me the actual cost savings/improved productivity.

Microsoft is trash and is getting worse day by day, but at the very least it's the same trash everyone has to deal with, so people mostly got used to the smell, and you can get economies of scale in tools used to deal with said smell. MS is trash because of incompetence.

Linux is dozens of different flavors of trash, so you don't even get economies of scale dealing with it. It's trash because of ideology - the people involved often reject the functionality you mentioned for ideological reasons, and even those who do accept it won't agree on the implementation, meaning you now have a dozen different flavors, and they will take up arms if someone tries to unify things (just look at the reaction to systemd).

Linux works well for careers where shoveling trash is already part of your work, in which case all the effort doubles as training for the job and experience makes this a non-issue. But for non-IT careers where the computer is just a tool that is expected to work properly, it's nowhere near there, and it will never get there because everyone's instead arguing over the definition of "there" and which mode of transportation to use to get there.

morshu9001 a day ago | parent | next [-]

Google gave its employees a Linux laptop option for well more than 10 years, but in the past few years they started steering everyone away from it, before formally announcing they want to scale it back.

This is despite them being a tech company, and despite them having already invested in their single Linux flavor (gLinux). Wayland migration was also a pain.

Lapel2742 a day ago | parent | next [-]

I'm not an expert, and that still might be the case, but you have to understand that for many, Microsoft, as an American company, is simply no longer an option for critical infrastructure. It's a matter of trust.

pjmlp a day ago | parent | prev [-]

At most companies I know of that allow employees to use Linux laptops, IT washes its hands of any kind of support.

While anyone with a macOS or Windows laptop can open support tickets, the hardcore Linux users get invited to join internal forums to help themselves.

Thus one naturally needs to be really into it, especially when dealing with software that doesn't even exist for the platform.

So we get our IT-supported systems and run GNU/Linux either on servers or in VMs.

I sense that only if changes are imposed at the government level would companies change their stance on this.

morshu9001 4 hours ago | parent [-]

That can work, but it creates other kinds of problems in some companies. The point of the IT dept is to avoid spending engineer time on fixing random laptop issues, and also to deal with the monitoring software that has to support every OS the employees use.

1718627440 a day ago | parent | prev | next [-]

I think this comes primarily from trying to add a separate management tool on top, instead of leveraging the OS structure itself. There is a reason why most directories are specified to be read-only. Also, writable XOR persistent is mostly true. The only things required to be writable are /tmp, /var and /home. /tmp is wiped at least on every boot or is even just a ramdisk. /var can be cached or reset to predefined settings on boot. /home needs to be managed, that is true. But you wouldn't want every user's directory on every host anyway; instead you want to populate them on login. That is typically done by libpam.
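
For illustration, a minimal sketch of that PAM approach to creating home directories on first login; the exact file and options are distro-dependent (this assumes a Debian-style /etc/pam.d layout):

    # /etc/pam.d/common-session (Debian-style path; adjust per distro)
    # create the home directory from /etc/skel the first time a user logs in
    session required pam_mkhomedir.so skel=/etc/skel umask=0077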

/usr is expected to be shared among hosts; host-specific stuff goes into /usr/local for a reason, and as a sysadmin you can decide to simply not have host-specific software.

EDR/AV is basically unnecessary when you only mount things as either writable or executable, never both. And you don't want the users to start random software or mount random USB sticks anyway.
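
As a sketch of what that mount policy could look like in /etc/fstab (device names are placeholders, and the exact option set needs tuning per distro):

    # executable paths are read-only, writable paths are noexec/nosuid/nodev
    /dev/sda2  /usr   ext4   ro,nodev                 0 2
    /dev/sda3  /var   ext4   rw,noexec,nosuid,nodev   0 2
    /dev/sda4  /home  ext4   rw,noexec,nosuid,nodev   0 2
    tmpfs      /tmp   tmpfs  rw,noexec,nosuid,nodev   0 0
    # note: noexec on /var can break package maintainer scripts; adjust as needed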

> Back then it was like nobody who had ever actually been a sysadmin had ever taken an honest crack at Linux and all the hype was coming from home users who had no idea what herding boxen was actually like.

Unix has over 50 years of history of being primarily managed by sysadmins instead of home users. While Linux is not Unix, it has inherited a lot. The whole system is basically designed to run a bunch of admin-configured software and is actually less suitable for home users. I would say the primary problem was accessing it with a Windows mindset.

msm_ a day ago | parent | next [-]

>EDR/AV is basically unnecessary when you only mount things as either writable or executable

Sounds good, except:

* Scripting languages exist. The situation is even worse on Linux than on Windows (because of the sysadmin focus). You need at least /bin/sh installed and runnable on any POSIX system. In practice bash, Python, Perl, and many more are also always available.

* Exploits exist. Just opening a PDF file may execute arbitrary code on a machine. There is no way to avoid that by just configuring your system. And it will happen sooner or later, especially if nation-states are involved.

The idea that your systems are somehow unhackable because you... mount everything W^X is... not based in reality. Of course it's a great idea, but in practice you need defense in depth, and you need to have a way to Detect and Respond to inevitable Endpoint breaches. I don't love EDR/AVs, but they mitigate real attacks happening in the real world.

mapontosevenths a day ago | parent | prev | next [-]

> the primary problem was accessing it with a Windows mindset.

The early Unix systems you're talking about were mainframe-based. Modern client-server or p2p apps need an entirely different mindset and a different set of tools that Linux just didn't have the last time I looked.

When they audit the company for SOX, PCI-DSS, etc., we can't just shrug and say "Nah, we decided we don't need that stuff." That's actually a good thing though, because if it were optional, well-meaning folks like you just wouldn't bother and the company would wind up on the evening news.

1718627440 a day ago | parent [-]

> When they audit the company for SOX, PCI-DSS,

Maybe I am missing something, but that seems orthogonal to ensuring host integrity? I didn't argue against logging access and making things auditable; by all means do that. I argued against working against the OS.

It is not like integrity protection software doesn't exist for Linux (e.g. Tripwire); it is just different from Windows, since on Windows you have a system where the default way is to let the user control the software and install random things, and you need to patch that ability away first. On Linux, software installation is typically controlled by the admin and done with a single package database (which makes it less suitable for home users), but this is exactly what you want on an admin-controlled system.
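
To illustrate what that single package database buys you, both major package managers can verify installed files against their recorded metadata (standard dpkg/rpm options, shown here only as an example of the model):

    # Debian-family: compare installed files against dpkg's recorded checksums
    dpkg --verify
    # RPM-family: verify size, digest, permissions, owner, etc. of all packages
    rpm -Va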

Sure, computing paradigms have changed, but it is still a good idea to use OS isolation, like not running every program with the user's full rights.

mmooss 18 hours ago | parent | next [-]

> on Windows you have a system where the default way is to let the user control the software and install random things, and you need to patch that ability away first.

That's certainly not the default in a managed corporate environment. Even for home users, Microsoft restricts what you can install more and more.

And restrictions are not implemented via patch, but via management capabilities native to the OS, accessed via checkboxes in Group Policy.

mapontosevenths a day ago | parent | prev [-]

I just mean to say that while you absolutely should work to configure the OS to a reasonable baseline of security, you also still need a real EDR product on top of it.

Even if security were "solved" in Linux (it's not), it would still often be illegal not to have an EDR and that's probably a good thing.

1718627440 a day ago | parent [-]

> you also still need a real EDR product on top of it.

Well, that's my point. You don't need third-party software messing with the OS internals when the same thing can be provided by the OS directly. The real EDR product is the OS.

GoblinSlayer a day ago | parent | prev | next [-]

> And you don't want the users to start random software

python ~/my.py

wget -qO- <url> | bash

1718627440 a day ago | parent [-]

I guess you wouldn't install wget in that installation, and you would patch the programming languages to honor the executable bit, or just remove them.

Also, you can't make it physically impossible for employees to, e.g., screenshot things and take them home. You can forbid it and try to enforce it, but some amount of trust is needed.

A willing action needs to be taken for what it is: a deliberate action by that user. If that user is allowed to access that data, then I don't see what is wrong with him doing it in an automated way.

mapontosevenths a day ago | parent | prev [-]

> EDR/AV is basically unnecessary

No, it's not and never will be.

Even if it were technically unnecessary (in some hypothetical future where privilege escalation became impossible?), legal, compliance, and insurance requirements would still be there.

1718627440 a day ago | parent [-]

The problem is that EDR is basically a rootkit; by using it you add a huge attack surface instead of being able to keep things, e.g., immutable. That tradeoff only makes sense when you don't trust and control the OS itself, which is more of a problem with proprietary OSes like Windows. Otherwise you would rather integrate this into the OS itself.

mapontosevenths a day ago | parent [-]

> That tradeoff only makes sense when you don't trust and control the OS itself.

That's totally accurate, but you're missing the fact that we fundamentally don't (and can never) trust the OS or any other part of a general-purpose computer.

In general-purpose computing you have a version of Descartes' brain-in-a-vat problem (or maybe Plato's allegory of the cave if you want to go even further back).

https://iep.utm.edu/brain-in-a-vat-argument/

To summarize: We can't trust the inputs even if the OS is trusted, and even if the OS is trusted we can't trust the compiler, and even if we trust the compiler we can't trust the firmware, and even if we trust the firmware we can't trust the chips it runs on, and even if we trust those chips we can't trust the supply chain, etc. "Trust" is fundamentally unsolvable for any Turing machine, because all trust does is move the issue further down the supply chain.

I know this all sounds a bit hypothetical, but it's not. I can show you a real world example of every one of those things having been compromised in the past. When there is money or lives at stake people will find a way, and both things are definitely at stake here.

So what we have to do is trust, but verify, or at the very least log everything that happens, and that's largely what those EDR products exist to do. Maybe we can't stop every attack, even in theory, but we take a crack at it, and while we're at it we can log every attack to ensure that we can at least catch it later.

There just isn't any version of this world in which general purpose computers don't require monitoring, logging, and exploit prevention.

1718627440 a day ago | parent [-]

Sure, that is why you trust black-box software from some random company running as a rootkit, whose exact version you do not even control, because it is remotely updated by them.

If you think the hardware works against you, then you are screwed.

mapontosevenths a day ago | parent [-]

> Sure, that is why you trust black-box software from some random company running as a rootkit, whose exact version you do not even control, because it is remotely updated by them.

It doesn't have to be "a random company". Microsoft, for example, now ships EDR as part of the operating system.

Many companies prefer other vendors for their own reasons. Sometimes one concern is the exact issue you're describing. By using another vendor outside of MS they can layer the security rather than putting all their eggs in a Microsoft-designed basket. We sometimes call that a "security onion" in cyber.

I have no idea what the Linux version of that would even look like though. I imagine you'd just choose one of the many third-party EDRs from "random companies." It's another reason I asked the original question about how sysadmins cope with Linux these days. MS has an entire suite of products designed to meet these security, regulatory, and compliance problems. Linux has... file permissions, I guess?

1718627440 a day ago | parent [-]

If you are thinking of running some EDR software in kernel mode, then my point is indeed: don't do that. That just sounds like less security. Use the OS and run the reporting in userspace.

If you want integrity, first make everything executable immutable; the system is explicitly designed to work that way. That's what the FHS exists for. Then use something like Tripwire to monitor it.
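
For illustration, two common ways to enforce that immutability on a running system (the binary path below is a made-up example):

    # remount a separately mounted /usr read-only after maintenance
    mount -o remount,ro /usr
    # or pin individual files on filesystems that support the immutable attribute
    chattr +i /usr/local/bin/internal-tool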

To log access, use auditd (https://www.baeldung.com/linux/auditd-monitor-file-access).
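
A minimal sketch of persistent audit rules, assuming the usual rules.d layout; the watch paths and key names are just examples:

    # /etc/audit/rules.d/integrity.rules (loaded by augenrules)
    # log any write or attribute change under system binary directories
    -w /usr/bin/ -p wa -k bin-writes
    -w /usr/sbin/ -p wa -k bin-writes
    # log changes to privileged configuration
    -w /etc/sudoers -p wa -k sudoers-change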

What else do you need to do?

mapontosevenths a day ago | parent [-]

> make everything executable immutable

How though? Presumably you mean we should trust the OS to do that?

Edit: to be clear, auditd has the same issue. We're trusting it to audit itself. However, we know that we can't trust it, because rootkits are a thing. So now what?...

I guess we need a tool that's designed to be tamper-proof to monitor it. We do that by introducing an external validation. A second, external system can vouch that hashes are what we expect, etc.

1718627440 a day ago | parent | next [-]

So you have an OS of which you have the source, which is binary-reproducible, and which you can compile yourself if you want to. You want to make that more trustworthy by injecting a random blob that you cannot inspect and that updates itself over the network, controlled by a third party. I do not understand your threat model.

If you think your OS doesn't give you the correct answer to a read, then you need to run a second OS side-by-side and compare. If you think your OS is touching data you haven't told it to, you need to have a layer running below it so you can check, i.e. virtualization, BIOS or hardware. If you think your OS is making network calls you haven't told it to, then you need to connect it via an intermediate host that acts as a firewall.
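
As a sketch of that last point, the intermediate host could enforce an egress allowlist with nftables; the subnet and ports here are hypothetical:

    # /etc/nftables.conf fragment on the firewall host (addresses are made up)
    table inet filter {
      chain forward {
        type filter hook forward priority 0; policy drop;
        ct state established,related accept
        ip saddr 10.0.0.0/24 tcp dport { 443, 853 } accept
      }
    }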

I don't see what injecting a random blob into the OS gives you other than box ticking. Now you need to trust the OS and that other thing.

When your attacker gains control of your OS (so actually below root), then you are screwed anyway. Only having some independent layer will help you in that case. Having more code in your OS won't help you at all; it will just add more attack surface.

mapontosevenths a day ago | parent [-]

> If you think your OS doesn't give you the correct answer to a read, then you need to run a second OS side-by-side and compare.

I mean, that's mostly right. If the OS is already rootkit-infected then installing an EDR won't fix it, as it mostly won't be able to tell that the answers it gets from the OS are incorrect. That's why you'll sometimes see bootable EDR tools used on machines that are suspected of already being compromised. It's a second OS to verify the first, exactly as you describe.

In practice that's not typically required because the EDR is usually loaded shortly after the OS is installed, and they're typically built with anti-tamper measures now. So we can mostly just assume that the EDR will be running when the malware is loaded. That allows us to do things like kernel-level monitoring for driver loads, module loads, and security-relevant events (e.g., LSM/eBPF hooks on Linux, kernel callbacks/ETW on Windows).
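
For a flavor of what the eBPF side of that looks like on Linux, here is a minimal bpftrace one-liner that logs every exec on the box (observation only, no blocking; purely illustrative and not any particular vendor's approach):

    # print the launching process and the binary being executed, system-wide
    bpftrace -e 'tracepoint:syscalls:sys_enter_execve { printf("%s -> %s\n", comm, str(args->filename)); }'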

By then layering on some behavioral analysis we can typically prevent the rootkit from installing at all, or at the very least get some logs and alerts sent before it can disable the EDR. It's also one reason these things don't just run in userland as you suggested above. They need kernel-mode access to detect kernel-mode malware, and they need low-level I/O access to independently verify that the OS is doing what it says it is when we call an API.

Your suggestion reminds me of the old 'chkrootkit' command on Linux. It's a great tool, if you don't already have a rootkit. In that case it just doesn't work. A modern EDR would have prevented the rootkit from installing an API hook in the first place (ideally).

> Only having some independent layer will help you in that case.

Sometimes it's more about detection, and sometimes it's more about prevention, but both are valuable. I would one day love to see a REAL solution, but for now I think EDRs are the least worst answer we have.

A better answer would be a modern OS built to avoid the weaknesses that make these bolt-on afterthought solutions necessary, but neither Windows nor Linux comes anywhere close to being that. They both have too much history and have to preserve compatibility.

1718627440 15 hours ago | parent [-]

> A better answer would be a modern OS built to avoid the weaknesses that make these bolt-on afterthought solutions necessary

That's basically my point. Plugging EDR into an OS gets you a different OS, part of which is only a binary blob that is changed by a third party over the network. This means you need to be able to change parts of the OS over the network, which opens up new attack surfaces, and you now also have the possibility of incompatibilities between the core OS and your blob, since they are developed by different vendors.

When you have software of which you have the source, whose version you control, whose vendor you trust, and you run it in the kernel and still want to call that EDR, that is fine - but that doesn't seem to be what EDR companies like CrowdStrike are doing.

If all you do is use kernel hooks, then you are still trusting the kernel. If your low-level I/O still queries things in the kernel, then you still trust the kernel. If low-level I/O means below the kernel, then you are not modifying the OS; your "EDR" is the OS and you run another untrusted OS on top.

1718627440 a day ago | parent | prev [-]

>> make everything executable immutable

> How though? Presumably you mean we should trust the OS to do that?

If you don't trust the layer controlling the hardware (aka. the OS) then you need to do that in hardware.

Lapel2742 a day ago | parent | prev | next [-]

AFAIK they use Open-Xchange, Univention Corporate Server, and other specialized (maybe customized?) and open solutions for telephony, interoperability, and other tasks.

https://euro-stack.com/blog/2025/3/schleswig-holstein-open-s...

mapontosevenths a day ago | parent [-]

I've never used it. Does this actually replace AD and group policy effectively? Does it manage updates properly? Can it handle compliance tasks?

I've used other things that claimed to in the past, and none came anywhere close in practice. They all turned out to just be LDAP with some NT4-style policies for Windows and very little at all for the Linux clients. It was like traveling back in time to the Windows 2000 era of management.

Lapel2742 18 hours ago | parent [-]

> Does this actually replace AD and group policy effectively?

I do not know. They probably evaluated the solution before they made the decision.

In any case, continuing to use AD seems out of the question. Relying on US-based software in 2025 and beyond is simply not a viable option for any administration that values its sovereignty. The US isn't even hiding its hostility.

einpoklum a day ago | parent | prev | next [-]

I would disagree with you both about the past and the present and what's "janky", but - that's actually beside the point:

LibreOffice works just fine on _Windows_ - and that's what the majority of its users are running.

So, Schleswig-Holstein can switch to Linux, or not switch, or let specific agencies or individuals choose.

finchisko a day ago | parent | prev [-]

I really don’t get why there’s always this group of people who feel the need to constantly manage everything for others—like sysadmins, for example. Sure, there are valid scenarios where management makes sense, like printing or shared drives, but most of the stuff is just over the top. As a developer, I’m sick of all the constant restrictions—broken VPNs, stealth monitoring, and antivirus software that slows everything down. These "security measures" are supposed to help, but they just kill performance and cause frustration. At the end of the day, I just want my system to work smoothly without constant interference.

mapontosevenths a day ago | parent [-]

> I’m sick of all the constant restrictions

I think everyone hates it, but they're often legally required. Even when they aren't legally required, they usually are by insurance companies.

Nobody wants to be on the news the first time Becky in Marketing opens an email attachment she shouldn't.

*EDIT* I left out one of the biggest benefits: dummies & newbs. The world is filled with people who had never used a mouse before they started this job last week, and people who actually NEED the stupid warning stickers on their toasters. If you don't lock down their desktops, your support costs will be astronomical and downtime will be constant. We know this because there was a time before these tools, and it largely sucked for everyone.

Did you know that you could bypass the Windows 98 login screen by just clicking 'Cancel' instead of 'OK' at the login prompt? Nice and simple, right? That stupid button not only wrecked security, it caused tens or hundreds of thousands of hours in lost work, because people forgot their passwords, clicked Cancel, and then would call the help desk wondering why network shares didn't work. It would sometimes take hours to figure out that all they had to do was reset the password and log in properly.