abhas9 3 days ago

You're right that FIM assumes the possibility of compromise, but that's exactly the point - it's a detection control, not a prevention control. Prevention (read-only mounts, immutable bits, restrictive permissions, etc.) is necessary but not sufficient. In practice, attackers often find ways around those measures - for example, through misconfigured deployments, command injection, supply chain attacks, or overly broad privileges.

File Integrity Monitoring gives you a way to prove whether critical code or configuration has been changed after deployment. That’s valuable not only for security investigations but also for compliance.
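
To make that concrete, here's a rough sketch of the core mechanism (illustrative only, not kekkai's actual code): hash the critical files at deploy time, re-hash them later, and flag any difference. The paths and the baseline hash below are placeholders.

```go
// Minimal FIM sketch: compare current SHA-256 hashes against a deploy-time baseline.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// hashFile returns the hex-encoded SHA-256 of a file's contents.
func hashFile(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	// Baseline captured right after deployment (path and hash are placeholders).
	baseline := map[string]string{
		"/srv/app/config/app.conf": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
	}
	for path, want := range baseline {
		got, err := hashFile(path)
		if err != nil {
			fmt.Printf("ERROR %s: %v\n", path, err)
			continue
		}
		if got != want {
			fmt.Printf("MODIFIED %s\n", path) // alert: file changed after deploy
		}
	}
}
```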

For example, PCI DSS (Payment Card Industry Data Security Standard) explicitly requires this. Requirement 11.5.2 states:

"Deploy a change-detection mechanism (for example, file-integrity monitoring tools) to alert personnel to unauthorized modification of critical content files, configuration files, or system binaries."

Sure, a "sufficiently advanced" attacker could try to tamper with the monitoring tool, but (1) defense in depth is about making that harder, and (2) good implementations isolate the baseline and reports (e.g. write-only to S3, read-only on app servers), which raises the bar considerably.
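
For illustration, shipping reports off-host could look something like this sketch (bucket name, key, and report format are made up; the "write-only" property comes from an IAM policy that grants the app server's role s3:PutObject but not GetObject or DeleteObject):

```go
// Hedged sketch: push FIM evidence to object storage the app server cannot read back or alter.
package main

import (
	"bytes"
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	// Hypothetical report payload from a verification run.
	report := []byte(`{"host":"app-01","modified":["/srv/app/config/app.conf"]}`)

	// The role running this only has s3:PutObject on the bucket, so a
	// compromised app server can add evidence but not rewrite history.
	_, err = client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: aws.String("fim-evidence-bucket"),          // hypothetical bucket
		Key:    aws.String("reports/app-01/latest.json"),   // hypothetical key
		Body:   bytes.NewReader(report),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```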

teraflop 3 days ago | parent | next [-]

I guess I can begrudgingly accept that this is an example of "defense in depth" but it doesn't seem like a very good one given how easily it can be bypassed. Like, you could equally well add "depth" by taking every password prompt and making it prompt for two different passwords, but of course that doesn't add any real security.

> for example, through misconfigured deployments, command injection, [...] or overly broad privileges.

Seems to me like it would be more useful to build something into your deployment process that verifies that permissions are set correctly.
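
Something like this, say (the uid and paths are made up, and it's Linux-specific because of syscall.Stat_t): a post-deploy step that fails if the app's runtime user owns, or can write to, the code it serves.

```go
// Sketch of a deploy-time permission check: fail if the app user owns or can write the deployed files.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	const appUID = 1001 // uid the application runs as (assumed)
	paths := []string{"/srv/app/main", "/srv/app/config/app.conf"}

	for _, p := range paths {
		info, err := os.Stat(p)
		if err != nil {
			fmt.Printf("ERROR %s: %v\n", p, err)
			continue
		}
		st := info.Sys().(*syscall.Stat_t) // Linux-specific ownership info
		if st.Uid == appUID {
			fmt.Printf("FAIL %s is owned by the app user\n", p)
		}
		if info.Mode().Perm()&0o022 != 0 {
			fmt.Printf("FAIL %s is group/world writable\n", p)
		}
	}
}
```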

I don't really buy that `mount -o ro` is inherently more prone to being misconfigured than `kekkai verify` or whatever.

> supply chain attacks

This wouldn't actually do anything to stop or detect supply chain attacks, right? Even if one of your dependencies is malicious, you're not going to be able to spot that by checking a hash against a deployment that was built with the same malicious code.

> good implementations isolate the baseline and reports (e.g. write-only to S3, read-only on app servers), which raises the bar considerably.

I don't see how that raises the bar at all. The weakness is that it's easy for an attacker to bypass the verifier on the app server itself. Making the hashes read-only in whatever place they're stored isn't a barrier to that.

> For example, PCI DSS (Payment Card Industry Data Security Standard) explicitly requires this.

This seems like the best actual reason for this software to exist. But if the point is just to check a compliance box, then I think it would make sense to point that out prominently in the README, so that people who actually have a need for it will know that it meets their needs. Similar to how FIPS-compliant crypto exists to check a box but everyone knows it isn't inherently any more secure.

catatsuy 2 days ago | parent [-]

You’re right that this doesn’t prevent compromise—it’s a detection control, not prevention. Things like read-only mounts or immutable bits are great, but in practice issues like command injection or misconfigured deployments still happen. FIM helps you know when files were changed and provides evidence for investigation or compliance.

viraptor 3 days ago | parent | prev [-]

> In practice, attackers often find ways around those measures

I really don't see this as a good explanation. You can say that about any security measure, but we can't keep slapping more layers that check the previous layer. At some point that action itself will cause issues.

> for example, through misconfigured deployments, command injection, supply chain attacks, or overly broad privileges.

Those don't apply if the file owner is not the app runner or the filesystem is read-only. And if you can still change the file in that setup, you can also disable the check itself. Same goes for misconfiguration and command injection.

> For example, PCI DSS

Ah, BS processes. Just say it's about that up front.

chucky_z 3 days ago | parent | next [-]

I've used FIM in the past to catch a CEO modifying files in real-time at a small business so I could ping him and ask him to kindly stop. It's not just about BS _processes_. :D

viraptor 3 days ago | parent [-]

That means the CEO has access to make those changes. It's technically easier to remove that access than to insert FIM into the deployment process (and doing so will stop other unauthorised employees too). I mean, you already need a working deployment pipeline for FIM anyway, so why not enforce that?

chucky_z 2 days ago | parent [-]

The CEO would've found it very easy to remove the blocker in that case (me). This is the life of small tech businesses. Also, they were modifying configuration files (php-fpm configurations iirc) and not code.

FIM is very useful for catching things like folks mucking about with users/groups because you typically watch things like /etc/shadow and /etc/passwd, or new directories created under /home, or contents of /var/spool/mail to find out if you're suddenly spamming everyone.
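
As a rough illustration (real FIM tools typically poll and hash on a schedule rather than rely purely on inotify; the watch list below is just the paths mentioned above):

```go
// Sketch: watch account and mail paths for unexpected changes using fsnotify.
package main

import (
	"log"
	"strings"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Watch the containing directories; filter events down to the files we care about.
	for _, dir := range []string{"/etc", "/home", "/var/spool/mail"} {
		if err := watcher.Add(dir); err != nil {
			log.Printf("cannot watch %s: %v", dir, err)
		}
	}

	interesting := func(name string) bool {
		return name == "/etc/passwd" || name == "/etc/shadow" ||
			strings.HasPrefix(name, "/home/") ||
			strings.HasPrefix(name, "/var/spool/mail/")
	}

	for {
		select {
		case ev := <-watcher.Events:
			if interesting(ev.Name) {
				log.Printf("change detected: %s (%s)", ev.Name, ev.Op)
			}
		case err := <-watcher.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}
```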

catatsuy 2 days ago | parent [-]

That’s a great real-world story. Exactly the kind of unexpected modification FIM can help surface—not only security incidents, but also operational surprises.

catatsuy 2 days ago | parent | prev [-]

Compliance is definitely one use case, but not the only one. It’s also useful for catching unexpected local changes in real-world operations. The goal is to provide a lightweight FIM that can be added to existing apps without too much friction.