pxeger1 2 days ago

My problem with curl|bash is not that the script might be malicious - the software I'm installing could equally be malicious. It's that it may be written incompetently, or just not with users like me in mind, and so the installation gets done in some broken, brittle, or non-standard way on my system. I'd much rather download a single binary and install it myself in the location I know it belongs in.
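Something like that flow, roughly, with a hypothetical project (URL, file names, and checksum file are all made up):

    # fetch the release tarball and its checksum, verify, then put the binary where I want it
    curl -fsSLO https://example.com/releases/tool-1.2.3-linux-amd64.tar.gz
    curl -fsSLO https://example.com/releases/tool-1.2.3-linux-amd64.tar.gz.sha256
    sha256sum -c tool-1.2.3-linux-amd64.tar.gz.sha256
    tar -xzf tool-1.2.3-linux-amd64.tar.gz
    mkdir -p ~/.local/bin
    install -m 0755 tool ~/.local/bin/tool   # a location I chose, already on my PATH

No surprises, and uninstalling is just deleting that one file.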

jerf 2 days ago | parent | next [-]

I've also seen really wonderfully-written scripts that, if you read them manually, allow you to change where whatever it is is installed, what features it may have, optional integration with Python environments, or other things like that.

I at least skim all the scripts I download this way before I run them. There are all kinds of reasons to, ranging all the way from "is this malicious?" to "does this have options they're not telling me about that I want to use?"

A particular example is that I really want to know if you're setting up something that integrates with my distro's package manager or just yolo'ing it somewhere into my user's file system, and if so, where.
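Roughly the habit, assuming a hypothetical install URL:

    curl -fsSL https://example.com/install.sh -o install.sh
    less install.sh     # skim it end to end
    # quick pass for the things I care about: package manager hooks, writes outside $HOME, sudo
    grep -nE 'sudo|/usr/|/etc/|apt|dnf|rpm|systemctl' install.sh
    bash install.sh     # only if it looked sane

Takes a minute or two and answers most of those questions.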

inetknght 2 days ago | parent | next [-]

> I've also seen really wonderfully-written scripts that

I'll take a script that passes `shellcheck ./script.sh` (or, any other static analysis) first. I don't like fixing other people's bugs in their installation scripts.

After that, it's an extra cherry on top to have everything configurable. Things that aren't configurable go into a container and I can configure as needed from there.
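For reference, the check itself is a one-liner, and there's an official container image if you don't want shellcheck on the host (assuming Docker here):

    shellcheck ./install.sh
    # or, without installing shellcheck locally, via the official image
    docker run --rm -v "$PWD:/mnt" koalaman/shellcheck:stable /mnt/install.sh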

sim7c00 2 days ago | parent | prev | next [-]

Right? Read before you run. If you can't make sense of it all, don't run it. If you can make sense of it all, you're free to refactor it to your own taste :) Usually saves some time. As you say, a lot of them are quite nicely written.

groby_b 2 days ago | parent [-]

> read before you run

Lovely sentiment, not applicable when you actually work on something. You read your compiler/linker, your OS, and all libraries you use? Your windowing system? Your web browser? The myriad utilities you need to get your stuff done? And of course, you've read "Reflections on trusting trust" and disassembled the full output of whatever you compile?

The answer is "you haven't", because most of those are too complex for a single person to actually read and fully comprehend.

So the question becomes: how do you extend trust? What makes a shell script untrustworthy, but the executable you or the script install trustworthy?

spookie 2 days ago | parent | next [-]

Binaries in the Linux world are usually retrieved the "Official Way". You use a distro. Therefore you trust "them" and how they operate their package manager.

This is the "Unofficial Way".

homebrewer a day ago | parent | prev | next [-]

Non-system software, which is what often gets installed with this method, typically does not get root privileges on my systems, or at least is not expected to write anything into directories like /usr.

These scripts are often written by people who only know one OS well (if any), and if that OS is macOS and you're on Linux (or FreeBSD, or whatever), you can expect them to do weird shit like sticking binaries into /usr/bin behind the package manager's back, or adding their own package repositories without asking you (and often not whitelisting just their packages, which allows them to e.g. replace glibc on your system without you noticing), etc.

It's not comparable to simply using the already installed software.

jerf a day ago | parent | prev | next [-]

"What makes a shell script untrustworthy, but the executable you or the script install trustworthy?"

Supply-chain attacks. Linux distros have a long history of being more hardened targets than "a static file on some much, much, much smaller project's random server".

Also, things like Linux packages or snaps or flatpaks are generally somewhat ringfenced by their nature. Here I don't mean for security reasons per se, but just by their nature: I have confidence a flatpak isn't going to start scribbling all over my user directory. A script may make any number of assumptions about what it is OK to do, where things can go, where to put them, what it can install, etc.
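A quick sketch of checking that ringfencing for a given app, by the way (the app ID is hypothetical):

    flatpak info --show-permissions org.example.App                 # what it can actually touch
    flatpak override --user --nofilesystem=home org.example.App     # tighten it further if you like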

"Trust" isn't just about whether something is going to steal my cryptowallet or install a keylogger. It's about whether it breaks my reproducible ops setup, or will stick configuration in the seventeenth place in my system, or assumes other false things about how I want it set up that may cause other problems.

sim7c00 a day ago | parent | prev [-]

Well, "read before you run" was obviously about the shell script, not the gcc sources :'| I don't think that's a fair comparison. But you do make a good point. It's why I write my own OS :D And yes, once that's up, my own toolchain would be next, as an experiment to see what'd be needed to be secure, even forgetting of course the hardware we run on. I wasn't planning to play with horrible acids to see what's in there, though it's possible... :D (It will never be finished in my lifetime, haha, I know...)

AndyMcConachie 2 days ago | parent | prev [-]

100% agree. The question of whether I should install lib-X for language-Y using Y's package management system or the distribution's package management system is unresolved.

Diti 2 days ago | parent [-]

It’s solved by Nix. Whichever package management you choose (nixpkgs or pip or whatever), the derivation should have the same hash in the Nix store.

(Nix isn’t the solution for OP’s problems though – Nix packages are unsigned, so it’s basically backdoor-as-a-service.)

ants_everywhere a day ago | parent [-]

The Nix installer is one of the more shocking curl | bash experiences I've had.

It created users and groups on my system! And the uninstall script didn't clean it up.

mingus88 2 days ago | parent | prev | next [-]

My problem with it is that it encourages unsafe behavior.

How many times will a novice user follow that pattern until some jerk on Discord drops a curl|bash and gets hits?

IRC used to be a battlefield for these kinds of tricks, and now we have legit projects like Homebrew training users that it's normal to raw-dog arbitrary code directly into your environment.

SkiFire13 2 days ago | parent | next [-]

What would you consider a safer behaviour for downloading programs from the internet?

mingus88 2 days ago | parent | next [-]

You are essentially asking what is safer than running arbitrary code from the internet, sight unseen, directly into your shell, and I guess my answer would be: any other standard installation method!

The OS usually has guardrails and logging and audits for what is installed, but this bypasses it all.
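For example, this is the kind of audit trail the package manager leaves behind, sketched here for a Debian-style system (the tool name is hypothetical):

    dpkg -l                              # what is installed, and at what version
    dpkg -S /usr/bin/some-tool           # which package owns a given file
    grep ' install ' /var/log/dpkg.log   # when things were installed

A curl|bash install shows up in none of that.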

When you look at this from an attacker's perspective, it's heaven.

My mom recently got fooled by a scammer that convinced her to install remote access software. This curl pattern is the exact same vector, and it’s nuts to see it become commonplace

SkiFire13 a day ago | parent | next [-]

> You are essentially asking what is safer than running arbitrary code from the internet

No, I'm asking what is a safer method when I want to install some code from the internet.

> The OS usually has guardrails and logging and audits for what is installed but this bypasses it all.

Not everything is packaged or up-to-date in the OS

> My mom recently got fooled by a scammer that convinced her to install remote access software.

Remote access software is packaged in distros too.

thayne a day ago | parent | prev [-]

> My mom recently got fooled by a scammer that convinced her to install remote access software.

But I bet she didn't install it with curl piped to bash. The point isn't that curl|bash is safe, but that it isn't inherently more dangerous than downloading and running a program.

thewebguyd 2 days ago | parent | prev | next [-]

Use your distro's package manager and repos first and foremost. Flatpak is also a viable alternative to distribution, and if enabled, comes along with some level of sandboxing at least.

"Back in the day" we cloned the source code and compiled ourself instead of distributing binaries & install scripts.

But yeah, the problem around curl | bash isn't the delivery method itself, it's the unsafe user behavior that generally comes along with it. It's the *nix equivalent of downloading an untrusted .exe from the net and running it, and there's no technical solution for educating users to be safe.

Safer behavior IMO would be to continue to encourage the use of immutable distros (Fedora Silverblue and others). RO /, user apps (mostly) sandboxed, and if you do need to run anything untrusted, it happens inside a distrobox container.
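A minimal sketch of that last part, assuming distrobox is installed and using a throwaway Fedora box:

    distrobox create --name scratch --image fedora:latest
    distrobox enter scratch     # curl|bash to your heart's content in here
    distrobox rm scratch        # then throw the whole thing away afterwards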

BHSPitMonkey 2 days ago | parent | next [-]

I've installed untold thousands of .deb packages in my lifetime - often "officially" packaged by Debian or Ubuntu, but in many cases also from a software vendor's own apt repository.

Almost every one contains preinst or postinst scripts that are run as root, and yet I can count on zero hands the number of times I've opened one up first to see what it was actually doing.

At least a curl|bash that doesn't prompt me for my password is running as an unprivileged user! /shrug
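Looking is cheap, though; a sketch of peeking at those maintainer scripts before installing (package name and version are hypothetical):

    apt-get download some-package
    dpkg-deb -e some-package_1.0_amd64.deb ctrl/   # extract the control files
    cat ctrl/preinst ctrl/postinst 2>/dev/null     # the bits that would run as root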

sim7c00 2 days ago | parent | prev | next [-]

A lot of useful packages are not in package managers, or are only there in old versions that lack features you need, so it's quite common to need to get around that...

SkiFire13 a day ago | parent | prev | next [-]

Getting every piece of software into every distro is not feasible; it's an NxM problem. Sometimes this encourages the use of third-party repositories, which I would argue is even less safe because it requires root access.

Flatpak is a nice suggestion but unfortunately it doesn't seem to work nicely for CLIs.

> "Back in the day" we cloned the source code and compiled ourself instead of distributing binaries & install scripts.

Isn't that the same thing with the extra step of downloading a git repo?

papichulo2023 2 days ago | parent | prev | next [-]

Funny enough, clone-and-compile is easier now than ever before. You can ask an LLM to create a Docker container to compile any random program, and most of the time it will be okay.

hsbauauvhabzb 2 days ago | parent | prev [-]

R/O root means a binary will fail to install, but it won't stop my homedir being backdoored, in addition to the huge waste of time that attempting an R/O root would be.

bawolff 2 days ago | parent | prev | next [-]

Literally anything else.

Keep in mind that it's possible to detect when someone is doing curl | bash and only send the malicious code when curl is being piped, which makes it very hard to detect.

SoftTalker 2 days ago | parent [-]

curl | tee foo.sh

and then inspect foo.sh and then (maybe) cat foo.sh | bash

Does that avoid the issue?

broken-kebab 2 days ago | parent [-]

Yes, but will you do it really?

codedokode 2 days ago | parent | prev [-]

Software should run in a sandbox. Look at Android for example.
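On a conventional desktop distro you can approximate that today; a rough sketch, assuming firejail as the sandbox (the program name is hypothetical):

    firejail --private ./some-untrusted-program   # --private gives it a throwaway home directory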

troupo 2 days ago | parent | prev [-]

> My problem with it is that it encourages unsafe behavior.

Then why don't Linux distributions encourage safe behaviour? Why do you still need sudo permissions to install anything on most Linux systems?

> How many times will a novice user follow that pattern until some jerk on discord

I'm not a novice user and I will use this pattern because it's frankly easier and faster, especially when the current distro doesn't have some combination of things installed, or doesn't have certain packages, or...

keyringlight 2 days ago | parent | next [-]

I think a lot of this comes down to assumptions about the audience and something along the lines of "it's not a problem until it is". It's one aspect I wonder about with migrants from Windows, and all the assumptions or habits they bring with them. Microsoft has been trying to put various safety rails around users for the past 20 years, since they started taking security more seriously with XP, and that gets pushback every time they try to restrict or warn.

ChocolateGod 2 days ago | parent | prev | next [-]

> Why do you still need sudo permissions to install anything on most Linux systems?

You don't with Flatpak or rootless containers, that's partially why they're being pushed so much.

They don't rely on setuid for it either

johnisgood 2 days ago | parent [-]

Flatpak and AppImage.

Or download & compile & install to a PREFIX (e.g. ~/.local/pkg/), and use a symlink manager to install to e.g. ~/local (and set MANPATH accordingly, too). Make sure PATH contains ~/.local/bin, etc. It does not work with Electron apps though. I do alias foo="cd ... && ./foo".
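A minimal sketch of that flow, assuming GNU Stow as the symlink manager and a hypothetical package foo-1.2:

    ./configure --prefix="$HOME/.local/pkg/foo-1.2" && make && make install
    mkdir -p "$HOME/local"
    stow -d "$HOME/.local/pkg" -t "$HOME/local" foo-1.2   # symlink it into ~/local
    export PATH="$HOME/local/bin:$PATH"
    export MANPATH="$HOME/local/share/man:$MANPATH"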

aragilar a day ago | parent | prev | next [-]

Because you're making system-wide changes which affect more than just your user?

There are, and have been, distros that install per user, but at some level something needs to manage the hardware and interfaces to it.

troupo a day ago | parent [-]

> Because you're making system-wide changes which affect more than just your user?

Am I? How am I affecting other users by installing something for myself?

Even Windows has had "Install just for this user or all users?" for decades

mingus88 2 days ago | parent | prev | next [-]

I’m not a novice user anymore either, but I care about my security and privacy.

When I see a package from a repo, I have some level of trust. Same with a single binary from GitHub.

When I see a curl|bash I open it up and look at it. Who knows what the heck it is doing. It does not save me any time, and in fact it is a huge waste of time to wade through random shell scripts which follow a dozen different conventions, because shell is ugly.

Yes, you could argue an OS package runs scripts too, ones that are even harder to audit, but those are versioned and signed, repos have maintainers, and there are all kinds of other things that some random HTTP GET will never support.

You don’t care? Cool. Doesn’t mean it’s good or safe or even convenient for me.

troupo a day ago | parent [-]

Repos and maintainers etc. are just a long unauditable supply chain [1]. And everyone is encouraged to blindly trust this chain with sudo access.

It's worse than that. If your distro doesn't have some package, you're encouraged to just add PPA repos and blindly trust those.

Quite a few companies run their own repos as well, and adding their packages is again `sudo add repo; sudo install`

Yes, it's not as egregious as just `curl | bash`, but it's not as far removed from it as you think.

[1] E.g. https://en.wikipedia.org/wiki/XZ_Utils_backdoor

umanwizard 2 days ago | parent | prev [-]

> Why do you still need sudo permissions to install anything on most Linux systems

Not guix :)

One of the coolest things about it.

IgorPartola 2 days ago | parent | prev | next [-]

This exactly. You never know what it will do. Will it simply check that you have Python and virtualenv and install everything into a single directory? Or will it hijack your system by adding trusted remote software repositories? Will it create new users? Open network ports? Install an old version of Java it needs? Replace system binaries for “better” ones? Install Docker?

Operating systems already have standard ways of distributing software to end users. Use them! Sure, maybe it takes you a little extra time to do the one-off task of adding the ability to build Debian packages, RPMs, etc., but at least your software will coexist nicely with everything else. Or if your software is such a prima donna that it needs its own OS image, package it in a Docker container. But really, just stop trying to reinvent the wheel (literally).
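To make the "one-off task" concrete, here is a bare-bones sketch of wrapping a single binary in a .deb (package name, version, and paths are all hypothetical):

    mkdir -p foo_1.0-1_amd64/DEBIAN foo_1.0-1_amd64/usr/bin
    cp ./foo foo_1.0-1_amd64/usr/bin/
    cat > foo_1.0-1_amd64/DEBIAN/control <<'EOF'
    Package: foo
    Version: 1.0-1
    Architecture: amd64
    Maintainer: Example Maintainer <maintainer@example.com>
    Description: Example tool packaged the boring, standard way
    EOF
    dpkg-deb --build foo_1.0-1_amd64
    sudo apt install ./foo_1.0-1_amd64.deb

After that it shows up in dpkg -l, can be removed cleanly, and coexists with everything else.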

stouset 2 days ago | parent | prev | next [-]

Yes! What I really want from something like this is sandboxing the install process to give me a guaranteed uninstall process.

mjmas 2 days ago | parent | next [-]

tinycorelinux reinstalls its extensions into a tmpfs every boot which works nicely. (and you can have different lists of extensions that get loaded)

hsbauauvhabzb 2 days ago | parent | prev [-]

Why would you possibly want to remove my software?

ChocolateGod 2 days ago | parent [-]

This reminded me of how, if you wanted to remove something like cPanel back in the day, your only real option was to just reinstall the whole OS.

1vuio0pswjnm7 2 days ago | parent | prev | next [-]

Many times a day, both in scripts and interactively, I use a small program I refer to as "yy030" that filters URLs from stdin. It's a bit like "urlview" but uses less complicated regex and is faster. There is no third-party software I use that is distributed via "curl|bash", and in practice I do not use curl or bash; however, if I did, I might use yy030 to extract any URLs from install.sh, something like this:

    curl https://example.com/install.sh|yy030
or

    curl https://example.com/install.sh > install.sh
    yy030 < install.sh
Another filter, "yy073", turns a list of URLs into a simple web page. For example,

    curl https://example.com/install.sh|yy030|yy073 > 1.htm
I can then open 1.htm in an HTML reader and select any file for download or processing by any program according to any file associations I choose, somewhat like "urlview".

I do not use "fzf" or anything like that. yy030 and yy073 are small static binaries under 50k that compile in about 1 second.

I also have a tiny script that downloads a URL received on stdin. For example, to download the third URL from install.sh to 1.tgz

     yy030 < install.sh|sed -n 3p|ftp0 1.tgz
"ftp" means the client is tnftp

"0" means stdin

nikisweeting 2 days ago | parent | prev | next [-]

This is always the beef that I've had with it. Particularly the lack of automatic updates and enforced immutable monotonic public version history. It leads to each program implementing its own non-standard self-updating logic instead of just relying on the system package managers. https://docs.sweeting.me/s/against-curl-sh

shadowgovt 2 days ago | parent | prev [-]

Much of the reason `curl | bash` grew up in the Linux ecosystem is that the "single binary that just runs" approach isn't really feasible (1), because the various distros themselves don't adhere to enough of a standard to support it. Windows and macOS, being mono-vendor, have a sufficiently standardized configuration that install tooling which just layers a new application into your existing ecosystem is relatively straightforward: they're not worrying about what audio subsystem you installed, or what side of the systemd turf war your distro landed on, or which of three (four? five?) popular desktop environments you installed, or whether your `/dev` directory is fully populated. There's one answer for the equivalent of all those questions on Mac and Windows, so shoving some random binary in there Just Works.

Given the jungle that is the Linux ecosystem, that bash script is doing an awful lot of compatibility verification and alternatives selection to stand up the tool on your machine. And if what you mean is "I'd rather they hand me the binary blob and I just hook it up based on a manifest they also provided..." Most people do not want to do that level of configuration, not when there are two OS ecosystems out there that Just Work. They understandably want their Linux distro to Just Work too.

(1) feasible traditionally. Projects like snap and flatpak take a page from the success Docker has had and bundle the executable with its dependencies, so it no longer has to worry about what special snowflake your "home" distro is; it's carrying all the audio / system / whatever dependencies it relies upon with it. Mostly. And at the cost of having all these redundant tech stacks resident on disk and in memory, only consolidatable if two packages are children of the same parent image.

fouc 2 days ago | parent | next [-]

I first encountered `curl | bash` in the macOS world, specifically when installing the worst package manager ever, Homebrew, which first came out in 2009. Since then it's spread.

I call it the worst because it doesn't support installing specific versions of libraries, doesn't support downgrading, etc. It's basically hostile and forces you to constantly upgrade everything, which invariably leads to breaking a dependency and wasting time fixing that.

These days I mostly use devbox / nix at the global level and mise (asdf compatible) at the project level.

ryandrake 2 days ago | parent | next [-]

Ironic, because macOS's package management system is supposed to be the simplest of all! Applications are supposed to just live in /Applications or ~/Applications, and you're supposed to be able to cleanly uninstall them by just deleting their single directory. Not all 3rd party developers seem to have gotten that memo, and you frequently see crappy and unnecessary "installers" in the macOS world.

There may be good or bad reasons why Homebrew can't use the standard /Applications pattern, but did they have to go with "curl | bash"?

Wowfunhappy 2 days ago | parent | next [-]

The Applications folder system does work really well for GUI apps! It's not really made for command line apps.

For command line apps, the equivalent would probably be statically-compiled binaries you can just drop somewhere in your PATH, e.g. /usr/local/bin/. For programs that are actually built this way (which I would personally call "the correct way") this works great!
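A quick sketch of that "drop it on your PATH" flow, including a check that the binary really is static (the file name is hypothetical):

    file ./tool   # should report "statically linked"
    ldd ./tool    # should report "not a dynamic executable"
    sudo install -m 0755 ./tool /usr/local/bin/tool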

Nab443 2 days ago | parent [-]

I would not call statically built apps "the correct way". It offers benefits but also drawbacks. One of them is that you can't update statically linked libraries with security fixes without replacing the binary completely, which can be an issue if the context does not allow it (unsupported proprietary software, lost dependency code, ...). It can also lead to higher resource consumption, which can be an issue on resource-constrained systems.

int_19h a day ago | parent | next [-]

If the app is actively maintained, it will update the dependency to fix the security issue.

If the app is not actively maintained, unless trivial, it likely has unpatched vulnerabilities of its own anyway.

And on macOS, if the app is not actively maintained, it usually breaks after a couple major releases regardless of anything else, because Apple doesn't believe in backwards compatibility.

Wowfunhappy 2 days ago | parent | prev [-]

I know, I said that I would call it the correct way. :) I'm aware of the drawbacks, I just think they're clearly outweighed by the benefits.

If nothing else, consider that the limitations of a statically linked binary match those of a traditional Mac application bundle. While Mac apps are usually dynamically linked, they also include all of their dependencies within the app bundle. I suppose you could argue it's technically possible to open an app bundle and replace one of the dylibs, but this is clearly not an intended use case; if nothing else, you're going to break the code signature.

thewebguyd 2 days ago | parent | prev | next [-]

> Not all 3rd party developers seem to have gotten that memo

This frustrates me to no end on macOS. Not only do you see crappy installers like you said, but a ton of applications now aren't even self contained in ~/Applications like they should be.

Apps routinely shit all over ~/Library when they don't need to, and don't clean up after themselves, so just deleting the bundle, while it technically 'uninstalls' the app, still leaves stuff behind, and it can eat up disk space fast. It's the same crap that Windows installers do, where they'll gladly spread the app all over your file system and registry, but the uninstaller doesn't actually keep track of what went where, so it'll routinely miss stuff. At least Windows has a built-in disk cleanup tool that can recognize some of this for you; macOS will just happily let apps abuse your file system until you have to go digging.

Package managers on Linux solved this problem many, many years ago and yet we've all collectively decided to just jump on the curl | bash train and toss the solution to the curb because...reasons?

ryandrake 2 days ago | parent | next [-]

Yep, same problem on Windows. It's almost always a mistake to give 3rd party developers unrestricted access to your filesystem, because they don't care and will shit their files all over it.

I wish more applications were distributed by the Mac App Store, because I believe App Store distributed apps are more strongly sandboxed and may not allow developers to abuse your system like this.

ChocolateGod 2 days ago | parent [-]

Mac apps outside the app store can still be sandboxed, but they have to be signed.

shadowgovt 2 days ago | parent | prev [-]

"Reasons" is "Nobody wants to wait for the package maintainers to decide that their favorite new shiny toy is enough a priority to update it to a version recent enough to match the online documentation for the new shiny toy," mostly.

As I mentioned somewhere side-thread: Debian Unstable is only three minor versions behind the version of Rust that the Rust team is publishing as their public release, but Debian Stable is three years old. For some projects, that's dinosaur-times speed. If I want to run Debian Stable for everything except Rust, I'm curl-bashing it.

ryandrake 2 days ago | parent [-]

As a user, if you need to run recent versions of your tools, I'd argue Debian (at least Debian Stable) is not for you. Luckily we have many choices among Linux distributions!

int_19h a day ago | parent [-]

There's nothing wrong with Debian for running recent versions of your dev tools; you just shouldn't expect to get them from the official Debian repositories. But there are third-party repositories for things like e.g. latest Node versions. I would expect there to be something for Rust, as well, but apparently they are also packaging rustup now.

CharlesW 2 days ago | parent | prev [-]

> …did they have to go with "curl | bash"?

That's one of many options, documented at the first text link of the home page. https://docs.brew.sh/Installation

ryandrake 2 days ago | parent [-]

Wow, they even have a .pkg installer. Had no idea. Is this new?

CharlesW 2 days ago | parent [-]

Without going too far down the rabbit hole, it looks like the maintainers added it in 2023. In the process, I was reminded that the installer initially required Ruby! (/usr/bin/ruby -e "$(curl…)")

FYI, mas is the equivalent of a package manager for macOS apps (a.k.a. a CLI for App Store). https://github.com/mas-cli/mas

Other than brew, I use mise for everything I can. https://mise.jdx.dev/

tghccxs 2 days ago | parent | prev [-]

Why is homebrew the worst? Do you have a recommendation for something better? I default to homebrew out of inertia but would love to learn more.

xmodem 2 days ago | parent | next [-]

I've been using MacPorts since before homebrew existed and never switched away.

fouc 2 days ago | parent | prev | next [-]

Lately I've been using devbox (a nix wrapper) for my homebrew-like needs via "devbox global add <whatever>"; for project-specific setup I stick with mise (asdf-compatible).

I don't like homebrew because I've been burnt multiple times: it often auto-updates when you least want it to and breaks project dependencies.

And there's no way to downgrade to a specific version. Real package managers typically support versioning.

antihero 2 days ago | parent [-]

If you're depending on specific versions, don't use a general system package manager, use something like mise or asdf.

CharlesW 2 days ago | parent | prev | next [-]

MacPorts is a good alternative, but you'll find that Homebrew is absolutely not the worst. Personally, I find brew fast and reliable. Look at mise (`brew install mise`) for managing any developer dependencies. https://mise.jdx.dev/

ryao 2 days ago | parent | prev | next [-]

I am a fan of Gentoo Prefix. Others like pkgsrc.

fouc a day ago | parent [-]

I've heard of some people using pkgsrc as their package manager on macOS. First time I've heard of Gentoo Prefix. Neat!

chme 2 days ago | parent | prev | next [-]

Nix, possibly via Lix, if you can stomach Nix syntax.

fouc a day ago | parent [-]

devbox is a nice wrapper for Nix syntax. Thanks for the tip about Lix; it looks like I can use devbox+lix instead of the Determinate Nix installer.

snickerdoodle12 2 days ago | parent | prev [-]

apt and yum/dnf are pretty great

JoshTriplett 2 days ago | parent | prev [-]

Statically link a binary with musl, and it'll work on the vast majority of systems.
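A minimal sketch of that, assuming the musl toolchains are installed (the program/crate name "hello" is hypothetical):

    musl-gcc -static -o hello hello.c             # C, via the musl-gcc wrapper
    rustup target add x86_64-unknown-linux-musl   # Rust
    cargo build --release --target x86_64-unknown-linux-musl

The resulting binary has no runtime dependency on the host's libc.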

> they're not worrying about what audio subsystem you installed

Some software solves this by autodetecting an appropriate backend, but also, if you use ALSA, modern audio systems will intercept that automatically.

> what side of the systemd turf war your distro landed on

Most software shouldn't need to care, but to the extent it does, these days there's systemd and there's "too idiosyncratic to support and unlikely to be a customer". Every major distro picked the former.

> or which of three (four? five?) popular desktop environments you installed

Again, most software shouldn't care. And `curl|bash` doesn't make this any easier.

> or whether your `/dev` directory is fully-populated

You can generally assume the devices you need exist, unless you're loading custom modules, in which case it's the job of your modules to provide the requisite metadata so that this works automatically.