|
| ▲ | icehawk 3 days ago | parent | next [-] |
| The replies talking about portability are wild: my irssi instance started on a Pentium 90 and is now running on an AMD EPYC, and the two commands the move actually took were: 1) scp 2) dpkg --set-selections |
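(For anyone curious, a rough sketch of that two-step move, assuming both boxes are Debian-based and the hostname is made up:)

    # old box: dump the package selections and copy them plus the irssi config
    dpkg --get-selections > selections.txt
    scp -r selections.txt ~/.irssi newbox:

    # new box: replay the selections and let apt install them
    sudo dpkg --set-selections < selections.txt
    sudo apt-get dselect-upgrade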
|
| ▲ | firefax 3 days ago | parent | prev | next [-] |
| Reminds me of the "But I Don't Want To Cure Cancer. I Want To Turn People Into Dinosaurs" meme [1]. They don't want to apt install, they want to use docker :-) [1] https://knowyourmeme.com/memes/but-i-dont-want-to-cure-cance... |
| |
|
| ▲ | INTPenis 3 days ago | parent | prev | next [-] |
| It's not absurd. First of all, I've been using immutable Linux for 3 years, so running irssi in a container makes the most sense; of course I'd probably just run it inside a distrobox container instead. I've also been using a shell server for irssi for many years, so it's not that relevant to me personally. But secondly, containerization, despite its vulnerabilities over the years, does add a layer of security to applications. And we must not forget that IRC clients have been exploited in the past. Remember the old adage: never irc as root. |
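(If anyone wants to try the distrobox route, a rough sketch; the Debian image is just an example:)

    # create a container that shares $HOME with the host, then run irssi in it
    distrobox create --name irc --image debian:stable
    distrobox enter irc      # drops you into a shell inside the container
    sudo apt-get update && sudo apt-get install -y irssi
    irssi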
|
| ▲ | nodja 3 days ago | parent | prev | next [-] |
| For my homelab: portable state. I don't use this image specifically, but I use many others. I put docker-compose files in ~/configdata/_docker/ and those compose files always mount volumes inside the ~/configdata/ directory. So, say irssi has a config directory to mount; I'd mount it to ~/configdata/irssi/config/. Then I can just run a daily backup of ~/configdata/ using duplicati or whatever differential backup tool I prefer, and be able to restore application state, easily move it to another server, etc. |
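(As a concrete sketch of that layout; the image name and the path inside the container are examples and depend on the image you use:)

    # ~/configdata/_docker/irssi/docker-compose.yml
    services:
      irssi:
        image: irssi
        stdin_open: true
        tty: true
        volumes:
          - ~/configdata/irssi/config:/home/user/.irssi

Then the daily duplicati (or restic/borg) job only has to cover ~/configdata/.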
| |
| ▲ | crtasm 3 days ago | parent | next [-] | | For software designed to run under your user account, like irssi, it's pretty much the same: look in ~/.config and ~/.local/share | |
| ▲ | globular-toast 3 days ago | parent | prev [-] | | Sure, this makes sense for a server, but irssi is a client. This is just a program running on your computer. You don't need a "homelab" or any nonsense like that. |
|
|
| ▲ | indigodaddy 3 days ago | parent | prev | next [-] |
| And then after that, turn it back into a binary that starts it up as a Firecracker microVM! Lol, I mean it's kinda crazy, yeah, but the isolation is pretty good/cool. https://bottlefire.dev/ |
|
| ▲ | mingus88 3 days ago | parent | prev | next [-] |
| I run a ton of apps like this. Look at it the other way: why muck up my OS with a bunch of tiny apps? Who knows what version I’ll pull from my distro's repo today. Chances are good it’s outdated, with weird patches. The docker image is built by the devs. All the proper dependencies are baked into the image. It’s going to run exactly as intended every time, no surprises. And I can pick up the Dockerfile and my configs and run it exactly the same on any OS. |
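(For irssi concretely, a minimal sketch assuming the upstream image on Docker Hub; adjust paths to taste:)

    # interactive irssi from the image, config kept on the host
    docker run -it --rm -e TERM \
      -v "$HOME/.irssi:/home/user/.irssi" \
      irssi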
| |
| ▲ | malux85 3 days ago | parent | next [-] | | I love watching this in tech. The pendulum swings; this is static linking in another dress. Soon everyone adopts this, and then someone complains “why are there 500 libc libraries on my machine” or “there was a critical bug and I had to update 388 containers - and some maintainers didn’t update and it’s a giant mess!” Then someone will invent dynamic underlying container sharing (tm) and the pendulum will swing the other way for a bit, and in 2032 one dev will paste your comment in a slightly different form: why muck up my mindvisor with a bunch of tiny apps? Isolated runtimes are built by the devs. And so on, back and forth forever | | |
| ▲ | sunrunner 3 days ago | parent | next [-] | | > And so on, back and forth forever My god, we've discovered a genuine perpetual motion machine. > this is static linking in another dress Although static linking usually seems to result in small binaries that just run on the target machine, while this needs all the Docker machinery (and the image sizes can get horrendous) | | |
| ▲ | 7bit 3 days ago | parent [-] | | It's worse, it's reverse perpetual motion. It takes an infinite amount of energy to achieve something you could achieve with a tiny finite amount! |
| |
| ▲ | vrighter 3 days ago | parent | prev | next [-] | | Noooo!!! Packaging all your dependencies by static linking is bad! Packaging all your dependencies as shared libraries into one tar file, separately for each app, is the way to go, along with needing another runtime just to be able to run your program (not for it to actually function... just to run it). The final artefact is still only one file, but without the benefits of link-time optimization! | |
| ▲ | zoobab 3 days ago | parent | prev [-] | | We need a static Linux distro, because I prefer to have portable apps that work on all Linux distros. | | |
| |
| ▲ | stonogo 3 days ago | parent | prev [-] | | > The docker image is built by the devs.
Not in this case, it isn't. All of the things you describe are just "package manager, but outside distro control," which is fine I guess but not really a meaningful answer. | | |
| ▲ | lmm 3 days ago | parent [-] | | I think the real answer is that distro packaging sucks; it tends to involve arcane distro-specific tools and introduce as many or more bugs than it fixes (with the added problem of playing hot potato with the bug reports), on top of delaying updates. Really, what do you gain by using distro packages? (I know the answer is supposedly that you get a set of well-tested versions of your applications that play nice with each other, but that's rarely been delivered in practice) | | |
| ▲ | stonogo 2 days ago | parent [-] | | I don't disagree with that assessment, but I'm not sure docker's any different. It's just a different arcane set of tools that introduces as many failure points as it fixes (with the added problem of supply chain attacks) on top of having to use all the distro stuff anyway. So, while I use the hell out of docker, I don't really regard it as an improvement on (or really an alternative to) distro packages. I think it's a better tool for solving complex deployments, but e.g. irssi isn't really in that camp. | | |
| ▲ | lmm 2 days ago | parent [-] | | I think that like it or not, Docker has managed to win mindshare in a way that no single distro's package management ever did. Application developers could never get away with publishing only RPMs or only debs (and whether the same deb would work on Debian and Ubuntu was always a risky question), but everyone runs Docker; even the alternatives like Podman or Moby feel the need to be compatible with existing Docker packages. | | |
| ▲ | stonogo 2 days ago | parent [-] | | Yeah, that's probably true among developers. Among other classes of users, providing a deb or an rpm (or some combination of package manager formats) has been pretty normal. Enterprise software like Slack has been doing this for ages, Microsoft distributed Teams that way for years, the CUDA stack is rpm/deb, etc. Outside of the dev world, docker is basically a signal that your devops people should be on the project. The most common question used to be "why no installer" but nowadays users just use the "app store" (Gnome's Software or KDE's Discover) to Get Stuff, and wouldn't be able to tell you if asked whether what they just installed was a native package or a Flatpak. I do agree that Docker is ubiquitous in the development world, but I think the fraction of people even aware enough of packaging to have an opinion is vanishingly small. |
|
|
|
|
|
|
| ▲ | globular-toast 3 days ago | parent | prev | next [-] |
| I can see a use for running it in a Kubernetes cluster or something. Not really sure why you would, but I'm sure someone, somewhere has found it useful before. What I'm confused about is why it's notable enough to be on the front page of HN. If you needed this and you use K8s, you could trivially write this Dockerfile yourself. |
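(It really is about this much Dockerfile; a sketch, not necessarily what the actual image does:)

    FROM alpine:3.20
    RUN apk add --no-cache irssi && adduser -D irc
    USER irc
    WORKDIR /home/irc
    ENTRYPOINT ["irssi"]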
|
| ▲ | squigz 3 days ago | parent | prev | next [-] |
| Better yet, apt install weechat! |
|
| ▲ | keyle 3 days ago | parent | prev | next [-] |
| I ran irssi for years. I agree... Maybe being paranoid about security? |
| |
| ▲ | jagrsw 3 days ago | parent | next [-] | | I don't think it's being paranoid. It's a remotely controlled parser. Fuzzing has turned up some bugs in irssi and weechat over the years: things like malformed color codes, DCC filenames, or even basic protocol messages led to crashes. I personally use weechat inside nsjail on a Raspberry Pi (an isolated RPi would be enough here, but it's just for fun): https://github.com/google/nsjail/tree/master/configs | | |
| ▲ | vrighter 3 days ago | parent [-] | | so the application crashes inside the container, and the container is restarted, vs the application crashes outside the container and it is restarted. What's the difference? | | |
| ▲ | keyle 3 days ago | parent [-] | | Well, the difference is that someone could PoTenTiAlLY spawn a shell if they get their way. So between server access as a user and container access (if it has a shell), it does make a difference. A good book on this was "Hacking: The Art of Exploitation". My argument, though, is that irssi is old enough that automatic file receiving (DCC) is, I think, off by default, and it has sensible defaults and a long history of being reliable(?) |
|
| |
| ▲ | Smar 3 days ago | parent | prev [-] | | Containers are not the best option for security. VMs and/or MAC (mandatory access control) are better. | | |
|
|
| ▲ | alpb 3 days ago | parent | prev | next [-] |
| Maintainers had a project where they ran everything in containers. That project helped Docker itself and the ecosystem by getting some interesting software containerized. |
|
| ▲ | vorpalhex 3 days ago | parent | prev | next [-] |
| Poor man's abstraction. Docker swarm makes a cheap node pool from random hardware. Compose makes all your apps and config live in git. You don't _need_ docker, but if you are already set up for it then it's a boon. Adding an app that's highly available across a fleet of hardware with Ceph-backed storage is a one-liner for me. |
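(The one-liner in question, roughly; the stack name and compose file path are made up:)

    docker stack deploy -c irssi/docker-compose.yml irssi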
| |
| ▲ | AdieuToLogic 3 days ago | parent [-] | | > Adding an app that's highly available across a fleet of hardware with Ceph-backed storage is a one-liner for me. But irssi is a chat client:
    About
    Irssi is a modular text mode chat client. It comes with IRC support built in. [0]
0 - https://irssi.org/ | |
| ▲ | zepolen 3 days ago | parent | next [-] | | Don't be ridiculous, IRC is not a protocol that remembers, you need High Availability otherwise if the IRC client goes down you've lost important messages from bloodninja that you can never find again. | |
| ▲ | vorpalhex 3 days ago | parent | prev [-] | | And I want my irc client to be around and keep my history. If it is a tool I use every day then it lives in my git repo. |
|
|
|
| ▲ | hn-ifs 3 days ago | parent | prev | next [-] |
| Came here to ask why you'd want to run an app like this in docker. Genuinely don't get it. Sure, the app doesn't touch the host system, so there's isolation there, but the extra overhead doesn't seem justified to me. I'm not a docker expert, so correct me if I'm wrong, but isn't this running a stripped-down version of Linux just to run the app? Lighter than a full VM, but... yeah, I don't get it. |
| |
| ▲ | keyle 3 days ago | parent [-] | | Docker on Linux is a pretty thin layer of abstraction, but still, I prefer to run stuff on bare metal whenever possible; though these days even bare metal isn't all that common. |
|
|
| ▲ | neilv 3 days ago | parent | prev | next [-] |
| I can guess a reason: persistence of your IRC server connection(s), across device sessions, and maybe switchable between devices, without using an IRC bouncer. So this would be a turnkey way to run this somewhere centralized and persistent, and then you connect to it however you connect to that Docker container (e.g., SSH, remote desktop of some kind).
Of course, a non-Docker way to achieve simple persistence would be to just run a character-terminal IRC client in an SSH-able shell account (or VPS or AWS EC2), inside a `screen` or `tmux` session that can be detached and reattached when SSH-ing in from whatever devices.
(Persistence of your IRC server connections means things like you can see what you missed in scrollback, you aren't being noisy in your channels with join and part messages, you preserve your channel operator status and other channel modes without relying on bots, and you aren't leaking so much info about your movements in real time to random crazy people who hang out in Internet chat rooms.)
(Also, early on, if your leet channels attracted trolls, remaining connected meant that whatever automated countermeasures your client had could help defend the channel. Also, the more people who had channel operator status, the harder it would be for an attacker who, say, used a netsplit to hack ops, to de-op them all before a remaining op's scripts detected the mass-deop attack and took out the attacker. Also, your persistence bouncer or shell account obscured your real IP address, so if an attacker targeted your client's IP addr but not your home addr, such as with a protocol or flood attack, you could more likely get back on quickly. Trolls were often annoying, but it was also cyberpunk satisfying when your channel made short work of them.) |
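(The screen/tmux version, for reference; "shellbox" is whatever always-on host you have:)

    ssh shellbox
    tmux new -s irc irssi     # irssi keeps running after you detach (Ctrl-b d)
    # later, from any other device:
    ssh shellbox
    tmux attach -t irc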
| |
| ▲ | r4ge 3 days ago | parent [-] | | Back in the day I would keep an old PC running Linux in the closet just for my IRC and shell needs. Having a vanity domain name was a must if you were lucky enough to have a static IP. I remember Undernet adding support to hide your IP once you created an account. |
|
|
| ▲ | CGamesPlay 3 days ago | parent | prev [-] |
| My production k8s cluster doesn't have apt. Now I can deploy this! |
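(Tongue in cheek, but the one-off pod version is roughly this, assuming the image is published as "irssi":)

    kubectl run irssi -it --rm --restart=Never --image=irssi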