hagbard_c | 3 days ago
I run OpenWRT on a 'big' device, in this case a container on a Proxmox-managed DL380 G7. It works fine in this context; performance is good enough to saturate the gigabit fibre link without breaking a sweat. Installing OpenWRT on such a device comes down to downloading openwrt-${version}-x86-64-rootfs.tar.gz and unpacking it in the target location. Boot the container or VM (or old PC or whatever) and follow the normal OpenWRT configuration procedure. Upgrading such an installation comes down to making a configuration backup in OpenWRT, unpacking the new distribution and restoring the configuration backup to the new install. Given the low resource requirements of such an installation it makes sense to first clone the working container or VM and perform the upgrade on one of the instances, so you always have a working instance at hand.
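
Roughly, the flow looks like this. The release version, container ID, bridge name and hostnames below are placeholders, not taken from my actual setup; adjust to your own environment:

    # fetch the x86-64 rootfs and create a Proxmox container from it
    wget https://downloads.openwrt.org/releases/23.05.5/targets/x86/64/openwrt-23.05.5-x86-64-rootfs.tar.gz
    pct create 201 local:vztmpl/openwrt-23.05.5-x86-64-rootfs.tar.gz \
        --hostname openwrt --ostype unmanaged --unprivileged 0 \
        --net0 name=eth0,bridge=vmbr0

    # on upgrade: back up the config inside the running instance ...
    ssh root@openwrt 'sysupgrade -b /tmp/backup.tar.gz'
    scp root@openwrt:/tmp/backup.tar.gz .

    # ... clone the container, unpack the new rootfs in the clone,
    # then restore the config there and reboot
    scp backup.tar.gz root@openwrt-new:/tmp/
    ssh root@openwrt-new 'sysupgrade -r /tmp/backup.tar.gz && reboot'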
zokier | 3 days ago | parent
Sure, OpenWRT works; I too have run it in an x86 VM at one point. That said, there is a lot that could be improved. My biggest gripe is the odd filesystem layout, with overlays and state in /tmp and whatnot. I can see that being needed on tiny devices, but on bigger ones can I please just have regular ext4/xfs GPT partitions? Another thing would be replacing the tiny versions of software with the regular ones, e.g. busybox -> GNU coreutils or dropbear -> OpenSSH. Systemd could at least be considered as init. All of this makes sense when you consider OpenWRT's origins, but on a 'big' system I'd much rather have it be closer to 'normal' Linux.
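
For what it's worth, some of those swaps can already be done with opkg today; a rough sketch, assuming the package names in the standard feeds haven't changed for your release:

    # replace dropbear with OpenSSH and add GNU coreutils
    opkg update
    opkg install openssh-server openssh-sftp-server coreutils
    /etc/init.d/sshd enable && /etc/init.d/sshd start
    # only drop dropbear once you've confirmed sshd logins work
    opkg remove dropbear

The filesystem layout and the init system are the parts you can't really swap out this way, which is where a "big device" flavour of the distro would have to differ.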