dusted 3 days ago

I recently used it to boot a ~1996 Compaq Presario from CD-ROM to image the hard drive to a USB stick before wiping it for my retro-computer fun :)

It's kind of sad to hear "adult" people claim in all seriousness that it's reasonable for a kernel alone to use more memory than the minimum requirement for running Windows 95, an operating system with a kernel, drivers, a graphical user interface, and even a few graphical user-space applications.

slim 3 days ago | parent | next [-]

I got this insight from a previous thread: you can run Linux with a GUI on the same specs as Win95 just fine if your display resolution is 640x480. The framebuffer size is the issue.

LeFantome 3 days ago | parent [-]

That, and the fact that everything is 64-bit now. The Linux kernel is certainly much bigger, though, and probably has many more drivers loaded.

It is not just one factor, but the size of a single bitmap of the screen is certainly an issue.
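
A back-of-the-envelope sketch (my own illustrative numbers, not anything measured) of how much RAM a single screen bitmap costs at different resolutions and colour depths:

    # Bytes needed for one uncompressed screen bitmap.
    # Resolutions and colour depths below are illustrative assumptions.
    def framebuffer_bytes(width, height, bits_per_pixel):
        return width * height * bits_per_pixel // 8

    # Win95-era display: 640x480 at 8-bit colour
    print(framebuffer_bytes(640, 480, 8))       # 307200 bytes (~0.3 MB)

    # Modern display: 1920x1080 at 32-bit colour
    print(framebuffer_bytes(1920, 1080, 32))    # 8294400 bytes (~8 MB)

One modern framebuffer alone is about twice Win95's 4MB minimum, before you even double-buffer it.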

zoeysmithe 3 days ago | parent | prev [-]

I mean, why is that a problem? Win95 engineering reflects the hardware of its time, the same way today's software engineering reflects the hardware of our time. There's no ideal here, no "this is correct"; it's all constantly changing.

This is like car guys today pining for the simpler carburetor age, or the car guys before them pining for the simplicity of the Model T. It's silly.

There will never be a scenario where you need all this lightweight stuff outside of extreme edge cases, and there's SO MUCH lightweight stuff that it's not even a worry.

Also, it's funny you should mention Win95, because I suspect that reflects your age, but a lot of people here are from the DOS / first Mac / Windows 2.0 age, and for that crowd Win95 was the horrible resource pig and complexity nightmare. Tech press and nerd culture back then were incredibly anti-95 for 'dumbing it all down' and 'being slow', but now it's seen as the gold standard of 'proper computing'. So it's all relative.

The way I see hardware and tech is that we are forced to ride a train. It makes stops, but it cannot stay stopped; it will always go on to the next stop. Wanting to stay at a certain stop doesn't make sense and is in fact counter-productive. I won't go into this, but Linux on the desktop could have been a bigger contender if the Linux crowd and companies had been willing to break a lot of things and 'start over' to be more competitive with Mac or Windows, which at the time did break a lot of things and did 'start over' to a certain degree.

The various implementations of the Linux desktop always came off as clunky and tied to Unix-culture conventions that don't really fit the desktop model, which wasn't appealing for a lot of people, and a lot of that was based on nostalgia and a sort of idealizing of old interfaces and concepts. I love KDE, but it's definitely not remotely as appealing as the Windows 11 or macOS GUI in looks or ease of use.

In other words, when nostalgia isn't pushed back on, we get worse products. I see so much unquestioned nostalgia in tech spaces; I think it's something that hurts open source projects and even many commercial ones.

vel0city 3 days ago | parent | next [-]

I agree with this take. Win95's requirements of 4MB minimum / 8MB recommended memory and a 20MHz processor are seen as the acceptable place to draw the line, but there were graphical desktops on the market before that running on systems with 128K of RAM and 8MHz processors. Why aren't we considering Win95's requirements ridiculously bloated?

zoeysmithe 2 days ago | parent [-]

Yep, at the time the Amiga crowd was laughing at the bloat. But now it's suddenly the gold standard of efficiency? I think a lot of people like to be argumentative because they refuse to see that they are engaging in mere nostalgia rather than anything factual or logical.

muyuu 2 days ago | parent | prev | next [-]

If you can compile the kernel yourself, though, there is no reason W95 should be any smaller than your specifically compiled kernel - in fact, it should be much bigger.

However, this is of course easier said than done.

lproven 2 days ago | parent | prev [-]

> There will never be a scenario where you need all this lightweight stuff

I think there are many.

Some examples:

* The fastest code is the code you don't run.

Smaller = faster, and we all want faster. Moore's law is over, Dennard scaling isn't affordable any more, and smaller feature sizes are getting absurdly difficult, and therefore expensive, to fab. So if we want our computers to keep getting faster, as we've got used to over the last 40-50 years, the only way to keep delivering that will be to start ruthlessly optimising, shrinking, and finding more efficient ways to implement what we've got used to.

Smaller systems are better for performance.

* The smaller the code, the less there is to go wrong.

Smaller doesn't just mean faster; it should mean simpler and cleaner too. Less to go wrong. Easier to debug. Wrappers and VMs and bytecodes and runtimes are bad: they make life easier, but they are less efficient and make issues harder to troubleshoot. The KISS principle is baked into the Unix philosophy.

So that's performance and troubleshooting. We aren't done.

* The less you run, the smaller the attack surface.

Smaller code and less code mean fewer APIs, fewer interfaces, and fewer points of failure. Look at djb's decades-long policy of offering rewards to people who find holes in qmail or djbdns. Look at OpenBSD. We all need better, more secure code. Smaller, simpler systems built from fewer layers mean more security, less attack surface, and less to audit.

Higher performance, easier troubleshooting, and better security. That's three reasons.

Practical examples...

The Atom editor spawned an entire class of app: Electron apps, JavaScript on Node, bundled with Chromium. Slack, Discord, VSCode: these are apps used by tens to hundreds of millions of people now. Look at how vast they are. Balena Etcher is, what, a nearly 100 MB download to write an image to USB? Native apps like Rufus do it in a few megabytes. Smaller ones like USBimager do it in hundreds of kilobytes. A dd command does it in under 100 bytes.
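
To underline how small the core task is, here is a minimal sketch (my own, with an illustrative filename and placeholder device path, and no safety checks) of what an image writer boils down to:

    import sys

    # Copy an image file to a target device in fixed-size blocks.
    # Usage (illustrative): python write_image.py image.img /dev/sdX
    # WARNING: writing to the wrong device destroys its contents.
    BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB, an arbitrary choice

    def write_image(image_path, device_path):
        with open(image_path, "rb") as src, open(device_path, "wb") as dst:
            while True:
                block = src.read(BLOCK_SIZE)
                if not block:
                    break
                dst.write(block)
            dst.flush()

    if __name__ == "__main__":
        write_image(sys.argv[1], sys.argv[2])

Everything beyond that in the 100 MB downloads is UI, bundled browser, and runtime.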

Now some of the people behind Atom wrote Zed.

It's 10% of the size and 10x the speed, in part because it's a native Rust app.

The COSMIC desktop looks like GNOME, works like GNOME Shell, but it's smaller and faster and more customisable because it's native Rust code.

GNOME Shell is Javascript running on an embedded copy of Mozilla's Javascript runtime.

Just like the dotcoms wanted to disintermediate business, removing middlemen and distributors for faster sales, we could use disintermediation in our software: fewer runtimes, and better, smarter compiled languages, so we can trap more errors and have faster and safer compiled native code.

Smaller, simpler, cleaner, fewer layers, fewer abstractions: these are all good things, and all desirable.

Dennis Ritchie and Ken Thompson knew this. That's why Research Unix evolved into Plan 9, which puts far more through the filesystem in order to remove whole types of API. Everything is in a container all the time; the filesystem abstracts the network, the GUI, and more. It has under 10% of the syscalls of Linux, the kernel is 5MB of source, and yet much of what Kubernetes does is already in there.

Then they went further: they replaced C too, made a simpler, safer language, embedded its runtime right into the kernel, made binaries CPU-independent, and turned the entire network-aware OS into a runtime to compete with the JVM, so it could run as a browser plugin as well as a bare-metal OS. Now we have ubiquitous virtualisation, so lean into it: separate the domains. If your user-facing OS only runs in a VM, it doesn't need a filesystem or hardware drivers, because it won't see hardware, only virtualised facilities, so rip all that out. Your container host doesn't need a console or disk management.

This is what we should be doing. This is what we need to do. Hack away at the code complexity. Don't add functionality, remove it. Simplify it. Enforce standards by putting them in the kernel and removing dozens of overlapping implementations. Make codebases that are smaller and readable by humans.

Leave the vast bloated stuff to commercial companies and proprietary software where nobody gets to read it except LLM bots anyway.