ferguess_k 2 hours ago
I think back then, due to the scarcity of RAM and HDD space, developers, especially elite developers working for Apple/Microsoft/Borland/whatever, really went the last mile to squeeze out as much performance as they could -- or at least they spent far more time on it than modern-day developers do, even for the same applications (e.g. some native Windows programs on Win 2000 vs. the rewritten programs on Win 11). Nowadays businesses simply don't care. They have already built the feudal-ish bastion they dreamed about, and there is no "business value" in spending much time on it, unless of course it is something performance-critical like AI or supercomputing.

On the other hand, hardware today is 100X more complicated than in the NeXTStep/Intel i486 days. Greybeards who started in the 70s/80s could gradually adapt to the complexity, while newcomers simply have to swim or die -- there is no "training", because any training on a toy computer or a toy OS is useless compared to the massive architecture and complexity we face today. I don't know. I wish the evolution of hardware were slower, but it was always going to get to this point anyway.

I recently completed the MIT xv6 labs and thought I was good enough to hack on the kernel a bit, so I took a Linux device driver class, and OMG the complexity is unfathomable -- even the Makefile and Kbuild stuff is way, way beyond my understanding (a minimal module-plus-Kbuild sketch is below, just to show how small the toy case actually is). But hey, if I had started from Linux 0.95, or maybe even Linux 1.0, I'd have had much less trouble drilling into a subsystem and gradually adapting. That's why I think I need to give myself a year or two of training: scroll back to maybe Linux 0.95, focus on just one simpler device driver (e.g. the keyboard), and read EVERY step of its evolution. There is no other way for commoners like us.
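To be fair, the irreducible core of what Kbuild asks of a first out-of-tree module is tiny; the wall is the in-tree machinery around it. A minimal sketch, assuming a hypothetical hello.c with a two-line Makefile next to it (names made up for illustration):

    /* hello.c -- hypothetical minimal out-of-tree module, showing how
     * little Kbuild needs for the toy case. The Makefile beside it:
     *
     *   obj-m := hello.o
     *   all:
     *           make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
     */
    #include <linux/init.h>
    #include <linux/module.h>

    static int __init hello_init(void)
    {
            pr_info("hello: module loaded\n");   /* printed on insmod */
            return 0;
    }

    static void __exit hello_exit(void)
    {
            pr_info("hello: module unloaded\n"); /* printed on rmmod */
    }

    module_init(hello_init);
    module_exit(hello_exit);

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Minimal out-of-tree Kbuild example");

Build with make, load with insmod hello.ko, and the single line obj-m := hello.o is the entire build description for the module itself. Everything past that -- Kconfig entries, subsystem frameworks, the in-tree build plumbing -- is where the real complexity lives, which is exactly the point above.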