jeroenhd 16 hours ago

One thing that's nearly impossible to replicate on modern systems is the extremely tight feedback loop these TUIs had. Keyboard latency was near non-existent while basic calculators these days will happily take a hundred milliseconds to process a key press.

We don't need to go back to the 66MHz era, but it's embarrassing that programs running on a dozen CPU cores, each executing at several gigahertz, feel less responsive than software written half a century ago. Sure, compiling half a gigabyte of source code now finishes before the end of the year, but I rarely compile more than a hundred or so new lines at a time, and kickstarting the compiler takes much longer than the actual compilation.

A terminal is no more than a rendering environment. With some workarounds (a custom renderer and input loop, most likely), you can probably compile Zed to run on FreeDOS in the same environment you use to run Turbo Pascal. I doubt you'll get the same responsiveness, though.

badsectoracula 16 hours ago | parent | next [-]

AFAIK Borland C++ (even on Windows) read the source straight from whatever editor buffers you already had open in the IDE, and since the compiler was part of the IDE, it cached various state in memory between builds, which is why it was so fast (for a C/C++ compiler anyway - Delphi was much faster) even on slow hardware. Meanwhile Visual C++ (and modern IDEs) had you autosave the file to disk so the compiler, which was launched as a separate program (often once per file), could read it (and rebuild its internal state from scratch for every single file).

dapperdrake 16 hours ago | parent [-]

From what I remember researching, it really is this.

Today, Python, Rlang, PHP, Java, and Lisp bring these features. But not C. Oh the irony.

somat 15 hours ago | parent [-]

C does as well; that is half the point of make. When building a large C project, it first creates a bunch of object files from the source files, then links them into an executable. make keeps track of which source files have changed and rebuilds only the corresponding object files. The first build is slow, but subsequent builds are much faster, since only the changed files need to be recompiled before relinking.
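
Roughly, for a made-up two-file project (names and commands here are just an illustration of the model, not anyone's real build):

    /* util.h */
    #ifndef UTIL_H
    #define UTIL_H
    int add(int a, int b);
    #endif

    /* util.c -- compiled on its own (cc -c util.c); its object file
       is rebuilt only when util.c or util.h changes */
    #include "util.h"
    int add(int a, int b) { return a + b; }

    /* main.c -- compiled on its own (cc -c main.c), then the objects
       are relinked: cc -o app main.o util.o */
    #include <stdio.h>
    #include "util.h"
    int main(void) { printf("%d\n", add(2, 3)); return 0; }

Touch util.c and only util.o gets rebuilt before the relink; main.o is left alone.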

At least that's the theory; in reality make has a lot of warts, and writing a good, solid makefile is an art. Don't even get me started on the horrors of automake - perhaps I just need to use it in one of my own projects, but as someone who primarily ports other people's code, I hate it with a passion. It is so much easier when a project just sticks with a hand-crafted makefile.

For completeness: The other half of make is to implement the rest of the build process.

uecker 13 hours ago | parent | next [-]

I would say autoconf/automake is not really useful anymore; somebody should probably just establish a new, simplified, standardized setup and Makefile for C projects.

And yes, efficient separate and incremental compilation is a major advantage of C. I do not understand why people criticize this; it works beautifully. I also think it is good that the language and the build system are separate.

badsectoracula 11 hours ago | parent | prev [-]

I think you are probably referring to something other than what I meant in my post above.

Borland C++ had the compiler as part of the IDE (there was also a separate command-line compiler, but the compiler was built into the IDE itself as well). This allowed the IDE to avoid spawning a separate process for each file and even avoid hitting the disk - the compiler (already in RAM as part of the IDE's process) would read the source code from the editor's buffer (instead of a file, so again no disk access) and would also keep a bunch of other state in memory between builds instead of re-reading it.

This approach lets the compiler reuse data not only between builds but also between files of the same build. Meanwhile, make is just a program launcher: the program - the compiler - needs to run for each file, loading and parsing everything it needs for every single source file it compiles, thus rebuilding and destroying its entire universe for each file separately. There is no reuse here - even when you use precompiled headers to speed some things up (which Borland C++ also supported, and which sped things up even more on an already fast system), the compiler still needs to build and destroy that universe.

It is not a coincidence that one of the ways to speed up compilation of large codebases nowadays is unity builds[0], which essentially combine multiple C/C++ files (the files need to be written with this in mind, to avoid one file "polluting" the contents of another) so that multiple compilation units can reuse/share compilation state (such as common header files) within a single compiler instance. It is, for example, a core feature of FASTBuild[1], which combines distributed builds, caching and unity builds.
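
As a rough sketch (file names made up), a unity build is just one translation unit that pulls the others in, so the headers get parsed once and a single compiler instance sees everything:

    /* unity.c -- the only file handed to the compiler: cc -c unity.c
       The included .c files must avoid clashing file-local names,
       since they now share one translation unit. */
    #include "renderer.c"
    #include "input.c"
    #include "editor.c"

The trade-off is that touching any one of the included files forces the whole unity unit to rebuild.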

Of course Borland C++'s approach wasn't perfect, as it had to work within limited memory too (so it still had to hit the disk at some point - note, though, that the Pascal compilers could do everything in memory, including the final linking; even the resulting program could stay in memory). Bugs in the compiler could also linger: I remember having to restart Borland C++ Builder every few hours because the compiler had gotten confused about something and cached it in memory between builds. Also, Free Pascal's text mode IDE (shown in the article) has the Free Pascal compiler as part of the IDE itself, but in the last release (I think) there is a memory leak and the IDE's memory use keeps creeping up a little with every build - something that wouldn't matter with a separate compiler process (and most people use FPC as a separate program via Lazarus these days, which is most likely why nobody noticed the leak).

[0] https://en.wikipedia.org/wiki/Unity_build

[1] https://fastbuild.org/

constantcrying 16 hours ago | parent | prev [-]

>One thing that's nearly impossible to replicate on modern systems is the extremely tight feedback loop these TUIs

Why? Yes, VSCode is slow. But Zed and many neovim GUIs are extremely responsive. Why would achieving that be impossible, or even that hard? You "just" need software that is fast enough to render the correct output on the frame after the input. In an age where gaming is already extremely latency-sensitive, why would a text editor with similar latency performance be so hard?

Do you have any actual evidence that zed or neovide are suffering from latency problems? And why would putting a terminal in the middle help in any way in reducing that latency?

jeroenhd 14 hours ago | parent [-]

I'm not sure if you know what "terminal" means. I'm not talking about terminal emulators (the "terminal" program on macOS/Linux/Android/etc.) but actual, real terminals. The "terminal" is a text mode rendering mechanism built into computers of the terminal era. The closest modern operating systems come to it is the terminal-like environment you can get on Linux or the *BSDs by disabling the GUI, but even those merely emulate text mode; they still contain the stacks upon stacks of timers and abstractions necessary to process input from peripherals.

The problem is the entire software stack between the keyboard and the display. From USB polling to driver loops and GPU callbacks, everything has become incredibly asynchronous, making it trivial for computers to miss a frame boundary. Compared to DOS or similar environments, where an application basically took control of the entire CPU and whatever peripherals it knew how to access, there are millions of small points where inefficiencies can creep in. Compare that to the hardware interrupts and basic processor I/O earlier generations of computers used, where an entered key was sitting in a buffer before the operating system even knew what was happening.
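
From memory, and only as a rough illustration (Borland/Turbo C style, DOS real mode), getting a key straight off the hardware looked something like this:

    /* Hook IRQ1 (INT 9) and read the scancode directly from port 0x60.
       There is no driver stack in between: the key is in the handler
       one interrupt after the keystroke. Sketch only, from memory. */
    #include <dos.h>

    void interrupt (*old_kbd)(void);        /* previous INT 9 handler */
    volatile unsigned char last_scancode = 0;

    void interrupt kbd_handler(void)
    {
        last_scancode = inportb(0x60);      /* raw scancode from the keyboard controller */
        old_kbd();                          /* chain to the BIOS handler (it acknowledges the IRQ) */
    }

    int main(void)
    {
        old_kbd = getvect(9);               /* save the original vector */
        setvect(9, kbd_handler);            /* install ours */
        while (last_scancode != 0x01)       /* 0x01 = Esc make code */
            ;                               /* the application owns the machine; nothing else runs */
        setvect(9, old_kbd);                /* always restore before exiting */
        return 0;
    }

Everything between the keystroke and the handler is a single interrupt - no USB polling interval, no compositor, no event queue.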

VSCode isn't even that slow, really. I don't find it to be any slower than Zed, for instance. Given the technology stack underneath VSCode, that's an impressive feat by the Microsoft programmers. But the kind of performance TUI programs of yore got for free just isn't available to user space applications anymore without digging into low-level input APIs and writing custom GPU shaders.

In small part, the CRTs of the mid-80s running at 70Hz or 85Hz, with their much smoother display output compared to even modern LCDs, also made for a much better typing experience.

anthk 10 hours ago | parent [-]

PS/2 keyboards and mice had direct interrupts through IRQs.