munchler a day ago

I’m sure this is a dumb question, but why does a code editor need to render on the GPU like a video game? Is it just for niceties like smooth scrolling?

delta_p_delta_x a day ago | parent | next [-]

> but why does a code editor need to render on the GPU like a video game?

It isn't just text editors: nowadays, everything renders on your GPU, even your desktop and terminal (unless you're on a tty). For example, at the bottom of the Chromium, Electron, and Avalonia graphics stacks sits Skia, a cross-platform, GPU-accelerated 2D graphics library.

GPU compositing is what enables transparency, glass effects, and shadows, and it makes writing these programs much easier, since everything shares the same interface and the same rendering pipeline as everything else.

A window in front of another, or a window partially outside the display? No big deal, just set the 3D coordinates, width, and height correctly for each window, and the GPU will do hidden-surface removal and viewing frustum clipping automatically and for free, no need for any sorting. Want a 'preview' of the live contents of each window in a task bar or during Alt-Tab, like on Windows 7? No problem, render each window to a texture and sample it in the taskbar panels' smaller viewports. Want to scale or otherwise squeeze/manipulate the contents of each window during minimise/maximise, like macOS does? Easy, write a shader.
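The transparency mentioned above boils down to a per-pixel blend. Here's a minimal CPU-side sketch of the standard Porter-Duff "over" operator, purely illustrative; a real compositor configures this as a GPU blend state rather than running it in a loop:

```rust
/// Porter-Duff "over": composite a translucent window pixel onto the
/// desktop pixel behind it. Values are straight (non-premultiplied)
/// RGBA components in 0.0..=1.0.
pub fn over(src: [f32; 4], dst: [f32; 4]) -> [f32; 4] {
    let (sa, da) = (src[3], dst[3]);
    let out_a = sa + da * (1.0 - sa);
    let mut out = [0.0; 4];
    for c in 0..3 {
        out[c] = if out_a > 0.0 {
            (src[c] * sa + dst[c] * da * (1.0 - sa)) / out_a
        } else {
            0.0
        };
    }
    out[3] = out_a;
    out
}

fn main() {
    // A 50%-opaque white "glass" pixel over an opaque black desktop pixel
    // comes out mid-grey and fully opaque.
    let result = over([1.0, 1.0, 1.0, 0.5], [0.0, 0.0, 0.0, 1.0]);
    println!("{:?}", result);
}
```

The GPU runs this same formula for every pixel of every translucent window, every frame, which is exactly the kind of embarrassingly parallel work it was built for.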

This was a big deal in the early 2000s, when GPUs finally had enough raw compute to run everything all the time, and basically every OS and compositor switched to GPU rendering on roughly the same timeline: Quartz Extreme on Mac OS X, the DWM on Windows, and Linux's variety of compositors, including KWin, Compiz, and more.

There's a reason OSs from that time frame had so many glassy, funky effects: they existed primarily to show off just how advanced the GPU-powered compositors were. It's also a big reason why Windows Vista fell so hard on its face: its compositor was especially hard on the scrawny integrated GPUs of the time, enough that two themes, Aero Basic and Aero Glass, had to be shipped for different tiers of GPU.

munchler a day ago | parent [-]

Thanks. That explains why OSs use the GPU for rendering windows and effects, but it's still not clear to me why a code editor would do the same. The features you list (transparency, glass effects, shadowing, window management, etc.) seem to be outside the purview of a text editor.

If you're saying that Zed is built on something like Skia, then it would already be cross-platform and not have to worry about Vulkan vs. DirectX, right?

delta_p_delta_x a day ago | parent [-]

> but it's still not clear to me why a code editor would do the same.

Happy to elaborate further.

Old-school text rendering began with a table mapping character codes to fixed-size bitmaps (this was the font), and rendering was straightforward: divide the framebuffer resolution by the bitmap resolution, clip or wrap the remainder, place the bitmaps into the resulting grid, and pipe the framebuffer to the display. Done.
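That fixed-cell scheme is simple enough to sketch in a few lines. The 8x8 cell size, 1-bit-per-pixel glyphs, and one-byte-per-pixel framebuffer here are illustrative assumptions, not any particular historical format:

```rust
const CELL_W: usize = 8;
const CELL_H: usize = 8;

/// Blit fixed-size 1-bit glyph bitmaps into a framebuffer grid: a font is
/// just a table indexed by character code, each byte of a glyph is one
/// row, and each bit is one pixel.
pub fn draw_text(fb: &mut [u8], fb_w: usize, font: &[[u8; CELL_H]; 256], text: &[u8]) {
    let cols = fb_w / CELL_W; // divide framebuffer width by bitmap width
    for (i, &ch) in text.iter().enumerate() {
        let (col, row) = (i % cols, i / cols); // wrap onto the next grid row
        for y in 0..CELL_H {
            let bits = font[ch as usize][y];
            for x in 0..CELL_W {
                let on = (bits >> (7 - x)) & 1;
                fb[(row * CELL_H + y) * fb_w + col * CELL_W + x] = on * 255;
            }
        }
    }
}

fn main() {
    let mut font = [[0u8; CELL_H]; 256];
    font[b'#' as usize] = [0xFF; CELL_H]; // a solid 8x8 block glyph
    let mut fb = vec![0u8; 16 * 8]; // a tiny 16x8, one-byte-per-pixel framebuffer
    draw_text(&mut fb, 16, &font, b"#");
    let lit = fb.iter().filter(|&&p| p == 255).count();
    println!("lit pixels: {lit}");
}
```

No shaping, no kerning, no anti-aliasing: every glyph lands on a grid cell, which is why this was fast enough for 1980s hardware.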

Nowadays, text editors don't just have text; they have markup like highlighting and syntax colouring (with 24-bit deep-colour, rather than the ANSI 16 colour codes), go-to, version control annotations, debug breakpoints, hover annotations, and in the case of 'notebooks' like Python notebooks, may have embedded media like images, videos, and even 3D renders. Many editor features may open pop-up windows or dialogue boxes, which will probably occlude the text 'behind'.

Now, most modern text editors also expect to work with non-bitmapped, non-monospaced typefaces in OpenType or TrueType format. These are complex beasts of their own, with hinting, ligatures, variable weights, and more, and may even embed entire programs. Glyph outlines are Bézier/polynomial splines, which have to be flattened or scan-converted into pixels, work a GPU is well suited to. After rasterisation, any reasonable text editor applies anti-aliasing, which is also commonly delegated to the GPU, probably with a different algorithm for text (which needs to account for the display's subpixel layout) than for UI elements (which may not).
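For a feel of what the rasteriser works with: TrueType outlines are built from quadratic Bézier curves, and evaluating a point on one is just repeated linear interpolation (de Casteljau's algorithm). A minimal sketch, with arbitrary control points for illustration:

```rust
/// Evaluate a quadratic Bezier curve at parameter t in 0.0..=1.0 by
/// repeated linear interpolation (de Casteljau's algorithm).
pub fn quad_bezier(p0: (f64, f64), p1: (f64, f64), p2: (f64, f64), t: f64) -> (f64, f64) {
    let lerp = |a: (f64, f64), b: (f64, f64)| (a.0 + (b.0 - a.0) * t, a.1 + (b.1 - a.1) * t);
    let (a, b) = (lerp(p0, p1), lerp(p1, p2)); // collapse three points to two...
    lerp(a, b) // ...and two to one: the point on the curve
}

fn main() {
    // Flatten one curve segment into a short polyline, as a rasteriser
    // (CPU or GPU) might do before scan-converting the outline.
    let (p0, p1, p2) = ((0.0, 0.0), (0.5, 1.0), (1.0, 0.0));
    for i in 0..=4 {
        let t = i as f64 / 4.0;
        println!("{:?}", quad_bezier(p0, p1, p2, t));
    }
}
```

A real font renderer does this per curve segment, per glyph, per size, then fills the resulting contours and anti-aliases the coverage, which is why caching rasterised glyphs in a GPU texture atlas is such a common design.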

The point I am driving at is that the proliferation of features expected from a modern text editor means that using the GPU for all of this is a natural evolution. As users, we may think 'it's just text', but from the perspective of the developer or the hardware, text and a ray-traced 3D game are no different: it's one 4x4 matrix multiplied by another, one after another, reduced in the end to a three-vector representing the colour of a pixel.
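To make that reduction concrete, here is the one operation underneath it all: a 4x4 matrix applied to a homogeneous coordinate. The translation example is illustrative; the hardware sees the same multiply whether the vertex belongs to a glyph quad or a game's triangle:

```rust
type Mat4 = [[f32; 4]; 4];
type Vec4 = [f32; 4];

/// Multiply a 4x4 transform matrix by a homogeneous 4-vector.
pub fn mul(m: &Mat4, v: &Vec4) -> Vec4 {
    let mut out = [0.0; 4];
    for r in 0..4 {
        for c in 0..4 {
            out[r] += m[r][c] * v[c];
        }
    }
    out
}

fn main() {
    // Translate a glyph-quad corner by (10, 20) in one matrix multiply.
    let translate: Mat4 = [
        [1.0, 0.0, 0.0, 10.0],
        [0.0, 1.0, 0.0, 20.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ];
    println!("{:?}", mul(&translate, &[1.0, 2.0, 0.0, 1.0]));
}
```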

> If you're saying that Zed is built on something like Skia, then it would already be cross-platform and not have to worry about Vulkan vs. DirectX, right?

Absolutely, because Skia handles that for the developer. And I suspect the reason why Zed didn't use Skia in the first place is ideological (Skia is by Google, written in C++), together with wanting to write 'the whole world' in Rust.

pjmlp 20 hours ago | parent [-]

The last point is kind of ironic, given that Metal is Objective-C with C++14 as its shading language, plus Swift bindings and a light C++ wrapper library; Vulkan is C99 (the tutorials use the C++20 bindings); and DirectX is C++ with a COM-based API.

leecommamichael a day ago | parent | prev | next [-]

It doesn't need to. It's typical to do this these days, but they could still arrange all of the pixels on the CPU and then blit it onto the screen. There's an API to do so in every major OS.

Since it's more than quick enough to do this on the CPU, they're likely doing it for things like animations and very high-quality font rendering. There's image processing going on when you really care about quality: oversampling and filtering.
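As a rough sketch of that oversample-then-filter idea (not Zed's actual pipeline), the simplest version is to render at 2x resolution and average each 2x2 block down with a box filter:

```rust
/// Downsample a grayscale buffer by 2x with a box filter: each output
/// pixel is the average of a 2x2 block of input pixels. This is the
/// crudest form of supersampled anti-aliasing.
pub fn box_downsample(src: &[u8], w: usize, h: usize) -> Vec<u8> {
    let mut dst = Vec::with_capacity((w / 2) * (h / 2));
    for y in (0..h).step_by(2) {
        for x in (0..w).step_by(2) {
            let sum: u32 = [(0, 0), (1, 0), (0, 1), (1, 1)]
                .iter()
                .map(|&(dx, dy)| src[(y + dy) * w + (x + dx)] as u32)
                .sum();
            dst.push((sum / 4) as u8);
        }
    }
    dst
}

fn main() {
    // A hard diagonal edge rendered at 2x averages down to grey coverage.
    let src = [0, 0, 0, 255,
               0, 0, 255, 255]; // 4x2: black on the left, white lower-right
    println!("{:?}", box_downsample(&src, 4, 2));
}
```

Production font renderers use more careful filters (and subpixel positioning), but the principle is the same, and it is exactly the sort of per-pixel work that parallelises well on a GPU.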

I suspect one could do most everything Zed does without a GPU, but about 10 to 20% uglier, depending on how discerning the user is on such things.

ben-schaaf a day ago | parent [-]

> Since it's more than quick enough to do this on the CPU

This is true until it isn't. A modern-ish CPU at 1080p 60 Hz will be fine. At 4K 120 Hz, even the fastest CPU on the market won't keep up. And then there's 8K.
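The arithmetic backs this up. Just touching every pixel once per frame at 4 bytes per pixel (RGBA8) costs the following raw bandwidth, before any blending, overdraw, or the cost of deciding what colour each pixel should be:

```rust
/// Bytes per second needed to write every pixel once per frame,
/// at 4 bytes per pixel (RGBA8).
pub fn bytes_per_second(w: u64, h: u64, hz: u64) -> u64 {
    w * h * 4 * hz
}

fn main() {
    for (name, w, h, hz) in [
        ("1080p @ 60 Hz", 1920, 1080, 60),
        ("4K @ 120 Hz", 3840, 2160, 120),
        ("8K @ 120 Hz", 7680, 4320, 120),
    ] {
        let gb = bytes_per_second(w, h, hz) as f64 / 1e9;
        println!("{name}: {gb:.1} GB/s");
    }
}
```

That works out to roughly 0.5 GB/s at 1080p 60 Hz, about 4 GB/s at 4K 120 Hz, and about 16 GB/s at 8K 120 Hz: numbers a single CPU core writing pixels in a loop cannot realistically sustain alongside the actual work of being an editor.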

> they're likely doing it for things like animations and very high quality font rendering

Since they're using native render functions, this probably isn't the case.

leecommamichael a day ago | parent [-]

I’m almost nerd-sniped enough to try and see exactly where it breaks down.

What’s a native render function? Do you mean just using a graphics API as opposed to an off-the-shelf UI library?

ben-schaaf a day ago | parent [-]

> What’s a native render function?

As in using DirectWrite or GDI on Windows; or Core Text on macOS. As opposed to shipping your own glyph rasterizer.

hoistbypetard 17 hours ago | parent [-]

Doesn't the blog post specifically say they are shipping their own glyph rasterizer?

ben-schaaf 5 hours ago | parent [-]

No?

> To work around this limitation, we decided to stop using Direct2D and switch to rasterizing glyphs using DirectWrite instead.

starkrights 16 hours ago | parent | prev [-]

The Zed blog has an early post[0] talking some about their decision, mainly decrying their experience of impossible-to-meet frame deadlines for something as basic as 60 fps on Electron.

It doesn't really do a technical breakdown of why it'd be impossible CPU-side, but it mentions a couple of things about their design process.

[0]: https://zed.dev/blog/videogame