Fiveplus 7 hours ago

>The goal is, that xfwl4 will offer the same functionality and behavior as xfwm4 does...

I wonder how strictly they interpret behavior here given the architectural divergence?

As an example, take focus-stealing prevention. In xfwm4 (and X11 generally), this requires complex heuristics and timestamp checks, because X11 clients are powerful and can aggressively grab focus. In Wayland, the compositor is the sole arbiter of focus, so clients can't steal it; they can only request it via xdg-activation. Porting the legacy X11 logic therefore really means designing a new policy that feels like the old heuristic but operates on Wayland's strict authority model.
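
To make that concrete, here is a minimal sketch (in Rust, since that's what Smithay is written in) of what such a compositor-side policy decision might look like. Every name in it is made up for illustration; this is not Smithay's actual API:

    // Illustrative only: under Wayland, "focus stealing prevention" becomes
    // a compositor-side decision about when to honor an xdg-activation
    // request. These types are hypothetical, not Smithay's real API.
    type Timestamp = u64; // milliseconds on some monotonic clock

    struct ActivationRequest {
        // Timestamp of the user input the activation token was derived
        // from, if any. Tokens with no user input behind them get denied.
        input_timestamp: Option<Timestamp>,
    }

    struct FocusPolicy {
        last_user_activity: Timestamp, // last interaction with the focused window
        grace_period: u64,             // xfwm4-style steal-prevention window
    }

    impl FocusPolicy {
        fn should_grant(&self, req: &ActivationRequest, now: Timestamp) -> bool {
            match req.input_timestamp {
                None => false, // no user input: mark "demands attention" instead
                Some(t) => t >= self.last_user_activity && now - t <= self.grace_period,
            }
        }
    }

    fn main() {
        let p = FocusPolicy { last_user_activity: 1_000, grace_period: 2_000 };
        assert!(p.should_grant(&ActivationRequest { input_timestamp: Some(1_500) }, 2_000));
        assert!(!p.should_grant(&ActivationRequest { input_timestamp: Some(500) }, 2_000));
    }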

This leads to my main curiosity: the raw responsiveness of XFCE. On potato hardware, xfwm4 often feels snappy because it can run as a plain stacking window manager with the compositor disabled. Wayland, by definition, forces compositing. While I am not concerned about Rust-vs-C latency (since Smithay compiles to machine code without a GC), I am curious about the mandatory compositing overhead. Can the compositor replicate the input-to-pixel latency of uncomposited X11 on low-end devices, or is that a class of performance we just have to sacrifice for the frame-perfect rendering of Wayland?

pjmlp 6 hours ago | parent | next [-]

At least they are honest about the reasons, not a wall of text to justify what boils down to "because I like it".

Naturally, these kinds of language islands create some friction around build tooling, integration with the existing ecosystem, and who is able to contribute to what.

So let's see how it evolves. Even with my C bashing, I was a much happier XFCE user than with GNOME and GJS all over the place.

amazari 5 hours ago | parent [-]

You know that all the Wayland primitives, event handling, and drawing in gnome-shell are handled in C/native code through Mutter, right? The JavaScript in gnome-shell is the cherry on top for scripting, similar to C#/Lua (or any GCed language) in game engines, elisp in Emacs, even JS in QtQuick/QML.

It is not the performance bottleneck people seem to believe.

ChocolateGod an hour ago | parent | next [-]

It has been the case that stalls in GJS land can stall the compositor, though, especially during a GC cycle.

pjmlp 5 hours ago | parent | prev [-]

I can dig out the old GNOME tickets and related blog posts...

Implementation matters, including proper use of JIT/AOT toolchains.

joe_mamba 2 hours ago | parent [-]

>I can dig out the old GNOME tickets and related blog posts...

That's the easiest way to win any argument about GNOME. You're going straight for the nuclear option.

badsectoracula 4 hours ago | parent | prev | next [-]

One thing to keep in mind is that compositing does not have to mean vsync: you can refresh the screen the moment a client tells you the window has new contents.
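
A toy sketch of that idea (not real compositor code; the "client" is just a simulated thread):

    use std::sync::mpsc;
    use std::thread;
    use std::time::{Duration, Instant};

    // Repaint the moment a client commits, instead of waiting for the next
    // vblank. A real compositor would add damage tracking and some way of
    // dealing with tearing.
    enum Event {
        ClientCommit { window_id: u32 },
    }

    fn main() {
        let (tx, rx) = mpsc::channel();
        let t0 = Instant::now();

        // Simulated client committing new contents at arbitrary times.
        thread::spawn(move || {
            for i in 0..3u64 {
                thread::sleep(Duration::from_millis(5 + i * 7));
                tx.send(Event::ClientCommit { window_id: 1 }).unwrap();
            }
        });

        // No vsync wait: composite as soon as each commit arrives.
        while let Ok(Event::ClientCommit { window_id }) = rx.recv() {
            println!("repainted window {} at {:?}", window_id, t0.elapsed());
        }
    }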

mikkupikku an hour ago | parent | prev | next [-]

Compositor overhead, even with cheapo Intel laptop graphics, is basically a non-issue these days. The people still rocking their 20-year-old ThinkPads might want to choose something else, but beyond that kind of user I don't think it's worth worrying too much about.

jchw 5 hours ago | parent | prev | next [-]

> Can the compositor replicate the input-to-pixel latency of uncomposited X11 on low-end devices, or is that a class of performance we just have to sacrifice for the frame-perfect rendering of Wayland?

I think this is ultimately correct. The compositor has to render its frame at some point after the vblank signal, using whatever client buffers are current at that point, which is whatever was last rendered into them.

This can be somewhat alleviated, though. Both KDE and GNOME have been getting progressively more aggressive about "unredirecting" surfaces onto hardware-accelerated DRM planes in more circumstances. Unredirected surfaces do not suffer compositing latency, because their buffers are read directly by the display engine at scanout time, alongside the rest of the composited result. In modern Wayland, this is accomplished via both underlays and overlays.
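
A rough sketch of the kind of decision involved (hypothetical field names, not KWin's or Mutter's actual logic):

    // "Unredirection" in a nutshell: if a spare hardware plane can handle a
    // surface's buffer, let the display engine scan it out directly and skip
    // compositing it for that frame.
    struct Surface {
        fullscreen_or_opaque: bool,
        format_supported_by_plane: bool,
        needs_transform: bool, // e.g. a rotation the plane can't do
    }

    enum Path {
        DirectScanout, // zero-copy: no compositing latency for this surface
        Composite,     // rendered into the compositor's frame as usual
    }

    fn choose_path(s: &Surface, free_planes: usize) -> Path {
        if free_planes > 0
            && s.format_supported_by_plane
            && !s.needs_transform
            && s.fullscreen_or_opaque
        {
            Path::DirectScanout
        } else {
            Path::Composite
        }
    }

    fn main() {
        let game = Surface {
            fullscreen_or_opaque: true,
            format_supported_by_plane: true,
            needs_transform: false,
        };
        assert!(matches!(choose_path(&game, 1), Path::DirectScanout));
    }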

There is also a slight penalty to the latency of mouse cursor movement that is imparted by using atomic DRM commits. Since using atomic DRM is very common in modern Wayland, it is normal for the cursor to have at least a fraction of a frame of added latency (depending on many factors.)

I'm of two minds about this. One, obviously it's sad. The old hardware worked perfectly and never had latency issues like this. Could it be possible to implement Wayland without full compositing? Maybe, actually. But I don't expect anyone to try, because let's face it, people have simply accepted that we now live with slightly more latency on the desktop. But then again, "old" hardware is now hardware that can, more often than not, handle high refresh rates pretty well on desktop. An on-average increase of half a frame of latency is pretty bad at 60 Hz: it's, what, 8.3 ms? But half a frame at 144 Hz is much less, somewhere around 3.5 ms of added latency, which I think is more acceptable. Combined with aggressive underlay/overlay usage and dynamic triple buffering, I think this makes the compositing experience an acceptable tradeoff.
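
The arithmetic is easy to sanity-check: the average added latency is half the frame duration, i.e. 500 / refresh rate in milliseconds.

    // Sanity check on the half-frame figures above.
    fn half_frame_ms(hz: f64) -> f64 {
        500.0 / hz
    }

    fn main() {
        println!("60 Hz:  {:.1} ms", half_frame_ms(60.0));  // 8.3
        println!("144 Hz: {:.1} ms", half_frame_ms(144.0)); // 3.5
    }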

What about computers that really can't handle something like 144 Hz or higher output? Well, tough call. I mean, I have some fairly old computers that can definitely handle at least 100 Hz very well on desktop. I'm talking Pentium 4 machines with old GeForce cards. Linux is certainly happy to go older (though the baseline has been inching up there; I think you need at least Pentium now?) but I do think there is a point where you cross a line where asking for things to work well is just too much. At that point, it's not a matter of asking developers to not waste resources for no reason, but asking them to optimize not just for reasonably recent machines but also to optimize for machines from 30 years ago. At a certain point it does feel like we have to let it go, not because the computers are necessarily completely obsolete, but because the range of machines to support is too wide.

Obviously, though, simply going for higher refresh rates can't fix everything. Plenty of laptops have screens that can't go above 60 Hz, and they are forever stuck with a few extra milliseconds of latency when using a compositor. It's not ideal, but what are you going to do? Compositors offer many advantages, and it seems reasonable to design for a future where they are always on.

drob518 4 hours ago | parent | next [-]

Love your post. So, don’t take this as disagreement.

I’m always a little bewildered by frame rate discussions. Yes, I understand that more is better, but for non-gaming apps (e.g. “productivity” apps), do we really need much more than 60 Hz? Yes, you can get smoother fast scrolling with higher frame rate at 120 Hz or more, but how many people were complaining about that over the last decade?

array_key_first an hour ago | parent | next [-]

I never complained about 60, then I went to 144, and 60 feels painful now. The latency is noticeable in every interaction, not just gaming. It's immediately evident: the computer just feels more responsive, like you're in complete control.

Even phones have moved in this direction, and it's immediately noticeable when using one for the first time.

I'm now on 240 Hz and the effect is very diminished, especially outside of gaming. But even then I notice it, although stepping down to 144 isn't the worst. 60, though, feels like ice on your teeth.

drob518 43 minutes ago | parent [-]

Did you use the same computer at both 60 and 144? I have no doubt that 144 feels smoother for scrolling and things like that. It definitely should. But if you upgraded your system at the same time you upgraded your display, much of the responsiveness would be due to a faster system.

eptcyka 17 minutes ago | parent [-]

I have a projector that can project 4K at 60 Hz or 1080p at 240 Hz, and I can really notice it just by moving the cursor around. I don't need to render my games anywhere near 240 fps to notice it either. Same with phones: moving from a Pixel 3 to a Pixel 5, scrolling through the settings or the home screen was a palpable difference. The Pixel 3 now feels broken. It is not; it just renders at 60 instead of 90 fps.

jeroenhd 3 hours ago | parent | prev | next [-]

I enjoy working on my computer more at 144Hz than 60Hz. Even on my phone, the switch from 60Hz to a higher frame rate is quite obvious. It makes the entire system feel more responsive and less glitchy. VRR also helps a lot in cases where the system is under load.

60Hz is actually a downgrade from what people were used to. Sure, games and such struggled to get that kind of performance, but CRT screens did 75Hz/85Hz/100Hz quite well (perhaps at lower resolutions, because full-res 1200p sometimes made text difficult to read on a 21 inch CRT, with little benefit from the added smoothness as CRTs have a natural fuzzy edge around their straight lines anyway).

There's nothing about programming or word processing that requires more than maybe 5 or 6 fps (very few people type more than 300 characters per minute anyway) but I feel much better working on a 60 fps screen than I do a 30 fps one.

Everyone has different preferences, though. You can extend your laptop's battery life by quite a bit by reducing the refresh rate to 30Hz. If you're someone who doesn't really mind the frame rate of their computer, it may be worth trying!

rabf 2 hours ago | parent [-]

CRT screens did 75Hz/85Hz/100Hz quite well, but rendered only one pixel/dot at a time. This is in no way equivalent to 60Hz on a flat panel!

elektronika 4 hours ago | parent | prev | next [-]

> how many people were complaining about that over the last decade?

Quite a few. These articles tend to make the rounds when it comes up: https://danluu.com/input-lag/ and https://lwn.net/Articles/751763/

Perception varies from person to person, but going from my 144 Hz monitor to my old 60 Hz work laptop is so noticeable to me that I switched the laptop from a composited Wayland DE to an X11 WM.

drob518 3 hours ago | parent [-]

Input lag is not the same as refresh rate. 60 Hz is 16.7 ms per frame. If it takes a long time for input to appear on screen, it's because of the layers and layers of bloat we have in our UI systems.

wtallis an hour ago | parent [-]

Refresh rate directly affects one of the components of total input lag, and increasing refresh rate is one of the most straightforward ways for an end user to chip away at that input lag problem.

drob518 42 minutes ago | parent [-]

Well, sure. But so is buying a faster processor.

eptcyka 16 minutes ago | parent [-]

Naive triple buffering shall not be defeated by a faster CPU.

jchw 4 hours ago | parent | prev | next [-]

Essentially, the only reasons to go over 60 Hz on the desktop are a better "feel" and lower latency. Compositing latency is mostly measured in frames, so the most obvious and simplest way to lower it is to shorten how long a frame is, hence higher refresh rates.

However, I do think that high refresh rates feel very nice to use even if they are not strictly necessary. I consider it a nice luxury.

drob518 4 hours ago | parent [-]

Fair

layer8 4 hours ago | parent | prev | next [-]

I agree. Keyboard-action-to-result-on-screen latency is much more important, and we are typically way above 17 ms for that.

drob518 3 hours ago | parent [-]

Yep, agreed, though it’s not just keyboard to screen. It’s also mouse click to screen. Really, any event to screen.

layer8 2 hours ago | parent [-]

Initially I wrote “input device”, but since mouse movements aren’t generally a problem, I narrowed it to “keyboard”. ;) Mouse clicks definitely fall into the same category, though.

bee_rider 4 hours ago | parent | prev [-]

If our mouse cursors are going to have half a frame of latency, I guess we will need 60 Hz or 120 Hz desktops, or whatever.

I dunno. It does seem a bit odd, because who was thinking about the framerates of, like, desktops running productivity software, for the last couple decades? I guess I assumed this would never be a problem.

jchw 3 hours ago | parent | next [-]

Mouse cursor latency and window compositing latency are two separate things. I probably did not do a good enough job conveying this. In a typical Linux setup, the mouse cursor gets its own DRM plane, so it will be rendered on top of the desktop during scanout right as the video output goes to the screen.

There are two things that typically impact mouse cursor latency, especially with regards to Wayland:

- Software rendering, which is sometimes used if hardware cursors are unavailable or buggy for driver/GPU reasons. In this case the cursor is rendered onto the composited desktop frame and thus suffers compositing latency, which is tied to the refresh rate.

- Atomic DRM commits. With atomic commits, even hardware-rendered cursors can suffer additional latency. In this case, the added latency is not necessarily tied to frame times or refresh rates. Instead, it's tied to when during the refresh cycle the atomic commit is sent; specifically, how close to the deadline. I think in most cases we're talking a couple milliseconds of latency. It has been measured before, but I cannot find the source.

Wayland compositors tend to use atomic DRM commits, hence a slightly more laggy mouse cursor. I honestly couldn't tell you if there is a specific reason why they must use atomic DRM, because I don't have knowledge that runs that deep, only that they seem to.
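
As a toy model of where that latency comes from (illustrative only; real timing depends on driver deadlines): an atomic commit submitted partway into the refresh cycle isn't latched until the next vblank, so the added delay is the distance to that vblank.

    // Toy model: an atomic commit at some offset into the refresh cycle is
    // latched at the next vblank. Not real driver logic.
    fn latch_delay_ms(commit_offset_ms: f64, frame_ms: f64) -> f64 {
        frame_ms - (commit_offset_ms % frame_ms)
    }

    fn main() {
        let frame = 1000.0 / 60.0; // 60 Hz
        for offset in [1.0, 8.0, 15.0] {
            println!("commit at +{offset:.0} ms -> latched {:.1} ms later",
                     latch_delay_ms(offset, frame));
        }
    }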

drob518 4 hours ago | parent | prev [-]

Mouse being jumpy shouldn’t be related to refresh rate. The mouse driver and windowing system should keep track of the mouse position regardless of the video frame rate. Yes, the mouse may jump more per frame with a lower frame rate, but that should only be happening when you move the mouse a long distance quickly. Typically, when you do that, you’re not looking at the mouse itself but at the target. Then, once you’re near it, you slow down the movement and use fine motor skills to move it onto the target. That’s typically much slower and frame rate won’t matter much because the motion is so much smaller.

michaelmrose 3 hours ago | parent | prev [-]

I couldn't find ready stats on what percentage of displays are 60 Hz, but outside of gaming and high-end machines I suspect 60 Hz is still the majority of machines used by actual users, meaning we should evaluate the latency as it is observed by most users.

simoncion 7 hours ago | parent | prev | next [-]

> ...or is that a class of performance we just have to sacrifice for the frame-perfect rendering of Wayland?

I think I know what "frame perfect" means, and I'm pretty sure that you've been able to get that for ages on X11... at least with AMD/ATi hardware. Enable (or have your distro enable) the TearFree option, and there you go.
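
For reference, with the amdgpu DDX that's a snippet along these lines in xorg.conf (the Identifier is arbitrary; the older radeon driver has the same option):

    Section "Device"
        Identifier "AMD Graphics"
        Driver     "amdgpu"
        Option     "TearFree" "true"
    EndSection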

I read somewhere that TearFree is triple buffering, so, if true, it's my (perhaps mistaken) understanding that this adds a frame of latency.

wtallis 5 hours ago | parent | next [-]

> I read somewhere that TearFree is triple buffering, so -if true- it's my (perhaps mistaken) understanding that this adds a frame of latency.

True triple buffering doesn't add a full frame of latency, but since it enforces that only whole frames are sent to the display instead of tearing, it can add partial frames of latency. (It's hard to come up with a well-defined measure of frame latency when tearing is allowed.)

But there have been many systems that abused the term "triple buffering" to refer to a three-frame queue, which always does add unnecessary latency, making it almost always the wrong choice for interactive systems.
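
A sketch of the contrast, with frames represented only by their render-completion times (illustrative, not any real swapchain implementation):

    use std::collections::VecDeque;

    // "True" triple buffering: at vblank, show the newest finished frame and
    // drop any older, never-shown ones.
    fn present_triple(finished: &mut Vec<u64>) -> Option<u64> {
        let newest = finished.pop();
        finished.clear(); // stale frames are discarded, not queued
        newest
    }

    // Three-frame queue: strictly FIFO, nothing is dropped, so a full queue
    // makes every frame wait roughly two extra refresh cycles.
    fn present_queue(queue: &mut VecDeque<u64>) -> Option<u64> {
        queue.pop_front()
    }

    fn main() {
        // Three frames finished at t = 10, 20, 30 ms, all before one vblank:
        let mut finished = vec![10, 20, 30];
        let mut queue = VecDeque::from(vec![10, 20, 30]);
        println!("triple buffering shows t={:?}", present_triple(&mut finished)); // Some(30)
        println!("frame queue shows t={:?}", present_queue(&mut queue));          // Some(10)
    }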

gryn 3 hours ago | parent | prev [-]

Only on the primary display, though. Once you had more than one display, there were only workarounds.

imcritic 6 hours ago | parent | prev | next [-]

Xfce / xfwm4 doesn't offer focus stealing prevention.

stonogo 5 hours ago | parent [-]

Settings -> Window Manager Tweaks -> Focus -> Activate focus stealing prevention

https://gitlab.xfce.org/xfce/xfwm4/-/blob/master/settings-di...

PunchyHamster 6 hours ago | parent | prev [-]

> Can the compositor replicate the input-to-pixel latency of uncomposited X11 on low-end devices, or is that a class of performance we just have to sacrifice for the frame-perfect rendering of Wayland?

Well, the answer is just no: Wayland has been consistently slower than X11, and nothing running on top of it can really get around that.

ca1f 6 hours ago | parent | next [-]

Can you cite any sources for that claim? I found this blog post that says Wayland is pretty much on par with X11, except for XWayland, which should be considered a band-aid anyway: https://davidjusto.com/articles/m2p-latency/

Rogach 4 hours ago | parent [-]

Here's one article: https://mort.coffee/home/wayland-input-latency/

It's specifically about cursor lag, but I think that's because it's more difficult to experimentally measure app rendering latency.

happymellon 5 hours ago | parent | prev | next [-]

> wayland has been consistently slower than X11

Wayland is a specification; it can't be "faster" or "slower" than other options. That's like saying JSON is 5% slower than Word.

And as for the implementations being slower than X, that also doesn't reflect reality.

https://www.phoronix.com/review/ubuntu-2504-x11-gaming

michaelmrose 6 hours ago | parent | prev [-]

There is no Wayland to run on top of, as it's a standard to implement rather than a server to talk to.