Are we stuck with the same Desktop UX forever? [video] (youtube.com)
120 points by joelkesler 8 hours ago | 125 comments
jhhh 4 hours ago | parent | next [-]

I understand the desire to fix user pain points. There are plenty to choose from. I think the problem is that most of the UI changes don't seem to fix any particular issue I have. They are just different, and when some changes do create even more problems, there's never any configuration to disable them. You're trying to create a perfect, coherent system for everyone, absent the ability to configure it to our liking. He even mentioned how unpopular making things configurable is in the UI community.

A perfect pain point example was mentioned in the video: text selection on mobile is trash. But each app seems to have a different solution, even from the same developer. Google Messages doesn't allow any text selection at a granularity finer than an entire message. Some other apps have opted in to a 'smart' text select which, when you select text, will guess and seemingly randomly group-select adjacent words. And lastly, some apps will only ever select a single word when you double tap, which seemed to be the standard on mobile for a long time. All of this is inconsistent, and often I'll want to do something like look up a word and realize: oh, I can't select the word at all (Google Messages), or the system 'smartly' selected 4 words instead, or it did what I wanted and actually just picked one word. Each application designer decided they wanted to make their own change and made the whole system fragmented and worse overall.

PunchyHamster 38 minutes ago | parent | next [-]

> He even mentioned how unpopular making things configurable is in the UI community.

The inability to imagine that someone might have a different idea about what's useful is a general plague of the UI/UX industry. And there seems to be zero care given to usage by users who have to use the app for longer than 30 seconds a day. The productivity-vs-learning-time curve is basically flat, and low, the exception being pretty much "the tools made by X for X", like programming IDEs.

porkbrain 3 hours ago | parent | prev | next [-]

Text selection used to be frustrating on mobile for me too until Google fixed it with OCR. I get to just hold a button briefly and then can immediately select an area of the screen to scan text from, with a consistent UX. Like a screenshot but for text.

taskforcegemini 2 hours ago | parent | next [-]

They are using OCR for selecting plain text?

eastbound 32 minutes ago | parent | next [-]

On iPhone too, taking a screenshot is the single reliable way to select text.

AlienRobot 2 hours ago | parent | prev [-]

At least it's not AI... yet.

xnx an hour ago | parent [-]

Multi-modal LLMs like Gemini are better than traditional OCR in most ways.

supportengineer 2 hours ago | parent | prev [-]

That's how I do it on the iPhone as well. I take a screenshot first.

You can count on it: it is reliable, it always works.

diziet_sma 3 hours ago | parent | prev [-]

Universal search on Google Pixels has solved a lot of the text selection problems on Android for me, with the exception being selecting text which requires scrolling.

linguae 5 hours ago | parent | prev | next [-]

I enjoyed this talk, and I want to learn more about the concept of “learning loops” for interface design.

Personally, I wish there were a champion of desktop usability like Apple was in the 1980s and 1990s. I feel that Microsoft, Apple, and Google lost the plot in the 2010s due to two factors: (1) the rise of mobile and Web computing, and (2) the realization that software platforms are excellent vehicles for milking users for cash by pushing ads and services on a captive audience. To elaborate on the first point, UI elements from mobile and Web computing have been applied to desktops even when they are not effective, probably to save development costs, and probably since mobile and Web UI elements are seen as "modern" compared to an "old-fashioned" desktop. The result is a degraded desktop experience in 2025 compared to 2009, when Windows 7 and Snow Leopard were released. It's hamburger menus, title bars becoming toolbars (making it harder to identify areas to drag windows), hidden scroll bars, and memory-hungry Electron apps galore, plus pushy notifications, nag screens, and ads for services.

I don’t foresee any innovation from Microsoft, Apple, or Google in desktop computing that doesn’t have strings attached for monetization purposes.

The open-source world is better positioned to make productive desktops, but without coordinated efforts, it seems like herding cats, and it seems that one must cobble together a system instead of having a system that works as coherently as the Mac or Windows.

With that said, I won’t be too negative. KDE and GNOME are consistent when sticking to Qt/GTK applications, respectively, and there are good desktop Linux distributions out there.

gtowey 5 hours ago | parent | next [-]

It's because companies are no longer run by engineers. The MBAs and accountants are in charge and they couldn't care less about making good products.

At Microsoft, Satya Nadella has an engineering background, but it seems like he didn't spend much time as an engineer before getting an MBA and playing the management advancement game.

Our industry isn't what it used to be, and I'm not sure it ever can be again.

linguae 4 hours ago | parent | next [-]

I feel a major shift happened in the 2010s. The tech industry became less about making the world a better place through technology, and more about how best to leverage power to make as much money as possible, making the world a better place be damned.

This also came at a time when tech went from being considered a nerdy obsession to tech being a prestigious career choice much like how law and medicine are viewed.

Tech went from being a sideshow to the main show. The problem is that once tech became the main show, it attracted the money- and career-driven rather than those passionate about technology. It's bad enough working with mercenary coworkers, but when mercenaries become managers and executives, they are now the boss, and if the passionate don't meet their bosses' expectations, they are fired.

I left the industry and I am now a tenure-track community college professor, though I do research during my winter and summer breaks. I think there are still niches where a deep love for computing without being overly concerned about “stock line go up” metrics can still lead to good products and sustainable, if small, businesses.

jack_tripper 4 hours ago | parent [-]

>The tech industry became less about making the world a better place through technology

When the hell was even that?

vjvjvjvjghv 3 hours ago | parent | next [-]

In the 80s and 90s there was much more idealism than now. There was also more low-hanging fruit for developing software that makes people's lives better. There was also less investor money floating around, so it was more important to appeal to end users. To me it seems tech has devolved into a big money-making scheme with only the minimum necessary actual technology and innovation.

lo_zamoyski 2 hours ago | parent [-]

I would agree that it was different, but I also think this may be history viewed through rose-tinted glasses somewhat.

> There was also more low-hanging fruit for developing software that makes people's lives better.

In principle, maybe. In practice, you had to pay for everything. Open source or free software was not widely available. So, the profit motive was there. The conditions didn’t exist yet for the profit model we have today to really take off, or for the appreciation of it to exist. Still, if there’s a lot of low-hanging fruit, that means the maturity of software was generally lower, so it’s a bit like pining for the days when people lived on the farm.

> There was also less investor money floating around, so it was more important to appeal to end users.

I’m not so sure this appeal was so important (and investors do care about appeal!). If you had market dominance like Microsoft did, you could rest on your laurels quite a bit (and that they did). The software ecosystem you needed to use also determined your choices for you.

> To me it seems tech has devolved into a big money-making scheme with only the minimum necessary actual technology and innovation.

As I said earlier, the profit motive was always there. It was just expressed differently. But I will grant you that the image is different. In a way, the mask has been dropped. When Facebook was new, no one thought of it as a vulgar engine for monetizing people either (I even recall offending a Facebook employee years ago when I mentioned this, which should frankly have been obvious), but it was just that. It was all just that, because the basic blueprint of the revenue model was there from day one.

corysama 33 minutes ago | parent | prev | next [-]

A trope in the first season of HBO's Silicon Valley is literally every company other than the main characters' professing their mission statement to be "Making the world a better place through (technobabble)".

The subtle running joke was that while the main characters' technobabble was fake, every other background SV startup was "Making the world a better place through Paxos-based distributed consensus" and other real-world serious tech.

mc32 3 hours ago | parent | prev [-]

Things like hypertext, search, email, and early social networks (chat networks connecting disparate people), and also the paperless office (finally). Images and video corrupted everything, as they became the thing that addicted eyeballs.

lo_zamoyski 2 hours ago | parent [-]

> chat networks

I think you may be looking at history through rose-tinted glasses. Sure, social media today is not the same, so the comparison isn’t quite sensible, but IRC was an unpleasant place full of petty egos and nasty people.

vjvjvjvjghv 3 hours ago | parent | prev | next [-]

I have heard a big factor is that a lot of the newer devs don't really use a desktop OS outside of work. So for them, developing a desktop OS is more of an abstract project, like me developing software for medical devices which I never use myself.

Normal_gaussian 5 hours ago | parent | prev [-]

It's great to hear from someone who thinks these people still care! It has rarely been my experience, but I haven't been everywhere yet.

XorNot 42 minutes ago | parent | prev [-]

GTK's dedication to killing the standard top-bar menu layout is intensely irritating.

We now have giant title bars to accommodate the hamburger menu button, which opens a list of...standard menu bar sub menu options.

You could fit all the same information into the same real estate, using the original and tested paradigm.

scottjenson a day ago | parent | prev | next [-]

I've given dozens of talks, but this one seems to have struck a chord, as it's my most popular video in quite a while. It's got over 14k views in less than a day.

I'm excited so many people are interested in desktop UX!

ChuckMcM 3 hours ago | parent | next [-]

I think you did a great job of bringing fairly nuanced problems into perspective for a lot of people who take their interactions with their phone/computer/tablet for granted. That is a great skill!

I think a fertile area for investigation would also be 'task-specific' interactions. In XDE[1], the thing that got Steve Jobs all excited, the interaction models are different if you're writing code, debugging code, or running an application. There are key things that always work the same way (cut/paste, for example) but other things that change based on context.

And echoing some of the sentiment I've read here as well, consistency is a bigger win for the end user than form. By that I mean even a crappy UX is okay if it is consistent in how it's crappy. I heard a great talk about Nintendo's design of the 'Mario world' games and how the secret sauce was that Mario physics are consistent, so as a game player, if you knew how to use the game mechanics to do one thing, you could guess how to use them to do another thing you'd not yet done. Similarly with UX: if the mechanics are consistent, then they give you a stepping-off point for doing a new thing you haven't done, using mechanics you are already familiar with.

[1] Xerox Development Environment -- This was the environment everyone at Xerox Business Systems used when working on the Xerox Star desktop publishing workstation.

NetOpWibby 4 hours ago | parent | prev | next [-]

Fantastic talk, I found myself nodding in agreement a lot. In my research on next-generation desktop interfaces, I was referred to Ink & Switch as well, and man, I sure wish they were hiring. I missed out on the Xerox and Bell Labs eras. I'm also reading the book "Inventing the Future" by John Buck, which details early Apple (there's no reason the Jonathan Computer wouldn't sell like hotcakes today, IMHO).

In my downtime I'm working on my future computing concept[1]. The direction I'm going for the UI is context awareness and the desktop being more of an endless canvas. I need to flesh out my ideas into code one of these days.

P.S. Just learned we're on the same Mastodon server, that's dope.

---

[1]: https://systemsoft.works

calmbonsai 5 hours ago | parent | prev | next [-]

I concur, though per my earlier post I do feel "desktop stagnation" is inevitable and we're already there. You were channeling Don Norman (https://jnd.org/) in the best of ways.

az09mugen 12 hours ago | parent | prev | next [-]

Thanks for that nice talk, it felt like a breath of fresh air with basic and simple, yet powerful, but alas "forgotten" concepts of UX.

Will look into your other talks.

averynicepen 2 hours ago | parent | prev | next [-]

This was a really fantastic talk and kept me riveted for 40 minutes. Where can I find more?

agumonkey 2 hours ago | parent | prev | next [-]

Where can we find advanced UX labs? I'm tired of the Figma trend.

pjmlp 4 hours ago | parent | prev [-]

It was quite interesting.

analogpixel 6 hours ago | parent | prev | next [-]

Why didn't Star Trek ever tackle the big issues, like them constantly updating the LCARS interface every few episodes to make it better, or having Geordi La Forge re-writing the warp core controllers in Rust?

thaumaturgy 6 hours ago | parent | next [-]

Because Trek didn't fetishize technology, something a lot of tech-obsessed Trek fans never seem to really come to terms with.

In the Trek universe, LCARS wasn't getting continuous UI updates because they would have advanced, culturally, to a point where they recognized that continuous UI updates are frustrating for users. They would have invested the time and research effort required to better understand the right kind of interface for the given devices, and then... just built that. And, sure, it probably would get updates from time to time, but nothing like the way we do things now.

Because the way we do things now is immature. It's driven often by individual developers' needs to leave their fingerprints on something, to be able to say, "this project is now MY project", to be able to use it as a portfolio item that helps them get a bigger paycheck in the future.

Likewise, Geordi was regularly shown to be making constant improvements to the ship's systems. If I remember right, some of his designs were picked up by Starfleet and integrated into other ships. He took risks, too, like experimental propulsion upgrades. But, each time, it was an upgrade in service of better meeting some present or future mission objective. Geordi might have rewritten some software modules in whatever counted as a "language" in that universe at some point, but if he had done so, he would have done extensive testing and tried very hard to do it in a way that wouldn't've disrupted ship operations, and he would only do so if it gained some kind of improvement that directly impacted the success or safety of the whole ship.

Really cool technology is a key component of the Trek universe, but Trek isn't about technology. It's about people. Technology is just a thing that's in the background, and, sometimes, becomes a part of the story -- when it impacts some people in the story.

PunchyHamster 35 minutes ago | parent | next [-]

That's fetishizing Star Trek a bit. They had touch interfaces for controlling the ship in the middle of combat, with explosions and everything shaking around, which is hardly optimal both in and out of combat (imagine hovering a hand across a touch panel for hours on end).

cons0le 3 hours ago | parent | prev | next [-]

>Because the way we do things now is immature. It's driven often by individual developers' needs to leave their fingerprints on something, to be able to say, "this project is now MY project", to be able to use it as a portfolio item that helps them get a bigger paycheck in the future.

AKA resume-driven development. I personally know several people working on LLM products who in private admit they think LLMs are scams.

jfengel 5 hours ago | parent | prev | next [-]

Most of Trek's tech is just a way to move the story along. Transporters were introduced to avoid having to land a shuttle. Warp drive is just a way to get to the next story. Communicators relay plot points.

Stories which focus on them as technology are nearly always boring. "Oh no the transporter broke... Yay we fixed it".

amelius 5 hours ago | parent | prev | next [-]

I still wonder why everybody wasn't lingering in the holodeck all the time.

(the equivalent of people being glued to their smartphones today)

(Related) This is one explanation for the Fermi paradox: Alien species may isolate themselves in virtual worlds

https://en.wikipedia.org/wiki/Fermi_paradox

d3Xt3r 4 hours ago | parent | next [-]

Most likely because this was a star ship (or space station) with a limited number of personnel, all of whom have fixed duties that need to be done. You simply can't afford to waste your time away in holodecks.

The people we saw on screen most of the time also held important positions on the ship (especially the bridge, or engineering) and you can't expect them to just waste significant chunks of time.

Also, don't forget that these people actually like their jobs. They got there because they sincerely wanted to, out of personal interest and drive, and not because of societal pressures like in our present world. They already figured out universal basic income and are living in an advanced self-sufficient society, so they don't even need a job to earn money or live a decent life - these people are doing their jobs because of their pure, raw passion for that field.

RedNifre 4 hours ago | parent | prev [-]

The lack of capitalism meant that the holodeck program authors had no need to optimize their programs for user retention to show more ads. So far fewer people suffer from holodeck addiction in Star Trek than are glued to their screens in our world.

XorNot 37 minutes ago | parent [-]

Although the funniest thing about the holodeck these days is that LLMs have answered a question: can you have realistic non-sentient avatars? Evidently yes, and holodeck authorship is likely a bunch of prompt engineering, with the really advanced stuff happening when someone trains a new model or something.

Similarly in Star Wars with droids: Obi-Wan is right, droids can't think and deserve no real moral consideration because they're just advanced language models in bodies (C-3PO insisting on proper protocol because he's a protocol droid is the engineering attempt to keep the LLM on track).

dragonwriter 5 hours ago | parent | prev | next [-]

> In the Trek universe, LCARS wasn't getting continuous UI updates

In the Trek universe, LCARS was continuously generating UI updates for each user, because AI coding had reached the point that it no longer needs specific direction, and it responds autonomously to needs the system itself identifies.

bena 4 hours ago | parent | prev | next [-]

LCARS was technically a self-adapting system that was personalized to a degree per user. So it was continuously updating itself. But in a way to reduce user frustration.

Now, this is really because LCARS is "Stage Direction: Riker hits some buttons and stuff happens".

Mistletoe 5 hours ago | parent | prev | next [-]

Isn't it probably just that they don't really have money in Star Trek so there is no contract promising amazing advances in the LCARS if we just pay this person or company to revamp it? If someone has money to be made from something they will always want to convince you the new thing is what you need.

krapp 5 hours ago | parent [-]

Remember that in Star Trek humans have evolved beyond the desire to work for money or personal gain, so everyone just volunteers their time, and somehow this just always works.

krapp 5 hours ago | parent | prev [-]

>In the Trek universe, LCARS wasn't getting continuous UI updates because they would have advanced, culturally, to a point where they recognized that continuous UI updates are frustrating for users.

Not to be "that guy", but LCARS wasn't getting continuous UI updates because that would have cost the production team money, and for TNG at least would have often required rebuilding physical sets. It does get updated between series, as part of setting the design language for each series.

And Geordi was shown constantly making improvements to the ship's systems because he had to be shown "doing engineer stuff."

RedNifre 4 hours ago | parent | prev | next [-]

Because the LCARS GUI is only for simple recurring tasks, so it's easy to find an optimal interface.

Complex tasks are done vibe coding style, like La Forge vibe video editing a recording to find an alien: https://www.youtube.com/watch?v=4Faiu360W7Q

I do wonder if conversational interfaces will put an end to our GUI churn eventually...

PunchyHamster 26 minutes ago | parent [-]

Conversational interfaces are slow and will still be slow even if AI latency drops to zero.

It might be a nice way to do complex, one-off tasks for personnel unfamiliar with all the features of the system, but for fast day-to-day stuff, a button per function will always be king.

rzerowan 4 hours ago | parent | prev | next [-]

Mostly I believe it's that the writers envisioned, and were able to worldbuild in such a way, that the tech was not a subject but rather part of the scenery/background, with the main object being the people and their relationships. Additionally, in some cases where alien tech interfaced with the characters, some UI/code rewrites were written into the story; for example in DS9, where the Cardassian interfaces/AI are frustrating to Chief O'Brien, and his efforts to remedy/upgrade them get a recurring role in the story.

Conversely, recent versions have taken the view of foregrounding tech, aided with flashy CGI, to handwave through a lot. Basically using it as a plot device when the writing is weak.

JuniperMesos 5 hours ago | parent | prev | next [-]

Man, I should hope that the warp core controllers on the USS Enterprise were not written in C.

On the other hand, if the writers of Star Trek The Next Generation were writing the show now, rather than 35-40 years ago - and therefore had a more expansive understanding of computer technology and were writing for an audience that could be relied upon to understand computers better than was actually the case - maybe there would've been more episodes involving dealing with the details of Future Sci-Fi Computer Systems in ways a programmer today might find recognizable.

Heck, maybe this is in fact the case for the recently-written episodes of Star Trek coming out in the past few years (that seem to be much less popular than TNG, probably because the entire media environment around broadcast television has changed drastically since TNG was made). Someone who writes for television today is more likely to have had the experience of taking a Python class in middle school than anyone writing for television decades ago (before Python existed), and maybe something of that experience might make it into an episode of television sci-fi.

As an additional point, my recollection is that the LCARS interface did in fact look slightly different over time - in early TNG seasons it was more orange-y, and in later seasons/Voyager/the TNG movies it generally had more of a purple tinge. Maybe we can attribute this in-universe to a Federation-wide UX redesign (imagine throwing in a scene where Barclay and La Forge are walking down a corridor having a friendly argument about whether the new redesign is better or worse immediately before a Red Alert that starts the main plot of the episode!). From a television production standpoint, we can attribute this to things like "the set designers were actually trying to suggest the passage of time and technology changing in the context of the show", or "the set designers wanted to have fun making a new thing" or "over the period of time that the 80s/90s incarnations of Star Trek were being made, television VFX technology itself was advancing rapidly and people wanted to try out new things that were not previously possible" - all of which have implications for real-world technology as well as fake television sci-fi technology.

bigstrat2003 an hour ago | parent [-]

> recently-written episodes of Star Trek coming out in the past few years (that seem to be much less popular than TNG, probably because the entire media environment around broadcast television has changed drastically since TNG was made)

That's probably part of it. But the larger part is that new Star Trek is very poorly written, so why is anyone going to bother watching it?

Findecanor 5 hours ago | parent | prev | next [-]

I have often thought that Star Trek is supposed to show a future in which computer technology and user interfaces have evolved to a steady state that doesn't need to change much, and which is superior to our own in ways that we don't yet understand. And because it hasn't been invented yet, the show doesn't invent it either.

It is for the audience to imagine that those printed transparencies back-lit with light bulbs behind coloured gel are the most intuitive, easy to use, precise user interfaces that the actors pretend that they are.

calmbonsai 5 hours ago | parent | prev | next [-]

Trek needs to visibly "sci-fi-up" extant tech in order to have the poetic narrative license to tell its present-day parables.

Things just need to "look futuristic". They don't actually need to have practical function outside whatever narrative constraints are imposed in order to provide pace and tension to the story.

I forget who said it first, but "Warp is really the speed of plot".

PunchyHamster 24 minutes ago | parent [-]

Case in point: nobody sensible would put realtime ship controls on a touchscreen if the intended use was combat or complex human-driven manoeuvres.

AndrewKemendo 6 hours ago | parent | prev [-]

Because it’s a fantasy space opera show that has nothing to do with reality

Findecanor 2 hours ago | parent | prev | next [-]

I would say that it is the term "UX" that is the confusing part of "UX/UI".

By Don Norman's original definition [0], it is not merely another term for "UI"; it applies specifically when you have a wider scope and are not working with a user interface specifically.

So the term "UX/UI" would refer to being able both to work with the wider scope and to go deeper to work on user interface design.

0: https://www.youtube.com/watch?v=9BdtGjoIN4E&t=10s

mattkevan 5 hours ago | parent | prev | next [-]

Really interesting. Going to have to watch in detail.

I'm in the process of designing an OS interface that tries to move beyond the current desktop metaphor and the mobile grid of apps.

Instead it’s going to use ‘frames’ of content that are acted on by capabilities that provide functionality. Very much inspired by Newton OS, HyperCard and the early, pre-Web thinking around hypermedia.

A Newton-like content soup combined with a persistent LLM intelligence layer, RAG and knowledge graphs could provide a powerful way to create, connect and manage content that breaks out of the standard document model.

__d 22 minutes ago | parent [-]

Is there anything you can share yet? It sounds interesting.

xnx 4 hours ago | parent | prev | next [-]

I felt rage baited when he crossed out Jakob Nielsen and promoted Ed Zitron (https://youtu.be/1fZTOjd_bOQt=1852). Bad AI is not good UI, but objecting based on AI being "not ethically trained" and "burning the planet" aren't great reasons.

GaryBluto 4 hours ago | parent [-]

https://www.youtube.com/watch?v=1fZTOjd_bOQ&t=1852s You're missing the ampersand.

It's really strange how he spins off on this mini-rant about AI ethics towards the end. I clicked on a video about UI design.

xnx 4 hours ago | parent [-]

Same. AI is absolutely the future of human-computer interaction (exactly the article from Jakob Nielsen that he crossed out). Even the father of WIMP, Douglas Engelbart, thought it was flawed: "Here's the language they're proposing: You point to something and grunt." AI finally gives us the chance to instruct computers as humans.

SoftTalker 3 hours ago | parent | prev | next [-]

The keyboard and screen UX was established in the 1970s. I've been using a keyboard and screen to work with computers since the 1980s. I am quite sure I'll be using a keyboard and screen until I retire. And probably 50 years from now, we'll still be using keyboards and screens. Some things just work.

Touch screens, voice commands, and other specialized interfaces have and will continue to make sense for some use cases. But for sitting down and working, same as it ever was.

joelkesler 8 hours ago | parent | prev | next [-]

Great talk about the future of desktop user-interfaces.

“…Scott Jenson gives examples of how focusing on UX -- instead of UI -- frees us to think bigger. This is especially true for the desktop, where the user experience has so much potential to grow well beyond its current interaction models. The desktop UX is certainly not dead, and this talk suggests some future directions we could take.”

“Scott Jenson has been a leader in UX design and strategic planning for over 35 years. He was the first member of Apple’s Human Interface group in the late '80s, and has since held key roles at several major tech companies. He served as Director of Product Design for Symbian in London, managed Mobile UX design at Google, and was Creative Director at frog design in San Francisco. He returned to Google to do UX research for Android and is now a UX strategist in the open-source community for Mastodon and Home Assistant.”

christophilus 3 hours ago | parent | prev | next [-]

No, we’re not. Niri + Dank Material Shell is a different and mostly excellent approach.

sprash 5 hours ago | parent | prev | next [-]

Unpopular take: Windows 95 was the peak of Desktop UX.

GUI elements were easily distinguishable from content and there was 100% consistency down to the last little detail (e.g. right-click always gave you a meaningful context menu). The innovations after that are tiny in comparison and more opinionated (things like macOS making the taskbar obsolete with the introduction of Exposé).

SoftTalker 3 hours ago | parent | next [-]

I would say Windows 2000 Pro, but that really wasn't too different from Windows 95. The OS was much better though, being based on NT.

fragmede 4 hours ago | parent | prev [-]

Heh, given the number of points you've probably gotten for that comment, I don't think it's that unpopular. Win 98 was my jam. It looks hella dated today, but as you said, buttons were clearly marked, menus were navigable via keyboard, there was some support for themes and custom coloring, and UIs were designable via a GUI builder in VB or Visual Studio using MFC, which was very resource-friendly compared to using Electron today. Smartphones and tablets, and even the wide variety of screen sizes, didn't exist yet, so it was a simpler time. I can't believe how much of a step back Electron is for UI creation compared to MFC, though MFC wasn't cross-platform, and elements were usually absolutely positioned instead of using the relative, resizable layout that's required today.

kvemkon 3 hours ago | parent [-]

> buttons were clearly marked

Recently some UI ignored my click on an entry in the list under a drop-down button. It turned out this drop-down button was additionally a normal button if you pressed it in the center. Awful.

> UI creation compared to MFC

Here I'd prefer Borland with (Pascal) Delphi / C++ Builder.

> relative resizable layout that's required today.

While it should be beneficial, the reality is awful. E.g. why is the URL input field on [1] so narrow? But if you shrink the browser window width, the text field eventually becomes wide! That's completely against expectations.

[1] https://web.archive.org/save

gherkinnn 3 hours ago | parent | prev | next [-]

What an excellent talk, thank you. Most refreshing of all, it is about UX where the X stands for eXperience, rather than eXploitation.

rolph 7 hours ago | parent | prev | next [-]

The problem is with pushing a UX at users and enforcing that model even when the user changes it to something comfortable. You should be looking at what users are throwing away, and what they are replacing it with.

MS is a prime example: don't do what MS has been doing. Remember whose hardware it actually is, and remain aware that what a developer and a boardroom understand as improvement is not experienced in the same way by average retail consumers.

calmbonsai 5 hours ago | parent | prev | next [-]

For desktops, basically, yes. And that's OK.

Take any other praxis that's reached the 'appliance' stage that you use in your daily life: washing machines, ovens, coffee makers, cars, smartphones, flip-phones, televisions, toilets, vacuums, microwaves, refrigerators, ranges, etc.

It takes ~30 years to optimize the UX to make it "appliance-worthy" and then everything afterwards consists of edge-case features, personalization, or regulatory compliance.

Desktop Computers are no exception.

mrob 4 hours ago | parent | next [-]

I can think of two big improvements to desktop GUIs:

1. Incremental narrowing for all selection tasks like the Helm [0] extension for Emacs.

Whenever there is a list of choices, all choices should be displayed, and this list should be filterable in real time by typing. This should go further than what Helm provides, e.g. you should be able to filter a partially filtered list in a different way. No matter how complex your filtering, all results should appear within 10 ms or so. This should include things like full-text search of all local documents on the machine. This will probably require extensive indexing, so it needs to be tightly integrated with all software so the indexes stay in sync with the data. (See the first sketch after this list.)

2. Pervasive support for mouse gestures.

This effectively increases the number of mouse buttons. Some tasks are fastest with the keyboard, and some are fastest with the mouse, but switching between the two costs time. Increasing the effective number of buttons increases the number of tasks that are fastest with the mouse and reduces the need for switching. (See the second sketch below.)

[0] https://emacs-helm.github.io/helm/
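
A minimal sketch of what the incremental-narrowing idea could look like, in plain Python over an in-memory list (hypothetical code; `narrow` and `NarrowingSession` are made-up names, not Helm's implementation, and a real desktop-wide version would need the indexing described above to stay near the 10 ms budget):

    # Incremental narrowing: each query re-filters the current result set,
    # and a partially filtered list can be filtered again in a new way.
    def narrow(candidates, query):
        """Keep candidates containing every space-separated term."""
        terms = query.lower().split()
        return [c for c in candidates if all(t in c.lower() for t in terms)]

    class NarrowingSession:
        """Stack of successive filters, so each refinement can be undone."""
        def __init__(self, candidates):
            self.stack = [list(candidates)]

        def refine(self, query):
            self.stack.append(narrow(self.stack[-1], query))
            return self.stack[-1]

        def undo(self):
            if len(self.stack) > 1:
                self.stack.pop()
            return self.stack[-1]

    session = NarrowingSession(["Open File...", "Open Recent", "Save File",
                                "Save As...", "Close Window"])
    print(session.refine("open"))  # ['Open File...', 'Open Recent']
    print(session.refine("file"))  # ['Open File...']
    print(session.undo())          # back to the 'open' results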
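
And a toy sketch of the mouse-gesture idea (again hypothetical; a real implementation would read raw pointer events from the windowing system): reduce the pointer trail to a string of cardinal directions, and treat each distinct string as an extra "button":

    # Toy gesture recognizer: pointer trail -> direction string -> action.
    def directions(points, min_dist=20):
        """Turn (x, y) samples into a string of moves: L, R, U, D."""
        moves = []
        px, py = points[0]
        for x, y in points[1:]:
            dx, dy = x - px, y - py
            if abs(dx) < min_dist and abs(dy) < min_dist:
                continue  # ignore jitter below the threshold
            if abs(dx) >= abs(dy):
                move = 'R' if dx > 0 else 'L'
            else:
                move = 'D' if dy > 0 else 'U'
            if not moves or moves[-1] != move:
                moves.append(move)  # collapse repeated directions
            px, py = x, y
        return ''.join(moves)

    GESTURES = {
        'L': 'go back',
        'R': 'go forward',
        'DR': 'close window',  # down then right, like drawing an L
    }

    trail = [(100, 100), (100, 160), (160, 160)]  # down, then right
    print(GESTURES.get(directions(trail), 'no gesture'))  # close window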

Hammershaft 5 hours ago | parent | prev | next [-]

All of the other examples you gave are products constrained by physical reality, with a small set of countable use-cases. I don't think computer operating systems are simply mature, appliance-like products that have been optimized down to their current design. I think there is a lot of potential that hasn't been realized, because the very few players in the operating system space have been hill-climbing towards a local maximum set by path dependence 40 years ago.

calmbonsai 5 hours ago | parent [-]

To be precise, we're talking about "Desktop Computers" and not the more generic "information appliances".

For example, we're not remotely close to having a standardized "watch form-factor" appliance interface.

Physical reality is always a constraint. In this case, keyboard+display+speaker+mouse+arms-length-proximity+stationary. If you add/remove/alter _any_ of those 6 constraints, then there's plenty of room for innovation, but those constraints _define_ a desktop computer.

pegasus 4 hours ago | parent [-]

That's just the thing: desktop computers have always been, in an important way, the antithesis of a specialized appliance, a materialization of Turing's dream of the Universal Machine. It's only in recent years that this universality has come under threat, in the name of safety.

danans 3 hours ago | parent | prev [-]

> Take any other praxis that's reached the 'appliance' stage that you use in your daily life: washing machines, ovens, coffee makers, cars ...

I wish the same could be said of car UX these days but clearly that has regressed away from optimal.

bgbntty2 2 hours ago | parent | prev | next [-]

This is a (very) rambling comment since I added things to it as I watched the video.

I think the state of the current Desktop UX is great. Maybe it's a local maximum we've reached, but I love it. I mostly use XFCE and there are just a few small things I'd like changed or fixed. Nothing that I even notice frequently.

I've used tiling window managers before and they were fine, but it was a bit of a hassle to get used to them. And I didn't feel they gave me something I couldn't do with a stacking window manager. I can arrange windows to the sides or corners of the monitor easily with the mouse or the keyboard. On XFCE, holding down alt before moving a window lets me select any part of the window, not just the title bar, so it's just "hold down ALT, point somewhere inside the window and flick the window into a corner or a side with the mouse". If I really needed to view 10 windows at the same time, I'd consider a tiling window manager, but virtual desktops on XFCE are enough for me. I have a desktop for my mails, shopping, several for various browsers, several for work, for media, and so on. And I instantly go to the ones I want either with Meta+<number> (for example, Meta+3 for emails), or by scrolling with my middle mouse button on the far right of my taskbar, where I see a visual representation of my virtual desktops - just white outlines of the windows relative to the monitors.

Another thing I've noticed about desktop UX is that application UX seems to follow the trends of website UX, where the UX is so dumbed down even a drunken caveman who's never seen a computer can use it. Tools and options are hidden behind menus. Even the menus are hidden behind a hamburger icon. There's a lot of unnecessary white space everywhere. Sometimes there's even a linear progression through a set of steps, one step at a time, instead of having everything in view all the time, similar to how some registration forms work, where you first enter your e-mail, then you click next to enter a password, then click next again, and so on. I always use "compact view" or "details view" where possible and hide thumbnails unless I need them. I wish more sites and apps were like HN in design. If you're looking to convert (into money or into long-term users) as many people as possible, then it might make sense to target the technological toddlers, but then you might lose, or at least annoy, your power users.

At the beginning of the video I thought we'd likely only see foundational changes when we stop interacting with the computer mainly via monitors, keyboards and mice. Maybe when we start plugging USB ports into our heads directly, or something like that. Just like I don't expect any foundational changes or improvements to static books, whether paper or PDF. Sure, interactive tutorials are fundamentally different in UX, but they're also a fundamentally different medium. But at 28:00, his example of a combination of window manager + file manager + clipboard made me rethink my position. I used clipboard visualizers long ago, but the integration between apps and being able to drag and otherwise interact with it would be really interesting.

Some more thoughts I jotted down while watching the video:

~~~~ 01:33 This UX of dragging files between windows is new to me. I just grab a file and ALT+TAB to wherever I want to drop it if I can't see it. I think this behavior, raising windows only on mouse up, will annoy me. What if I have a split view of my file manager in one window, and another window above it? I want to drag a file from the left side of the split-view window to the right one, but the mouse-down won't be enough to show me the right side if the window that was above it covers it. Or what if, in the lower window, I want to drag the file into a folder that's also in the lower window, but obscured by the upper window? It may be a specific scenario, but

~~~~ 05:15 I'd forgotten the "What's a computer?" ad. It really grinds my gears when people don't understand that mobile "devices" are computers. I've had non-techies look surprised when I mention it, usually in a sentence like "Well, smartphones are really just computers, so, of course, it should be possible to do X with them.". It's such a basic category.

Similarly, I remember Apple not using the word "tablet" to describe their iPad years ago. Not sure if that has changed. Even many third-party online stores had a separate section for the iPad.

I guess it's good marketing to make people think your product is unique and different from others. That's why many people refer to their iPhone as "my iPhone" instead of "my phone" or "my smartphone". People usually don't say "my Samsung" or "my $brand" for other brands, unless they want to specify it for clarity. Great marketing to make people do this.

~~~~ 24:50 I'm a bit surprised that someone acknowledges that the UX for typing and editing on mobile is awful. But I think that no matter how many improvements happen, using a keyboard will always be much, much faster and more pleasant. It's interesting to me that even programmers, or other people who've used desktops professionally for years, don't know basic things like SHIFT+left_arrow or SHIFT+right_arrow to select, or CTRL+left_arrow or CTRL+right_arrow to move between words, or combining them to select words: CTRL+SHIFT+left_arrow or CTRL+SHIFT+right_arrow. Or that they can hold their mouse button after double-clicking on a word and move it around to select several words. Watching them try to select some text in a normal app (such as HN's comment field or a standard notepad app) using only arrow keys without modifiers, or tapping the backspace 30 times (not even holding it down), or trying to precisely select the word boundary with a mouse... it's like watching someone right-click and then select "Paste" instead of CTRL+V. I guess some users just don't learn. Maybe they don't care or are preoccupied with more important things, but it's weird to me. But, on the other hand, I never learned vi/vim or Emacs to the point where it would make me X times more productive. So maybe how those users look to me is how I look to someone well-versed in either of those tools.

~~~~ Forgot the timestamp, it was near the end, but the projects Ink & Switch make seem interesting. Looking at their site now.

fortyseven 7 hours ago | parent | prev | next [-]

You know, sometimes things just work. They get whittled away at until we end up with a very refined endpoint. Just look at cell phones. Black rectangles as far as the eye can see. For good reason. I'm not saying don't explore new avenues (foldables, etc.), but it's perfectly fine to settle into a metaphor that just works.

7thaccount 6 hours ago | parent [-]

The Windows 95-XP taskbar is good. Everything else has been downhill.

pdonis 6 hours ago | parent [-]

I use Trinity Desktop on Linux because it's basically the same as the Windows 95-XP taskbar interface, and has no plans to change.

DonHopkins 4 hours ago | parent | prev | next [-]

Golan Levin quotes Joy Mountford in his "TED Talk, 2009: Art that looks back at you":

>A lot of my work is about trying to get away from this. This is a photograph of the desktop of a student of mine. And when I say desktop, I don't just mean the actual desk where his mouse has worn away the surface of the desk. If you look carefully, you can even see a hint of the Apple menu, up here in the upper left, where the virtual world has literally punched through to the physical. So this is, as Joy Mountford once said, "The mouse is probably the narrowest straw you could try to suck all of human expression through." (Laughter)

https://flong.com/archive/texts/lectures/lecture_ted_09/inde...

https://en.wikipedia.org/wiki/Golan_Levin

https://www.flong.com/

https://en.wikipedia.org/wiki/Joy_Mountford

https://www.joymountford.com/

immibis 4 hours ago | parent | prev | next [-]

I don't want to see what any of today's companies would come up with to replace the desktop. Microsoft has tried a few times and they all sucked.

AndrewKemendo 6 hours ago | parent | prev | next [-]

The computer form factor hasn't changed since the mainframe: look into a screen for where to give input, select visual icons via a pointer, type text via keyboard into a text entry box, hit an action button, receive result, repeat

it’s just all gotten miniaturized

Humans have outright rejected all other possible computer form factors presented to them to date including:

Purely NLP with no screen

Head-worn augmented reality

Contact lenses

Head-worn virtual reality

Implanted touch sensors

etc…

Every other possible form factor gets shit on, on this website and in every other technology newspaper.

This is despite almost a century of attempts at doing all of those, and making zero progress in sustained consumer penetration.

Had people liked those form factors, they would have invested in them early on, such that those form factors would have developed the same way laptops and iPads and iPhones and desktops have evolved.

However, nobody was interested at any kind of scale even in the early days of AR, for example.

I have a litany of augmented and virtual reality devices scattered around my home and work that are incredibly compelling technology - but are totally seen as straight up dogshit from the consumer perspective.

Like everything, it's not a machine problem, it's a people-and-society problem

nkrisc 5 hours ago | parent | next [-]

> Purely NLP with no screen

Cumbersome and slow with horrible failure recovery. Great if it works, huge pain in the ass if it doesn't. Useless for any visual task.

> head worn augmented reality

Completely useless if what you're doing doesn't involve "augmenting reality" (editing a text document), which probably describes most tasks that the average person is using a computer for.

> contact lenses

Effectively impossible to use for some portion of the population.

> head worn virtual reality

Completely isolates you from your surroundings (most people don't like that) and difficult to use for people who wear glasses. Nevermind that currently they're heavy, expensive, and not particularly portable.

> implanted sensors

That's going to be a very hard sell for the vast majority of people. Also pretty useless for what most people want to do with computers.

The reason these different form factors haven't caught on is because they're pretty shit right now and not even useful to most people.

The standard desktop environment isn't perfect, but it's good and versatile enough for what most people need to do with a computer.

AndrewKemendo 5 hours ago | parent [-]

And most computers were entirely shit in the 1950s

yet here we are today

You must've missed the point: people invested in desktop computers when they were shitty vacuum tubes that blew up.

That still hasn’t happened for any other user experience or interface.

> it's good and versatile enough for what most people need to do with a computer

Exactly correct! Like I said, it's a limitation of human society: the capabilities and expectations of regular people are so low and diffuse that there is not enough collective intelligence to manage a complex interface that would measurably improve your abilities.

Said another way, it's the same as if a baby could never "graduate" from Duplo blocks to Lego because Lego blocks are too complicated.

mcswell 5 hours ago | parent | prev | next [-]

Since mainframes, you say. Well, sonny, when I first learned programming on a mainframe, we had punch cards and fan-fold printouts. Nothing beats that, eh?

immibis 4 hours ago | parent | prev | next [-]

Phone UIs are still screen UIs, but they are not desktop UIs, and that's not because of the shape of the device.

AndrewKemendo 4 hours ago | parent [-]

Tell me how that’s not a phone and a desktop:

https://www.instagram.com/reel/DPtvpkSExfA/

albumen 3 hours ago | parent [-]

That's not a phone and a desktop. I feel like I'm stating the obvious here; it's too big to be a phone, for any reasonable definition of 'phone'.

AnimalMuppet 5 hours ago | parent | prev [-]

I do not see laptop computers as the same form factor as mainframes. At. All.

Even more, I don't see phones as the same form factor as mainframes.

ares623 6 hours ago | parent | prev | next [-]

Are we stuck with the same toothbrush UX forever?

esafak 4 hours ago | parent | next [-]

There are electric-, ultrasonic-, mouthpiece-, and irrigating toothbrushes...

Maybe the experience has not changed for the average person, but alternatives are out there.

calmbonsai 5 hours ago | parent | prev | next [-]

I can imagine some sort of car-wash-like partial mouth insertion interface (think "smart cleaner/retainer"), but it would be cost-prohibitive and, likely, not offer any appreciable cleaning benefits.

LeFantome 6 hours ago | parent | prev | next [-]

I feel like toothbrush UX has improved quite a bit.

yearolinuxdsktp 5 hours ago | parent | next [-]

It’s changed, but is a wash:

On the positive side, my electronic toothbrush allows me to avoid excessive pressure via real-time green/red light.

On the negative side, it guilt trips me with a sad face emoji any time my brushing time is under 2 minutes.

AndrewKemendo 6 hours ago | parent | prev [-]

Toothbrush UX is the same today as it was when we were hunter-gatherers: use an abrasive tool to ablate plaque from the teeth and gums without removing enamel.

https://www.youtube.com/watch?v=zMuTG6fOMCg

The variety of form factors offered are the only difference

mrob 4 hours ago | parent | next [-]

As somebody who's tried using a miswak [0] teeth-cleaning twig out of curiosity, I can say with confidence it's not the same experience as using a modern toothbrush. It's capable of cleaning your teeth effectively, but it's slower and more difficult than a modern toothbrush. The angle of the bristles makes a huge difference. When the bristles face forward, like with a teeth-cleaning twig, your lips get in the way a lot more. Sideways bristles are easier to use.

[0] https://en.wikipedia.org/wiki/Miswak

jrowen 5 hours ago | parent | prev [-]

Yes, whittling down a stick is pretty much the same experience as using an electric toothbrush. Or those weird mouthguard things they have now.

I don't think most people would find this degree of reduction helpful.

AndrewKemendo 5 hours ago | parent [-]

> Yes, whittling down a stick is pretty much the same experience as using an electric toothbrush

Correct? I agree with this precisely but assume you’re writing it sarcastically

From the point of view of the starting state of the mouth to the end state of the mouth the USER EXPERIENCE is the same: clean teeth

The FORM FACTOR is different: Electric version means ONLY that I don’t move my arm

“Most people” can’t do multiplication in their head so I’m not looking to them to understand

echoangle 5 hours ago | parent [-]

That’s just not what user experience means, two products having the same start and end state doesn’t mean the user experience is the same. Imagine two tools, one a CLI and one a GUI, which both let you do the same thing. Would you say that they by definition have the same user experience?

AndrewKemendo 5 hours ago | parent [-]

If you drew both brushing processes as a UML diagram the variance would be trivial

Now compare that variance to the variance options given with machine and computing UX options

you’ll see clearly that one (toothbrushing) is less than one stdev different in steps and components for the median use case and one (computing) is nearly infinite variance (no stable stdev) between median use case steps and components.

The fact that the latter state space manifold is available but the action space is constrained inside a local minima is an indictment on the capacity for action space traversal by humans.

This is reflected again with what is a point action space (physically ablate plaque with abrasive) in the possible state space of teeth cleaning for example: chemical only/non ablative, replace teeth entirely every month, remove teeth and eat paste, etc…

So yes I collapsed that complexity into calling it “UX” which classically can be described via UML

jrowen 2 hours ago | parent [-]

I would almost define "experience" as that which can't be described by UML.

Ask any person to go and find a stick and use it to brush their teeth, and then ask if that "experience" was the same as using their toothbrush. Invoking UML is absurd.

ErroneousBosh 4 hours ago | parent | prev [-]

I was going to say "are we stuck with the same bicycle UX forever".

Because we've been stuck with the same bicycle UX for like 150 years now.

Sometimes shit just works right, just about straight out of the gate.

DangitBobby an hour ago | parent | next [-]

There have been absolute fucking gobs of UX changes to bikes in just the last 5 years. They just usually end up on mid range or higher end bikes. Obviously they don't fundamentally change the way a bike works, otherwise it wouldn't be a bike anymore.

esafak 4 hours ago | parent | prev [-]

This is what bicycles originally looked like: https://en.wikipedia.org/wiki/Velocipede#/media/File:Velocip...

ErroneousBosh 3 hours ago | parent [-]

Yes, something like 200 years ago.

By the 1870s we'd pretty much standardised on the "Safety Bicycle", which had a couple of smallish wheels, about two and a half feet in diameter in olden-days measurements, with a chain drive from a set of pedals mounted low in the frame to the rear wheel.

By the end of the 1880s, you had companies mass-producing bikes that wouldn't look unreasonable today. All we've done since is make them out of lighter metal, improve the brakes from pull rods to cables to hydraulic disc brakes, and give them more gears (it wouldn't be until the early 1900s that the first hub gears became available, with - perhaps surprisingly - derailleurs only coming along 100 years ago).

https://en.wikipedia.org/wiki/Safety_bicycle

eek2121 5 hours ago | parent | prev | next [-]

For the same reason we don't reinvent the wheel. Or perhaps the same reason we don't constantly change things like vehicles. It works well, and introducing something new means a learning curve that 99% of folks won't want to deal with, so at that point you are designing something new for the other 1% of folks willing to tackle it. Unless it's an amazing concept, it won't take off.

ErroneousBosh 4 hours ago | parent [-]

> Or perhaps, the same reason we don't constantly change things like a vehicle.

Are we stuck with the same brake pedal UX forever?

johnea 3 hours ago | parent | prev | next [-]

A) I'm not going to watch the video because it's hosted by goggle, and I'm not interested in being goggled.

B) However, even without watching the video, it must be describing corporate product UI, because in the free software world, there is a huge variety of selections for desktop (and phone) UI choices.

C) The big question I continue to come back to in HN comments: why does any technically astute person continue to run these monopolistic, and therefore beige, boring, bland, corporate UIs?

You can have free software with free choice, or you can have whatever goggle tells you...

virtualbluesky an hour ago | parent [-]

Do you have suggestions for those less informed about projects that are pushing the envelope on desktop UX?

migueldeicaza 4 hours ago | parent | prev | next [-]

Scrubbed the talk, saw “M$” in a slide, flipped the bozo bit

whatever1 5 hours ago | parent | prev [-]

Desktop is dead. Gamers will move to consoles and Valve-like platforms. The rest of productivity is done in a single-window browser anyway. LLMs will accelerate this.

Coders are the only ones who should still be interested in desktop UX, but even in that segment, many just need a terminal window.

linguae 4 hours ago | parent | next [-]

Is it dead because people don’t want the desktop, or is it dead because Big Tech won’t invest in the desktop beyond what’s necessary for their business?

Whether intentional or not, it seems like the trend is increasingly locked-down devices running locked-down software, and I’m also disturbed by the prospect of Big Tech gobbling up hardware (see the RAM shortage, for example), making it unaffordable for regular people, and then renting this hardware back to us in the form of cloud services.

It’s disturbing and I wish we could stop this.

PunchyHamster 23 minutes ago | parent | next [-]

MS invests in actively making the desktop experience worse.

But outside of that, I doubt there will be many users actually doing stuff (as opposed to just ingesting content) who will abandon the desktop, and other UIs, like the Mac's, aren't getting worse.

xnx 4 hours ago | parent | prev [-]

Desktop is all about collaboration and interaction with other apps. The ideal of every contemporary SaaS is that you can never download your "files" so you stay locked in.

vjvjvjvjghv 3 hours ago | parent [-]

Exactly. Interoperability is not cool anymore. You need to lock users in

shmerl an hour ago | parent | prev | next [-]

No, thanks. I'm a gamer but I don't need a console like UX as the only option.

snovv_crash 4 hours ago | parent | prev | next [-]

For content consumption sure.

For content creation though, desktop still rules.

immibis 4 hours ago | parent [-]

Sounds like a dead market. Nobody needs to create content any more now that we have AI.

sprash 4 hours ago | parent | prev | next [-]

It's not dead. It's being murdered. Microsoft, Apple, Gnome and KDE are making the experience worse with each update. Productive work becomes a chore. And the last thing we need is more experiments. We need more performance, responsiveness, consistency and less latency. Everything got worse on all 4 points for every desktop environment despite hardware getting faster by several orders of magnitude.

This also means that I heavily disagree with one of the points of the presenter. We should not use the next gen hardware to develop for the future Desktop. This is the most nonsensical thing I heard all day. We need to focus on the basics.

vortext 3 hours ago | parent | next [-]

KDE? It has great performance, it's highly configurable, and it's been improving. Many people don't seem to like GNOME 3, but it has also been getting better, in my view. I agree Windows and macOS have been getting worse.

silisili 4 hours ago | parent | prev | next [-]

I agree with this. I remember when Gnome 3 came out, there were a lot of legitimate complaints that were handwaved away by the developers as "doesn't work well on a mobile interface", despite Gnome having approximately zero installs on anything mobile. AFAICT that probably hasn't changed, all these years later.

WD-42 2 hours ago | parent [-]

I don’t know. I just started distributing a gtk app and I’ve already gotten two issue reports from people using it on mobile experiencing usability problems. Not something I thought I’d have to worry about when I started but I guess they are out there.

sho_hn 2 hours ago | parent | prev | next [-]

FWIW, this just isn't true for KDE. We hit a rough patch with the KDE 4.x series - 17 years ago - that has been difficult to live down, but have done much in the way of making amends since, including learning from and avoiding the mistakes we made back then.

For example, we intentionally optimized Plasma 5 for low-powered devices (we used to have stacks of the Pinebook at dev sprints, essentially a Raspberry Pi-class board in a laptop shell), shedding more than half the memory and compute requirements in just that generational advance.

We also have a good half-decade of QA focus behind us, including community-elected goals like a consistency campaign, much like what you asked for.

I'm confident Plasma 5 and 6 have iteratively gotten better on all four points.

It's certainly not perfect yet, and we have many areas to still improve about the product, some of them greatly. But we're certainly not enshittifying, and the momentum remains very high. Nearly all modern, popular new distros default to KDE (e.g. Bazzite, CachyOS, Asahi, Valve SteamOS) and our donation totals from low-paying individual donors - a decent proxy for user satisfaction - have multiplied. I've been around the community for about 20 to 25 years and it's never been a more vibrant project than today.

Re the fantastic talk, thanks for the little KDE shout-out in the first two minutes!

kvemkon 3 hours ago | parent | prev [-]

> Gnome

I can't imagine what I'd be doing without MATE (GNOME 2 fork ported to GTK+ 3).

Recently I've stumbled upon:

> I suspect that distro maintainers may feel we've lost too many team members so are going with an older known quantity. [1]

This sounds disturbing.

[1] https://github.com/mate-desktop/caja/issues/1863#issuecommen...

hollerith 4 hours ago | parent | prev [-]

>productivity is done on a single window browser anyway

When I need to get productive, sometimes I disable the browser to stop myself from wasting time on the web.

whatever1 4 hours ago | parent [-]

And you likely open the browser that happens to be called VS Code, Figma, etc.

hollerith 4 hours ago | parent [-]

The point though is that my vscode window does not have an address bar I can use to visit Youtube or Pornhub at any time.

I guess the larger point is that you need a desktop to run vscode or Figma, so the desktop is not dead.