Closi 8 hours ago

I think you're right within the 'current paradigm' of software, where users work with a fixed set of functionality in the way the developer intended - but there is a new breed of software where the functionality set can't be defined exhaustively.

Take Claude Code - after I've described my requirement, it gives me a customised UI that asks me to make choices specific to what I've asked it to build (usually a series of dropdown lists with 3-4 options each). How would a static UI do that as seamlessly?

The example used in the article is a bit more specific but fair: if you want to calculate the financial implications of a house purchase in the 'old software paradigm', you probably have to start by learning Excel and building a spreadsheet (or using a dodgy online calculator someone else built, which doesn't match your use case). The spreadsheet the average user writes might be a little simplified - are we positive they included stamp duty and got the compounding interest right? Wouldn't it be great if Excel could just give you a perfectly personalised calculator, with toggle switches, without users needing to learn =P*(1+k/m)^(m*n), but while still clearly showing how everything is calculated? Maybe Excel doesn't need to be a scary tool - it can be something everyone can use to make better decisions, regardless of skill level.
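For reference, the compound-interest formula the comment alludes to, A = P(1 + k/m)^(mn), takes only a few lines to express directly; the function name and parameter names below are illustrative, not from the thread:

```python
def compound_amount(principal: float, annual_rate: float,
                    compounds_per_year: int, years: float) -> float:
    """Future value with interest compounded m times per year:
    A = P * (1 + k/m) ** (m * n)."""
    return principal * (1 + annual_rate / compounds_per_year) ** (
        compounds_per_year * years
    )

# e.g. 100,000 at 5% annual rate, compounded monthly over 25 years
print(round(compound_amount(100_000, 0.05, 12, 25), 2))
```

The point stands either way: most users would rather toggle a switch than derive this, and a generated UI can hide the formula while still showing the working.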

So yes, if you think of software as only doing what it has done in the past, Gen UI does not make sense. If you think of software doing things it has never done before, we need to think of new interaction modes (because hopefully we can do something better than just a text chat interface?).

mx7zysuj4xew 5 hours ago | parent [-]

Cute, but your whole premise relies on knowing the right questions to ask, which you don't. We just had an entire decade of good interfaces being ruined by poorly conceived, anemic "user stories"; we don't need to further destroy our HCI for the next century or so.

Closi 2 hours ago | parent | next [-]

I would argue the opposite - the premise actually accepts that software developers and product owners can't always know how their software will be used by end users.

Besides, HCI will inevitably change because after 30 years of incremental user interface refinement, your average person still struggles to use Excel or Photoshop, but those same users are asking ChatGPT to help them write formulas or asking Gemini to help edit their photos.

I don't accept the premise that the interfaces were ever actually that good - for simple apps users can get around them fine, but for anything moderately complex users have mostly struggled or needed training, IMO. Blender, for example, is an amazing piece of software - but ask any user who has never used it before to draw and render a bird without referring to the documentation (they won't be able to). If we want users to be able to use software like Blender without needing to formally train them, then we need a totally different approach (which would be great, as I suspect artistic ability and the technical ability to use Blender are not that strongly correlated).

skeledrew 2 hours ago | parent | prev [-]

The right questions aren't always known up front. Part of the reasoning for using AI in the first place is to refine a fuzzy idea, so a tool like this can help one go from fuzzy to concrete, with good guardrails in place to ensure the concrete is truly solid. In this case, the point is that the components of good user interfaces are already available; they are composed based on user prompts to the user's exact specifications and then frozen for normal usage. Unfreeze and prompt again later to tweak further, etc.