FfejL 2 days ago

> It's not that Dell doesn't care about AI or AI PCs anymore, it's just that over the past year or so it's come to realise that the consumer doesn't.

I wish every consumer product leader would figure this out.

ericmcer 2 days ago | parent | next [-]

People will want what LLMs can do; they just don't want "AI". I think having it pervade products in a much more subtle way is the future, though.

For example, if you close a YouTube browser tab with a comment half-written, it will pop up an `alert("You will lose your comment if you close this window")`. It does this whether the comment is a 2-page essay or "asdfasdf". Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort. The end result is I only have to deal with that annoying popup when I really am glad it is there.
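
As a rough sketch of what I mean (everything here is hypothetical: `localModelSaysImportant` stands in for whatever on-device model call an OS or browser SDK might someday expose, and the element id is made up):

  // Hypothetical: only trigger the leave prompt when a draft seems worth keeping.
  // A real implementation would debounce this and pre-filter with cheap checks.
  async function localModelSaysImportant(text: string): Promise<boolean> {
    // Placeholder for a local LLM call; no such browser API exists today.
    return text.trim().length > 40;
  }

  const box = document.querySelector<HTMLTextAreaElement>("#comment")!;
  let worthWarning = false;

  box.addEventListener("input", async () => {
    worthWarning = await localModelSaysImportant(box.value);
  });

  window.addEventListener("beforeunload", (e) => {
    if (worthWarning) e.preventDefault(); // asks the browser to show its leave prompt
  });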

That is a trivial example, but you can imagine how a locally run LLM that was just part of the SDK/API that developers could leverage would lead to better UI/UX. For now everyone is making the LLM the product, but once we start building products with an LLM as a background tool it will be great.

It is actually a really weird time. My whole career we wanted to obfuscate the implementation and present a clean UI to end users; we want them peeking behind the curtain as little as possible. Now everything is like "This is built with AI! This uses AI!".

wrl a day ago | parent | next [-]

> Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort.

I read this post yesterday and this specific example kept coming back to me because something about it just didn't sit right. And I finally figured it out: Glancing at the alert box (or the browser-provided "do you want to navigate away from this page" modal) and considering the text that I had entered takes... less than 5 seconds.

Sure, 5 seconds here and there adds up over the course of a day, but I really feel like this example is grasping at straws.

FridgeSeal a day ago | parent | next [-]

It’s also trivially solvable with, idk, a length check, or any number of other things which don’t need 100B parameters to calculate.

zdragnar a day ago | parent [-]

This was a problem at my last job. Boss kept suggesting shoving AI into features, and I kept pointing out we could make the features better with less effort using simple heuristics in a few lines of code, and skip adding AI altogether.

So much of it nowadays is like the blockchain craze, trying to use it as a solution for every problem until it sticks.

andrekandre a day ago | parent [-]

  > Boss kept suggesting shoving AI into features, and I kept pointing out we could make the features better with less effort using simple heuristics in a few lines of code
depending on what it is, it would probably also cost less money (no paying for token usage), use less electricity, be more reliable (less probabilistic, more deterministic), and be easier to maintain (just fix the bug in the code vs prompt/input spelunking) as well.

there are definitely useful applications for end-user features, but a lot of this is ordered top-down from on high, and product managers need to appease them...

zdragnar a day ago | parent [-]

... And the people at the top are only asking for it because it sounds really good to investors and shareholders. "Powered by AI" sounds way fancier and harder to replace than "powered by simple string searches and other heuristics".

johnnyanmac a day ago | parent | prev | next [-]

A rarer-ish chance to use this XKCD: https://xkcd.com/1205/

I'd put this in "save 5 seconds daily" to be generous. Remember that this is time saved over 5 years.

9rx a day ago | parent | prev [-]

The problem isn't so much the five seconds, it is the muscle memory. You become accustomed to blindly hitting "Yes" every time you've accidentally typed something into the text box, and then that time when you actually put a lot of effort into something... Boom. It's gone. I have been bitten before. Something like the parent described would be a huge improvement.

Granted, it seems the even better UX is to save what the user inputs and let them recover if they lost something important. That would also help for other things, like crashes, which have also burned me in the past. But tradeoffs, as always.
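
A minimal sketch of that recovery approach, assuming a single comment box with a known id and that localStorage is acceptable for drafts:

  // Persist the draft as the user types; restore it on the next page load.
  // Survives accidental closes and crashes alike.
  const DRAFT_KEY = "comment-draft";
  const box = document.querySelector<HTMLTextAreaElement>("#comment")!;

  box.value = localStorage.getItem(DRAFT_KEY) ?? box.value;  // restore if present
  box.addEventListener("input", () => localStorage.setItem(DRAFT_KEY, box.value));
  // Clear on successful submit so stale drafts don't reappear later:
  box.form?.addEventListener("submit", () => localStorage.removeItem(DRAFT_KEY));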

fckgw a day ago | parent | next [-]

Which is fine! That's me making the explicit choice that yes, I want to close this box and yes, I want to lose this data. I don't need an AI evaluating how important it thinks my input is and second-guessing my judgement call.

I tell the computer what to do, not the other way around.

9rx a day ago | parent [-]

You do, however, need to be able to tell the computer that you want to opt in (or out, I suppose) of using AI to evaluate how important your work is. If you don't have that option, it is, in fact, the computer telling you what to do. And why would you want the computer to tell you what to do?

addaon a day ago | parent | prev | next [-]

> You become accustomed to blindly hitting "Yes" every time you've accidentally typed something into the text box, and then that time when you actually put a lot of effort into something... Boom. It's gone.

Wouldn't you just hit undo? Yeah, it's a bit obnoxious that Chrome, for example, uses cmd-shift-T to undo in this case instead of the application-wide undo stack, but I feel like the focus for improving software resilience to user error should continue to be on increasing the power of the undo stack (like it has been for more than 30 years so far), not trying to optimize what gets put in the undo stack in the first place.

poopooracoocoo a day ago | parent | next [-]

Now y'all are just analysing the UX of YouTube and Chrome.

The problem is that by agreeing to close the tab, you're agreeing to discard the comment. There's currently no way to bring it back. There's no way to undo.

AI can't fix that. There is Microsoft's "snapshot" thing but it's really just a waste of storage space.

johnnyanmac a day ago | parent [-]

I mean, it can. But so can a task runner that periodically saves writing to a clipboard history. The value is questionable, but throwing an LLM at it does feel like overkill in terms of overhead.

9rx a day ago | parent | prev [-]

> Wouldn't you just hit undo?

Because:

1. Undo is usually treated as an application-level concern, meaning that once the application has exited there is no undo function available, at least as it is normally thought of. The 'desktop environment' integration necessary for this isn't commonly found.

2. Even if the application is still running, it only helps if the browser has implemented it. You mention Chrome has it, which is good, but Chrome is pretty lousy about just about everything else, so... Pick your poison, I guess.

3. This was already mentioned as the better user experience anyway, albeit left open-ended for designers, so it is not exactly clear what you are trying to add. Did you randomly stop reading in the middle?

officeplant a day ago | parent | prev | next [-]

>You become accustomed to blindly hitting "Yes" every time you've accidentally typed something into the text box, and then that time when you actually put a lot of effort into something... Boom. It's gone.

I'm not sure we need even local AIs reading everything we do for what amounts to a skill issue.

9rx a day ago | parent [-]

You're quite right that those with skills have no need for computers, but for the rest of us there is no reason not to have a good user experience.

pavel_lishin a day ago | parent | prev | next [-]

I have the exact opposite muscle memory.

th0ma5 a day ago | parent | prev [-]

I think this is covered in the Bainbridge automation paper https://en.wikipedia.org/wiki/Ironies_of_Automation ... When the user doesn't have the practiced context you described, expecting them to suddenly have it and do the right thing in a surprise moment is untenable.

mossTechnician 2 days ago | parent | prev | next [-]

> if you close a YouTube browser tab with a comment half-written, it will pop up an `alert("You will lose your comment if you close this window")`. It does this whether the comment is a 2-page essay or "asdfasdf". Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort.

I don't think that's a great example, because you can evaluate the length of the content of a text box with a one-line "if" statement. You could even expand it to check for how long you've been writing, and cache the contents of the box with a couple more lines of code.
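
Roughly like this (the element id and both thresholds are invented, but this is the entire heuristic):

  // Warn only when the draft is long enough AND the user spent real time on it.
  const box = document.querySelector<HTMLTextAreaElement>("#comment")!;
  let firstKeystroke = 0;

  box.addEventListener("input", () => {
    if (!firstKeystroke) firstKeystroke = Date.now();
  });

  window.addEventListener("beforeunload", (e) => {
    const typedForMs = firstKeystroke ? Date.now() - firstKeystroke : 0;
    if (box.value.trim().length > 100 && typedForMs > 30_000) e.preventDefault();
  });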

An LLM, by contrast, requires a significant amount of disk space and processing power for this task, and it would be unpredictable and difficult to debug, even if we could define a threshold for "important"!

mort96 2 days ago | parent | next [-]

I think it's an excellent example, to be honest. Most of the time, when someone proposes a use case for a large language model beyond being a chat bot, it's either a bad idea, or a decent idea that you'd do much better with something much less fancy (like this, where you'd obviously prefer some length threshold) than with a large language model. It's wild how often I've heard people say "we should have an AI do X" when X is something that's very obviously either a terrible idea or best suited for traditional algorithms.

Sort of like how most of the time when people proposed a non-cryptocurrency use for "blockchain", they had either re-invented Git or re-invented the database. The similarity to how people treat "AI" is uncanny.

QuantumNomad_ 2 days ago | parent [-]

> It's wild how often I've heard people say "we should have an AI do X" when X is something that's very obviously either a terrible idea or best suited for traditional algorithms.

Likewise, when smartphones were new, everyone and their mother was certain that some random niche thing that made no sense as an app would be a perfect app, and that if they could just get someone to make the app, they'd be rich. (And of course, ideally, the haver of the misguided idea would get the lion's share of the riches, and the programmer would get a slice of pizza and perhaps a percentage or two of ownership if the idea haver was extra generous.)

fragmede 2 days ago | parent [-]

With Claude Code doing the implementing now, we'll have to see who gets which slice of pizza!

reactordev a day ago | parent [-]

The difference is that now the person with an idea doesn't need a programmer, or anyone else to share the pizza with. They are free to gorge on all 18” of it.

johnnyanmac a day ago | parent [-]

Well, until the other 10 people with that idea get a slice in. More likely, 2 people get 7 slices of the 8-slice pizza, and the other 8 fight over the last piece.

mandevil 2 days ago | parent | prev [-]

The difference between "AI" and "linear regression" is whether you are talking to a VC or an engineer.

wavemode 2 days ago | parent | prev | next [-]

> Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input.

That doesn't sound ideal at all. And in fact highlights what's wrong with AI product development nowadays.

AI as a tool is wildly popular. Almost everyone in the world uses ChatGPT or knows someone who does. Here's the thing about tools - you use them in a predictable way and they give you a predictable result. I ask a question, I get an answer. The thing doesn't randomly interject when I'm doing other things and I asked it nothing. I swing a hammer, it drives a nail. The hammer doesn't decide that the thing it's swinging at is vaguely thumb-shaped and self-destruct.

Too many product managers nowadays want AI to not just be a tool, they want it to be magic. But magic is distracting, and unpredictable, and frequently gets things wrong because it doesn't understand the human's intent. That's why people mostly find AI integrations confusing and aggravating, despite the popularity of AI-as-a-tool.

wredcoll 2 days ago | parent | next [-]

> The hammer doesn't decide that the thing it's swinging at is vaguely thumb-shaped and self-destruct.

Sawstop literally patented this and made millions and seems to have genuinely improved the world.

I personally am a big fan of tools that make it hard to mangle my body parts.

wavemode 2 days ago | parent [-]

sawstop is not AI

wredcoll 2 days ago | parent | next [-]

Sure, where's the line?

If you want to tell me that LLMs are inherently non-deterministic, then sure, but from the point of view of a user, a SawStop activating because the wood is wet is really not expected either.

ori_b a day ago | parent | next [-]

Yes, cutting wet wood on the sawstop sucks, but I put up with it. If clicking 'close' on the wrong tab amputated a finger, I'd also put up with it. However, I've closed plenty of tabs accidentally, and all my fingers are still attached.

wavemode 2 days ago | parent | prev | next [-]

Mm yeah, I see the point you're making.

(Though, of course, there certainly are people who dislike sawstop for that sort of reason, as well.)

GuinansEyebrows a day ago | parent | prev | next [-]

also, from the point of view of a user: in this example, while frustrating/possibly costly, a false positive is infinitely preferable to a false negative.

a day ago | parent | prev [-]
[deleted]
ChoGGi a day ago | parent | prev [-]

I mean, I wouldn't want sawstop to hallucinate my finger is a piece of wood.

ericmcer a day ago | parent | prev | next [-]

But... A lot of stuff you rely on now was probably once distracting and unpredictable. There are a ton of subtle UX behaviors a modern computer is doing that you don't notice, but if they all disappeared and you had to use Windows 95 for a week, you would miss them.

That is more what I am advocating for: subtle background UX improvements based on an LLM's ability to interpret a user's intent. We used to have limited ability to look at an application's state and try to determine a user's intent, but it is easier to do with an LLM. Yeah, like you point out, some users don't want you to try to predict their intent, but if you can do it accurately a high percentage of the time, it is "magic".

abanana a day ago | parent | next [-]

> subtle UX behaviors

I'd wager it's more likely to be the opposite.

Older UIs were built on solid research. They had a ton of subtle UX behaviors that users didn't notice were there, but helped in minor ways. Modern UIs have a tendency to throw out previous learning and to be fashion-first. I've seen this talked about on HN a fair bit lately.

Using an old-fashioned interface, with 3D buttons to make interactive elements clear, and with instant feedback, can be a nicer experience than having to work with the lack of clarity, and relative lagginess, of some of today's interfaces.

ori_b a day ago | parent [-]

> Older UIs were built on solid research. They had a ton of subtle UX behaviors that users didn't notice were there, but helped in minor ways. Modern UIs have a tendency to throw out previous learning and to be fashion-first.

Yes. For example, Chrome literally just broke middle-click paste in this box when I was responding. It sets the primary selection to copy, but fails to use it when pasting.

Middle click to open in new tab is also reliably flaky.

I really miss the UI consistency of the 90s and early 2000s.

mjfisher a day ago | parent | prev | next [-]

Serious question: what are those things from Windows 95/98 I might miss?

Rose-tinted glasses perhaps, but I remember it as a very straightforward and consistent UI that provided great feedback, was snappy, and did everything I needed. Up to and including little hints for power users, like the underlined shortcut letters marked with & in a control's label.

johnnyanmac a day ago | parent | next [-]

I miss my search bar actually being a dumb grep of my indexed files. It's still frustrating typing 3 characters, seeing the result pop up on the 2nd keystroke, but having it transform into something else by the time I process the result.

optimalquiet a day ago | parent [-]

Inevitably, Windows search fails to highlight what I’m looking for almost all of the time, and often doesn’t even find it at all. If I have an application installed, it picks the installer in the Downloads folder. If I don’t have an app installed, it searches Bing for it. Sometimes it even searches Bing when I do have the application installed!

Microsoft seems not to believe that users want to use search primarily as an application launcher, which is strange because Mac, Linux, and mobile have all converged on it.

eterm a day ago | parent | prev [-]

The only one I can think of, literally the only one, is grouped icons.

And even that's only because browsers ended up in a weird "windows but tabs but actually tabs are windows" state.

So yeah, I'd miss the UX of dragging tabs into their own separate windows.

But even that is something that still feels janky in most apps (Windows Terminal somehow makes this feel bad; even VS Code took a long time to make it feel okay), and I wouldn't really miss it that much if there were no tabs at all and every tab was forced into a separate window at all times with its own taskbar entry.

tliltocatl a day ago | parent [-]

It's not like grouped icons were technically infeasible on Win95. And honestly, whether they are more useful is quite debatable. And personally, I don't even have a taskbar anymore.

The real stuff not on Win95 that everyone would miss is scalable interfaces/high DPI (not necessarily as in HiDPI, just above 640x480). And this one does require A LOT of resources and is still wobbly.

eterm a day ago | parent [-]

I'm not sure what you mean by "Technically feasible", but it wasn't supported by explorer.

You could have multiple windows, and you could have MDI windows, but you couldn't have shared task bar icons that expand on hover to let you choose which one to go to.

If you mean that someone could write a replacement shell that did that, then maybe, but at that point it's no longer really windows 95.

ori_b a day ago | parent | prev | next [-]

I remember seeing one of those "kids use old technology" videos, where kids are confused by rotary phones and the like.

One of the episodes had them using Windows 98. As I recall, the reaction was more or less "this is pretty ok, actually". A few WTFs about dialup modems and such, but I don't recall complaints about the UI.

marcosdumay a day ago | parent | prev | next [-]

> But... A lot of stuff you rely on now was probably once distracting and unpredictable.

And nobody relied on them when they were distracting and unpredictable. People only rely on them now because they are not.

LLMs won't ever be predictable. They are designed not to be. A predictable AI is something different from an LLM.

nottorp a day ago | parent | prev [-]

> There are a ton of subtle UX behaviors a modern computer is doing that you don't notice, but if they all disappeared and you had to use Windows 95 for a week, you would miss them.

Like what? All those popups screaming that my PC is unprotected because I turned off Windows Firewall?

bluGill 2 days ago | parent | prev [-]

I want magic that works. Sometimes I want a tool to interrupt me! I know my route to work, so I'm not going to ask how I should get there today - but 1% of the time there is something wrong with my plan (accident, construction...) and I want the tool to say something. I know I need to turn right to get someplace, but sometimes, as a human, I'll say left instead, confusing both me and the driver when they don't turn right; an AI that realizes who made the mistake would help.

The hard part is the AI needs to be correct when it does something unexpected. I don't know if this is a solvable problem, but it is what I want.

yndoendo 2 days ago | parent [-]

Magic in real life never works 100% of the time. It is all an illusion where some observers understand the trick and others do not. Those that understand it have the potential to break the magic. Even the magician has the ability to botch the trick.

I want reproducibility, not magic.

bluGill 2 days ago | parent [-]

It is magic that I can touch a switch on the wall and lights come on. It is magic that I have a warm house even though the outside temperature is near freezing. We have plenty of other magic that works. I want more.

nottorp a day ago | parent | next [-]

If your light switch doesn't turn on the lights any more it's probably broken.

If your "AI" light switch doesn't turn on the lights, you have to rephrase the prompt.

Telemakhos a day ago | parent | prev [-]

Electricity, light, and heat aren't magic: they're science. Science is something well understood. Something that seems magical is something poorly understood. When I ask AI a question, I don't know whether it will tell me something truthful, mendacious in a verisimilitudinous way, or blatantly wrong, and I can only tell when it's blatantly wrong. That's magic, and I hate magic. I want more science in my life.

slg 2 days ago | parent | prev | next [-]

>For example, if you close a YouTube browser tab with a comment half-written, it will pop up an `alert("You will lose your comment if you close this window")`. It does this whether the comment is a 2-page essay or "asdfasdf". Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort. The end result is I only have to deal with that annoying popup when I really am glad it is there.

The funny thing is that this exact example could also be used by AI skeptics. It's forcing an LLM into a product with questionable utility, causing it to cost more to develop, be more resource intensive to run, and behave in a manner that isn't consistent or reliable. Meanwhile, if there was an incentive to tweak that alert based off likelihood of its usefulness, there could have always just been a check on the length of the text. Suggesting this should be done with an LLM as your specific example is evidence that LLMs are solutions looking for problems.

fragmede 2 days ago | parent [-]

I've been totally AI-pilled, because I don't see why that's of questionable utility. How is a regexp going to tell the difference between "asdffghjjk" and "So, she cheated on me"? A mere byte count isn't going to do it either.

If the computer can tell the difference and be less annoying, it seems useful to me?

slg 2 days ago | parent | next [-]

Who said anything about regexp? I was literally talking about something as simple as `if (text.length > 100)`. Also, the example provided was distinguishing "a 2-page essay or 'asdfasdf'", which clearly can be accomplished with length much more easily than with either an LLM or even regexp.

We should keep in mind that we're trying to optimize for user's time. "So, she cheated on me" takes less than a second to type. It would probably take the user longer to respond to whatever pop up warning you give than just retyping that text again. So what actual value do you think the LLM is contributing here that justifies the added complexity and overhead?

Plus that benefit needs to overcome the other undesired behavior that an LLM would introduce such as it will now present an unnecessary popup if people enter a little real data and intentionally navigate away from the page (and it should be noted, users will almost certainly be much more likely to intentionally navigate away than accidentally navigate away). LLMs also aren't deterministic. If 90% of the time you navigate away from the page with text entered, the LLM warns you, then 10% of the time it doesn't, those 10% times are going to be a lot more frustrating than if the length check just warned you every single time. And from a user satisfaction perspective, it seems like a mistake to swap frustration caused by user mistakes (accidentally navigating away) with frustration caused by your design decisions (inconsistent behavior). Even if all those numbers end up falling exactly the right way to slightly make the users less frustrated overall, you're still trading users who were previously frustrated at themselves for users being frustrated at you. That seems like a bad business decision.

Like I said, this all just seems like a solution in search of a problem.

FridgeSeal a day ago | parent | prev | next [-]

Because in _what world_ do I want the computer making value judgements on what I do?

If I want to close the tab of unsubmitted comment text, I will. I most certainly don’t need a model going “uhmmm akshually, I think you might want that later!”

ori_b a day ago | parent | prev | next [-]

Because the computer behaving differently in different circumstances is annoying, especially when there's no clear cue to the user what the hidden knobs that control the circumstances are.

ChoGGi a day ago | parent | prev | next [-]

What about counting words based on the user's current language, and prompting off that?

Close enough for the issue, to me, and it can't be more expensive than asking an LLM?
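
Modern browsers even ship a locale-aware word segmenter, so a rough sketch costs a few lines (the threshold is arbitrary):

  // Locale-aware word count via the built-in Intl.Segmenter; no model needed.
  function wordCount(text: string): number {
    const seg = new Intl.Segmenter(navigator.language, { granularity: "word" });
    let n = 0;
    for (const s of seg.segment(text)) if (s.isWordLike) n++;
    return n;
  }

  // Prompt only when there are more than a handful of real words:
  const shouldPrompt = (draft: string) => wordCount(draft) > 10;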

MichaelRo 2 days ago | parent | prev [-]

We went from the bullshit "internet of things" to the "LLM of things", or as Sheldon from The Big Bang Theory put it, "everything is better with Bluetooth".

Literally "T-shirt with Bluetooth" - that's what 99.98% of "AI" stickers today advertise.

ori_b a day ago | parent | prev | next [-]

> Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input

No, ideally I would be able to predict and understand how my UI behaves, and train muscle memory.

If closing a tab would mean losing valuable data, the ideal UI would allow me to undo it, not try to guess if I cared.

thisislife2 a day ago | parent [-]

Yeah. It's the Apple OS model (we know what's right for you, this is the right way) vs the many other customisable OSes where it conforms to you.

thwarted 2 days ago | parent | prev | next [-]

YouTube could use AI to not recommend videos I've already watched, which is apparently a really hard problem.

mrguyorama a day ago | parent | next [-]

The problem is the people like me who DO rewatch YouTube videos. There are a bunch of "comfort food" videos I turn to sometimes, like you would rewatch a movie you really enjoy.

But that's the real problem. You can't just average everyone and apply that result to anyone. The "average of everyone" fits exactly NO ONE.

The US Navy figured this out long ago, in a famous anecdote: they wanted to fit a cockpit to the "average" pilot, took a shitload of measurements of a lot of airmen, and it turned out nobody fit.

The actual solution was customization and accommodations.

Ekaros 2 days ago | parent | prev | next [-]

It just might be that a lot of users watch the same videos multiple times. They must have some data on this and see that recommending the same videos gets more views than recommending new ones.

thwarted a day ago | parent | next [-]

Is there a way to tell if people are seeking out the same video, or if they are watching it because it was suggested? Especially when 90% of the recommendations are repeats?

There isn't even an "I've watched this" or "don't suggest this video anymore" option. You can only say "I'm not interested", which I don't want to do because it seems like it will downrank the entire channel.

Even if that is the case, I rarely watch the same video, so the recommendation engine should be able to pick that up.

i386 a day ago | parent | prev [-]

I work for YouTube. You’re hired.

tryauuum a day ago | parent | prev | next [-]

Try disabling the collection of history about the videos you've watched, in YouTube settings. There are still some recommendations after that, but they are less cringe.

platevoltage a day ago | parent | prev [-]

My favorite is the new thing where they recommend a "members only" video, from a creator that covers current events, and the video is 2 years old.

ezst a day ago | parent | prev | next [-]

You know what that reminds me very much of? That email client thing that asks you "did you forget to add an attachment?". That's been there for 3 decades (if not longer), from before LLMs were a thing, so I'll pass on it and keep waiting for that truly amazing LLM-enabled capability that we couldn't dream of before. Any minute now.

everdrive 2 days ago | parent | prev | next [-]

Using such an expensive technology to prevent someone from making a stupid mistake on a meaningless endeavor seems like a complete waste of time. Users should just be allowed to fail.

plasticsoprano 2 days ago | parent | next [-]

Amen! This is part of the overall societal decline where no one is allowed to fail. You gotta feel the pain to get the growth.

anthonypasq 2 days ago | parent | prev | next [-]

If someone from 1960 saw the quadrillions of CPU cycles we are wasting on absolutely nothing every second, they would have an aneurysm.

robrain a day ago | parent [-]

As someone from 1969, but with an excellent circulatory system, I just roll my eyes and look forward to the sound of bubbles bursting whilst billionaires weep.

macintux a day ago | parent [-]

When bubbles burst, is it really the billionaires who are hit the hardest? I'm skeptical.

FridgeSeal a day ago | parent [-]

Tell you what, let’s make sure this time it is!

Convince them to sink their fortunes in, and then we just make sure it pops.

AuryGlenz 2 days ago | parent | prev [-]

Expensive now is super cheap 10 years from now though.

publicdebates a day ago | parent | prev | next [-]

> readily discard short or nonsensical input

When "asdfasdf" is actually a package name, and it's in reply to a request for an NPM package, and the question is formulated in a way that makes it hard for LLMs to make that connection, you will get a false positive.

I imagine this will happen more often than not.

ambicapter 2 days ago | parent | prev | next [-]

So, like, machine learning. Remember when people used to call it AI/ML? Definitely wasn't as much money being spent on it back then.

nottorp 2 days ago | parent | prev | next [-]

> The end result is I only have to deal with that annoying popup when I really am glad it is there.

Are you sure about that? It will trigger only for what the LLM declares important, not what you care about.

Is anyone delivering local LLMs that can actually be trained on your data? Or just pre made models for the lowest common denominator?

Wowfunhappy 2 days ago | parent | prev | next [-]

> For example, if you close a YouTube browser tab with a comment half-written, it will pop up an `alert("You will lose your comment if you close this window")`. It does this whether the comment is a 2-page essay or "asdfasdf". Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort.

I agree this would be a great use of LLMs! However, it would have to be really low latency, like on the order of milliseconds. I don't think the tech is there yet, although maybe it will be soon-ish.

nkrisc 2 days ago | parent | prev | next [-]

It’s because “AI” isn’t a feature. “AI” without context is meaningless.

Google isn’t running ads on TV for Google Docs touting that it uses conflict-free replicated data types, or whatever, because (almost entirely) no one cares. Most people care the same amount about “AI” too.

gt0 2 days ago | parent | prev | next [-]

Would that be ideal, though? It adds enormous complexity to solve a trivial problem, and would work, I'm sure, 99.999% of the time - but not 100% of the time.

Ideal, in my view, is that the browser asks you if you are sure, regardless of content.

I use LLMs, but that browser "are you sure" type of integration is adding a massive amount of work to do something that ultimately isn't useful in any real way.

2 days ago | parent | prev | next [-]
[deleted]
bluedino a day ago | parent | prev | next [-]

I want AI to do useful stuff. Like comb through eBay auctions or Cars.com. Find the exact thing I want. Look at things in photos, descriptions, etc.

I don't think an NPU has that capability.

thombles 2 days ago | parent | prev | next [-]

> you can imagine how a locally run LLM that was just part of the SDK/API that developers could leverage would lead to better UI/UX

It’s already there for Apple developers: https://developer.apple.com/documentation/foundationmodels

I saw some presentations about it last year. It’s extremely easy to use.

leonidasv a day ago | parent | prev | next [-]

You don't need an LLM for that; a simple Markov chain can solve it with a much smaller footprint.
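
A sketch of that idea: score text by the average log-probability of its character bigrams, learned from a sample of ordinary text. Keyboard mashing like "asdfasdf" scores far below real prose. The training corpus and threshold below are placeholders you'd tune on real data:

  // Train character-bigram counts from a sample of ordinary text.
  function trainBigrams(corpus: string): Map<string, number> {
    const counts = new Map<string, number>();
    const text = corpus.toLowerCase();
    for (let i = 0; i < text.length - 1; i++) {
      const bg = text.slice(i, i + 2);
      counts.set(bg, (counts.get(bg) ?? 0) + 1);
    }
    return counts;
  }

  // Average log-probability of the input's bigrams, with add-one smoothing.
  function avgLogProb(text: string, counts: Map<string, number>, total: number): number {
    const t = text.toLowerCase();
    let sum = 0, n = 0;
    for (let i = 0; i < t.length - 1; i++) {
      sum += Math.log(((counts.get(t.slice(i, i + 2)) ?? 0) + 1) / (total + 1));
      n++;
    }
    return n > 0 ? sum / n : -Infinity;
  }

  const counts = trainBigrams("replace with any representative sample of real comments");
  const total = [...counts.values()].reduce((a, b) => a + b, 0);
  const looksLikeGibberish = (s: string) => avgLogProb(s, counts, total) < -9; // tune the -9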

bitwize a day ago | parent | prev | next [-]

At my current work much of our software stack is based on GOFAI techniques. Except no one calls them AI anymore, they call it a "rules engine". Rules engines, like LLMs, used to be sold standalone and promoted as miracle workers in and of themselves. We called them "expert systems" then; this term has largely faded from use.

This AI summer is really kind of a replay of the last AI summer. In a recent story about expert systems seen here on Hacker News, there was even a description of Gary Kildall from The Computer Chronicles expressing skepticism about AI that parallels modern-day AI skepticism. LLMs and CNNs will, as you describe, settle into certain applications where they'll be profoundly useful, become embedded in other software as techniques rather than applications in and of themselves... and then we won't call them AI. Winter is coming.

wtetzner a day ago | parent [-]

Yeah, the problem with the term "AI" is that it's far too general to be useful.

I've seen people argue that the goalposts keep moving with respect to whether or not something is considered AI, but that's because you can argue that a lot of things computers do are artificial intelligence. Once something becomes commonplace and well understood, it's not useful to communicate about it as AI.

I don't think the term AI will "stick" to a given technology until AGI (or something close to it).

tliltocatl a day ago | parent | prev | next [-]

No. No-no-no-no-no. I want predictability. I don't want a black box with no tuning handles and no awareness of the context to randomly change the behavior of my environment.

FridgeSeal a day ago | parent [-]

I’ve seen some thoroughly unhinged suggestions floating around the web for a UI/UX that is wholly generated and continuously adjusted by an LLM and I struggle to imagine a more nightmarish computing experience.

expedition32 2 days ago | parent | prev | next [-]

Honestly, some of the recommendations for what to watch next that I get on Netflix are pretty good.

No idea if they are AI; Netflix doesn't tell and I don't ask.

AI is just a toxic brand at this point, IMO.

goalieca a day ago | parent [-]

This was really innovative and a big deal back in the day.

https://en.wikipedia.org/wiki/Netflix_Prize

It doesn’t fix the content problem these days though.

ryukoposting a day ago | parent | prev | next [-]

Bingo. Nobody uses ChatGPT because it's AI. They use it because it does their homework, or it helps them write emails, or whatever else. The story can't just be "AI PC." It has to be "hey look, it's ChatGPT but you don't have to pay a subscription fee."

zzo38computer a day ago | parent | prev | next [-]

Hopefully, you could make a browser extension to detect whether an HTML form has unsaved changes; it should not require AI or an LLM. (This will work better if the document doesn't include JavaScript, but it is possible to make it work with JavaScript too.)
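
A sketch of such an extension's content script, comparing plain text controls against their default values (checkboxes, selects, and script-managed state would need extra handling):

  // Flag the page as dirty when any text control differs from its default,
  // then hook the browser's native leave prompt.
  function formIsDirty(): boolean {
    for (const el of document.querySelectorAll("input, textarea")) {
      if (el instanceof HTMLInputElement && el.value !== el.defaultValue) return true;
      if (el instanceof HTMLTextAreaElement && el.value !== el.defaultValue) return true;
    }
    return false;
  }

  window.addEventListener("beforeunload", (e) => {
    if (formIsDirty()) e.preventDefault();
  });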

themafia a day ago | parent | prev [-]

I want a functioning search engine. Keep your goofy opinionated mostly wrong LLM out of my way, please.

notatoad 2 days ago | parent | prev | next [-]

I think they will eventually. It's always been a very incoherent sales pitch that your expensive PCs are packed full of expensive hardware that's supposed to do AI things, but your cheap PCs that have none of that are still capable of doing 100% of the AI tasks that customers actually care about: accessing ChatGPT.

voidfunc 2 days ago | parent [-]

Also, what kind of AI tasks is the average person doing? The people thinking about this stuff are detached from reality. For most people a computer is a gateway to talking to friends and family, sharing pictures, browsing social media, and looking up recipes and how-to guides. Maybe they do some tracking of things as well in something like Excel or Google Sheets.

Consumer AI has never really made any sense. It's going to end up in the same category of things as 3D TVs, smart appliances, etc.

ryandrake 2 days ago | parent | next [-]

I don't remember any other time in the tech industry's history when "what companies and CEOs want to push" was less connected to "what customers want." Nobody transformed their business around 3D TVs like current companies are transforming themselves to deliver "AI-everything".

walterbell 2 days ago | parent | next [-]

If memory shortages make existing products non-viable (e.g. 50% price increases on mini PCs, https://news.ycombinator.com/item?id=46514794), will consumers flock to new/AI products like OpenAI "pen" or reject those outright?

Tanoc 2 days ago | parent | prev | next [-]

I think it does make sense if you're at a certain level of user hardware. If you make local computing infeasible because of the computational or hardware cost, it becomes much easier to sell compute as a service. Since about 2014 almost every single change to paid software has been to make it a recurring fee rather than a single payment, and now they can do that with hardware as well. To the financially illiterate, paying $15-a-month subscriptions to two LLMs from a phone they have a $40 monthly payment on for two years seems like a better deal than paying $1,200 for a desktop computer with free software that they'll use a tenth as much as the phone.

This is why Nvidia is offering GeForce Now the same way, in one-hundred-hour increments: they get $20 a month that goes directly to them, with the chance of up to an additional $42 a month if the person buys additional extensions of equal amount (another one hundred hours). That ends up at $744 a year directly to Nvidia without any board partners getting a cut, while a mid-grade GPU with better performance and no network latency would cost that much and last the user five entire years. Most people won't realize that long before they reach the end of the useful lifetime of the service, they'll have paid three to four times as much as if they had just bought the hardware outright.

With more of the compute being pushed off of local hardware, they can cheap out on said hardware with smaller batteries, fewer ports and features, and weaker CPUs. This lessens the pressure they feel from consumers, who were taught by corporations in the 20th century that improvements will always come year over year. They can sell less complex hardware and make up for it with software.

For the hardware companies it's all rent-seeking from the top down. And the push to put "AI" into everything is a blitz offensive to make this impossible to escape. They just need to normalize non-local computing and have it succeed this time, unlike when they tried it with the "cloud" craze a few years ago. But the companies didn't learn the intended lesson last time, when users straight up said that they don't like others gatekeeping the devices they're holding right in their hands. Instead the companies learned they have to deny all other options so users are forced to acquiesce to the gatekeeping.

jimbokun a day ago | parent | prev [-]

The customers are CEOs dreaming of a human-free work force.

shermantanktop a day ago | parent [-]

Suggested amendment: the customers are CEOs dreaming of Wall Street seeing them as a CEO who will deliver a human-free work force. The press release is the product. The reality of payrolls is incidental to what they really want: stock price go up.

It's all optics, it's all grift, it's all gambling.

tjr 2 days ago | parent | prev | next [-]

Just off the top of my head, some "consumer" areas that I personally encounter...

I don't want AI involved in my laundry machines. The only possible exception I could see would be some sort of emergency-off system, but I don't think that even needs to be "AI". But I don't want AI determining when my laundry is adequately washed or dried; I know what I'm doing, and I neither need nor want help from AI.

I don't want AI involved in my cooking. Admittedly, I have asked ChatGPT for some cooking information (sometimes easier than finding it on slop-and-ad-ridden Google), but I don't want AI in the oven or in the refrigerator or in the stove.

I don't want AI controlling my thermostat. I don't want AI controlling my water heater. I don't want AI controlling my garage door. I don't want AI balancing my checkbook.

I am totally fine with involving computers and technology in these things, but I don't want it to be "AI". I have way less trust in nondeterministic neural network systems than I do in basic well-tested sensors, microcontrollers, and tiny low-level C programs.

the_snooze 2 days ago | parent [-]

A lot of consumer tech needs have been met for decades. The problem is that companies aren't able to extract rent from all that value.

PunchyHamster 2 days ago | parent | prev | next [-]

I do think it makes some sense in limited capacity.

Have some half-decent model integrated with the OS's built-in image-editing app, so the average user can do basic fixes to their vacation photos with some prompts.

Have some local model with access to files automatically tag your photos, maybe even ask some questions and add tags based on that, and then use it for search ("give me a photo of that person from last year's vacation").

Similarly with chat records.

But once you start throwing it in the cloud... people get anxious about their data getting lost, or might not exactly see the value in a subscription.

fragmede 2 days ago | parent | prev | next [-]

You and I live in different bubbles. ChatGPT is the go-to for my non-techie friends to ask for advice on basically everything - from women asking it for relationship advice and medical questions, to guys with business ideas and lawsuit stuff.

chpatrick 2 days ago | parent | prev | next [-]

Consumer local AI? Maybe.

On the other hand, everyone non-technical I know under 40 uses LLMs, and my 74-year-old dad just started using ChatGPT.

You could use a search engine and hope someone answered a close enough question (and wade through the SEO slop), or just get an AI to actually help you.

jimbokun a day ago | parent | prev [-]

“Do my homework assignment for me.”

extraduder_ire 2 days ago | parent | prev | next [-]

Dell is less beholden to shareholder pressure than others; Michael Dell has owned 50% of the company since it went public again.

pmdr 2 days ago | parent | prev | next [-]

Meanwhile we got Copilot in Notepad.

tombert a day ago | parent | prev | next [-]

I think part of the issue is that it's hard to be "exciting" in a lot of spaces, like desktop computers.

People have more or less converged on what they want in a desktop computer over the last ~30 years. I'm not saying that there isn't room for improvement, but I am saying that I think we're largely at the state of "boring", and improvements are generally going to be more incremental. The problem is that "slightly better than last year" really isn't a super sexy thing to tell your shareholders. Since the US economy has basically become a giant Ponzi scheme based more on vibes than actual solid business, everything sort of depends on everything being super sexy and revolutionary and disruptive at all times.

As such, there are going to be many attempts by companies to "revolutionize" the boring thing that they're selling. This isn't inherently "bad" - we do need to inject entropy into things or we wouldn't make progress - but a lazy and/or uninspired executive can try to "revolutionize" their product by hopping on the next tech bandwagon.

We saw this nine years ago with "Long Blockchain Iced Tea" [1], and probably way farther back, all the way to antiquity.

[1] https://en.wikipedia.org/wiki/Long_Blockchain_Corp.

nikanj 2 days ago | parent | prev | next [-]

Companies don't really exist to make products for consumers; they exist to create stock value for investors. And the stock market loves AI.

bluGill 2 days ago | parent | next [-]

The stock market has always been about whatever is the fad in the short term, and whatever produces value in the long term. Today AI is the fad, but investors who care about fundamentals have always cared about pleasing customers, because that is where the real value has always come from. (Though be careful - not all customers are worth having; some wannabe customers should not be pleased.)

ehnto 2 days ago | parent | prev | next [-]

As someone pointed out, Dell is 50% owned by Michael Dell. So it's less influenced by this paradigm.

vel0city 13 hours ago | parent | prev | next [-]

The will of the stock market doesn't influence Dell, they're a privately held corporation. They're no longer listed on any public stock market.

sieabahlpark 2 days ago | parent | prev [-]

[dead]

ivanjermakov 2 days ago | parent | prev | next [-]

Treating consumers as customers, good.

PunchyHamster 2 days ago | parent | prev [-]

There is a place for it, but it is insanely overrated. AI overlords are trying to sell an incremental (if in places pretty big) improvement in tools as a revolution.