mg 4 days ago

The question is how fast.

I remember in the late 90s, me and some friends discovered the internet and went "Woah! Letters, books, shops, work, study, gaming ... pretty much everything will take place in this virtual realm soon!".

We were right. Except for the "soon".

When the first internet café opened, we started spending a lot of our time there, meeting lots of interesting people. Everything outside the internet already seemed anachronistic to us.

What I didn't expect back then was that even replacing letters with email (which already worked!) would take another painstaking ... 20 years!

Today, my work is already massively transformed via neural networks. Pretty much everything I do starts with an interaction with a neural network. Usually a question to an LLM. And then LLMs or other networks are involved all the way up to finishing the task. Thinking about how I worked before these large neural networks came up makes me shiver. How cumbersome. How anachronistic.

But when I talk to people outside the tech sphere, most are not using LLMs or other neural networks at all.

Everything humans do will be deeply transformed. Even more so than via the internet. Will it take 20 years again? Or is technological progress accelerating and this time it will take only 10 years? So far, my feeling is that it will be a bit faster, but not twice as fast.

But maybe it will be even more surprising. One future I can imagine is that people don't even need to adapt. Maybe we will see software simply do all aspects of a job. One can already imagine it for driving cars or making movies. It is harder to imagine it for managing a company, for example. But who knows. LLMs are already surprisingly good at creating chains of thoughts. Maybe we'll find ourselves in an unexpected future in a few years already, where human involvement in anything is just a burden to the process.

zer00eyz 4 days ago | parent | next [-]

> Pretty much everything I do starts with an interaction with a neural network.

Pretty much nothing I do starts this way.

Look, LLMs are interesting. I sure spend a lot less time writing basic one-off scripts because of them. The "extra step" of tossing emails to an LLM is just proofreading with less tedium.

LLMs gave everyone an intern that does middling work quickly, never complains, and doesn't get coffee.

We need them to be cheap (to run) and local/on owned hardware (for security and copyright reasons).

ericd 4 days ago | parent | prev [-]

If you go spend $5k on a MacBook Pro M4 Max with 128 GB of RAM and toss Ollama on it with Qwen2.5-72B, you have your local LLM, free to run as much as you like.

At first glance that might seem expensive, but then consider how insane it is that you can ask your laptop arbitrary questions and have it respond with really cogent answers, on almost any topic you can think of, without relying on a massive rack of gpu machines behind an api. It uses barely more power than an old incandescent bulb while doing it!
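As a back-of-the-envelope check on why 128 GB is the right ballpark, here's a sketch. The 4-bit quantization and ~20% runtime overhead figures are my assumptions (4-bit is a common Ollama download default), not numbers from the comment:

```python
# Rough memory estimate for running a large model locally.
# Assumptions: weights quantized to 4 bits each, plus ~20% overhead
# for the KV cache and runtime buffers.

def model_memory_gb(params_billion: float, bits_per_weight: int,
                    overhead: float = 1.2) -> float:
    """Approximate RAM needed to hold the model and run inference."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

q4 = model_memory_gb(72, 4)     # ~43 GB: fits comfortably in 128 GB
fp16 = model_memory_gb(72, 16)  # ~173 GB: would not fit at all
print(f"q4: {q4:.0f} GB, fp16: {fp16:.0f} GB")
```

So a quantized 72B model leaves plenty of headroom on a 128 GB machine, while the unquantized fp16 weights alone would exceed it.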

rohansood15 4 days ago | parent | next [-]

This. Some of the biggest arguments against AI/LLMs being ready for prime time are a result of ignorance around the current SoTA.

JoeMattiello 4 days ago | parent | prev [-]

This. I’m so glad I got a lower-end CPU but maxed out my RAM. I use LLMs over search engines.

Even just getting Cursor or BoltAI gives you unlimited LLM cloud access for the price of a few coffees.

rubymamis 4 days ago | parent [-]

Hi! Just curious: do you use BoltAI as a client for your on-device LLM?

Iulioh 4 days ago | parent | prev | next [-]

I don't think it will be as massive as you and a lot of people think.

The problem is reliability: you can't build on unstable foundations.

We would need some kind of library of "verified sentences" in LLMs, facts verified by experts, for them to be useful for a ton of tasks.

Right now, if you are using LLMs for anything that is not objectively verifiable (like programming, where a program either works or doesn't [let's not even talk efficiency or security]) or fluff (generating fancy text for non-crucial reasons), you are using the tool wrong.

simonw 4 days ago | parent | next [-]

If you want to work with verified facts, feed those facts in as part of your prompt. Most good LLMs accept 100,000 tokens now, and some accept 1,000,000+.
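A minimal sketch of what "feed those facts in as part of your prompt" can look like. The fact list and template below are invented for illustration; any chat-completion API will accept the resulting string as its user message:

```python
# Ground the model's answer by prepending verified facts to the prompt
# and instructing it not to go beyond them. Facts here are made up.

VERIFIED_FACTS = [
    "Acme Corp was founded in 1987.",
    "Acme Corp's headquarters are in Tucson, Arizona.",
]

def build_prompt(question: str, facts: list[str]) -> str:
    context = "\n".join(f"- {fact}" for fact in facts)
    return (
        "Answer using ONLY the facts listed below. If they are not "
        "sufficient to answer, say so instead of guessing.\n\n"
        f"Facts:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt("When was Acme Corp founded?", VERIFIED_FACTS)
```

The "say so instead of guessing" instruction is doing real work here: it gives the model an explicit escape hatch, which reduces the pressure to fabricate.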

Sateeshm a day ago | parent [-]

There is still no guarantee that what comes out will be fact based.

simonw a day ago | parent [-]

Right, if you want guarantees of anything you don't want statistical machine learning models.

In practice, I've found that the risk of LLMs hallucinating against well-chosen context is low enough that I rarely worry about it.

rohansood15 4 days ago | parent | prev | next [-]

That's not true. The Internet was inherently unreliable for a long time—connection drops, packet losses, hardware failures—but that didn't stop it from being a platform for incredible value.

You can build valuable, reliable systems on top of unreliable foundations. That's how humanity has progressed over the centuries.

Iulioh 4 days ago | parent [-]

It's still unreliable because humans are, but a human lying or saying incorrect things is different from an AI confidently doing it.

First of all, humans and websites have reputations; with GPT you just hit refresh and you are talking to an entirely different entity, and everything it said is gone.

I feel like there's a difference.

jajko 4 days ago | parent | prev [-]

Yeah, we are seeing a massive hype wave, with tons of folks trying new ideas and startups, and obviously a place like HN can make you feel a massive revolution across all of mankind is happening, at least on the level of the internet's arrival. Novelty for most folks will eventually wear off, though. This isn't cute-cat-videos-level continuous interest for the masses; I'd say it's more like Wikipedia, now just the place to get some unbiased info.

The reality is that you get a somewhat smarter assistant than ever before, still not very trustworthy, but one that can already save some time if used right. Good luck hearing that from folks riding the hype wave.

ryandrake 3 days ago | parent [-]

Definitely a hype wave, and we've seen these things come and go before. Crypto, NFTs, Blockchain, Virtual Assistants, Autonomous Vehicles, Wearables, Big Data, VR/AR, 3D TV. In five years, we'll be able to add LLMs/AI to the list.

I'm sure if we look, we can find a five year old HN post that starts out: "My work is already massively transformed via crypto. Pretty much everything I do starts with an interaction with a blockchain."

rcarmo 4 days ago | parent | prev | next [-]

In the mid-90s, I decided to _not_ pick AI as a major because we had just gone through the original AI winter and most academic research on the topic was getting defunded in favor of high-speed networking and video conferencing.

To this day, and even though I am deeply involved with AI, I do not regret that decision—but I am constantly reminded that the expectations practitioners have are _way_ off what most people will actually want.

cryptoz 4 days ago | parent | prev | next [-]

I think they are, though, just not consciously. I’m finding it rarer and rarer to be typing on a device without some AI lurking in it. Email, docs, mobile keyboards, and soon Apple Intelligence searching your photos, etc.

I can’t even type a note to myself in Gmail without “Polish!” popping in to tell me that I’m shit at writing a good note-to-self and it needs polishing. (Wherein it typically rewrites it to “Dear customer service, …”)

Every Google search is using an LLM now, right? At least I can’t figure out how to stop it from happening.

conradludgate 4 days ago | parent | next [-]

I made a note tool last week that was designed with this problem in mind: push the polishing step to later.

It's an append-only system, and the happy path for note-taking is a simple keybind popup that only displays 30 characters of input on screen. I cannot go back and edit notes later, but I can open a larger app to write new ones that reference (or supersede) the originals, which lets me refine a note.

xk_id 4 days ago | parent | prev [-]

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

benedictevans 4 days ago | parent | prev | next [-]

See slides 58 and 59 - this can take a while.

ChatGPT got to 100m users much faster than anything else because it's riding on all the infrastructure we already built in the last 20 years. To a consumer, it's 'just' a website, and you don't have to wait for telcos to build broadband networks or get everyone to buy a $600 smartphone.

But, most people go to the website and say 'well, that's very cool, but I don't know what I'd use it for'. It's very useful for coding and marketing, and a few general purposes, but it isn't - YET - very helpful for most of the things that most people do all day. A lot of the presentation is wondering about this.

fragmede 4 days ago | parent | next [-]

Only OpenAI knows for sure, but so many non-tech people I know use ChatGPT as a sounding board for whatever: "My boyfriend sent me this text, how should I respond?" or "Teach me about investing." There are a bunch of people I know who don't use ChatGPT; I'm just surprised that people who I didn't think would have a use for it have found it very useful.

mg 4 days ago | parent | prev [-]

How long is a while, and what is it that most people do all day?

A quick Google search for "most common job" came back with

    Cashier

    A cashier works in a retail environment and
    processes transactions for a customer's purchase.
I wouldn't be surprised if robots can do that on their own in 10 years.

Gud 4 days ago | parent [-]

Robots can already do that; they are used at large chains (McDonald's) all the time.

What they can't do is call the police when the hobo gets too wild, fix the inevitable bug in the process (by doing some fourth-level menu bypass), and other random stuff that might pop up.

And when the robot can do all that, humans are no longer viable as economic entities and will be outcompeted.

delfinom 2 days ago | parent [-]

The problem is, the robot has to know what I want it to do without me having to dictate it.

That's the beauty of human interaction: it can be massively truncated, down to even just pointing a finger.

sincerecook 3 days ago | parent | prev | next [-]

The Internet was fun and interesting because there were people on it creating fun and interesting projects and there were very few restrictions. Now it's just a shopping mall where people shout at each other.

AI is a trash compactor run through an HR department. It's already neutered and censored to hell. If you want to get the official, approved version of something, ask an AI. If you think that's fun, you've probably forgotten what it was like to be young.

th0ma5 4 days ago | parent | prev | next [-]

I think this is a good observation of the past, but applying these patterns to AI is like when people applied them liberally to cryptocurrency and claimed it would replace the internet protocol, which was a certain kind of exuberance of the ill informed.

simonw 4 days ago | parent [-]

The difference, as always, is that AI is useful - in many different ways.

Crypto was useful for speculative investment and occasionally for working around money transfer restrictions.

jazzyjackson 2 days ago | parent | next [-]

IMO they are the same: they make certain things easier than they used to be. Not everyone needs to make transactions without a bank looking over their shoulder, and I don't personally know anyone whose life has been changed by being able to type 100x faster (or conversely, produce summaries 1/100th the length of the input).

I keep trying to apply small models to tasks like producing structured data from unstructured data (using LM Studio's schema parameter), and it's really neat that I can extract data from work emails. But every once in a while it will hallucinate a new phone number when the one I needed was in the prompt, so it still hasn't met the kind of reliability I want in order to automate anything where dollars are on the line.
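One cheap guard against that exact failure mode is to check, after extraction, that every extracted value literally appears in the source text and flag anything that doesn't. A sketch; the email and field names are invented for illustration:

```python
import re

def digits(s: str) -> str:
    """Strip everything but digits, so formatting differences don't matter."""
    return re.sub(r"\D", "", s)

def validate_extraction(source: str, extracted: dict) -> dict:
    """Keep only fields whose normalized value occurs in the source;
    anything the model may have hallucinated becomes None for review."""
    checked = {}
    for field, value in extracted.items():
        if digits(value) and digits(value) in digits(source):
            checked[field] = value
        else:
            checked[field] = None
    return checked

email = "Hi, call me at (555) 010-4477 to discuss the invoice."
good = validate_extraction(email, {"phone": "555-010-4477"})  # kept
bad = validate_extraction(email, {"phone": "555-010-9999"})   # flagged
```

This doesn't make the model reliable, but it converts silent fabrication into a visible gap, which is usually the difference between "annoying" and "dollars on the line".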

xk_id 4 days ago | parent | prev [-]

What AI is useful for makes cryptocurrencies look marginally more useful by comparison.

illiac786 3 days ago | parent | prev | next [-]

I agree with everything except the use of the word “anachronistic”. It’s using LLM for everything today that is anachronistic, because almost everyone today does not ;)

tapirl 4 days ago | parent | prev | next [-]

> ... replacing letters with email ...

Partly true, but not totally. The role of email differs in many ways from that of old-fashioned letters. And letters were not replaced only by email, but also by more modern forms of communication, and those modern forms replaced not only letters but the telephone, etc.

And the letter is just one specific case. There are many old things that have not been totally replaced.

The same goes for AI. It will replace some things, but not everything.

interludead 4 days ago | parent | prev | next [-]

Even when the potential is obvious, it takes time for people to adapt, trust and integrate it into daily life

epigramx 4 days ago | parent | prev | next [-]

If everything you do starts by asking an LLM, then you start with superficial research, because frankly it was never anything better than a fancy Google search.

zifpanachr23 4 days ago | parent | prev | next [-]

This is probably related to cynicism (but also maybe wisdom from a life that at times was not easy) I've developed over the years, but I can't help but get seriously dystopian vibes from your post. More than the usual boosters, because it seems to be a little more honest and a little less cognizant of what I think the real moral hangups are that people tend to have with AI. So you seem genuine in some sense in a way many others aren't.

I don't for one second really suspect it will be the case (not for the usual technical criticisms although I'm skeptical there as well, but more that I don't think it would be socially sustainable for an extended period of time)...but let's for a moment take your last paragraph at face value and in good faith.

I mean...what exactly is it that you are advocating for or accepting? Even if we get some kind of very generous UBI, there's something about human nature that makes me suspect the consequences of this would be an almost guaranteed miserable existence for pretty much everyone.

Even in the best case scenario, where the results of this transformation are kept under control and distributed in a reasonable manner and the whole thing doesn't cause a social and political meltdown...what is everybody going to do? There's some amount of wisdom in the old saying that "idle hands are the devil's plaything".

Thats the real issue I am the most concerned about and that seems to be the least often addressed by big AI boosters and detractors (I realize both of these camps often have ulterior motives). I suspect many are feeling some amount of concern like that...why is this (I would argue most fundamental) question about the impact of AI never talked about?

I don't want to hear anything about some big terminator style fight against AI or about how wonderful and unpredictable the inevitable future of WALL-E style luxury gay space communism is going to be...none of those discussion points get to the heart of what makes many people so uncomfortable with the concept...and I think the people believing in some version of that second scenario being at all socially plausible is what gives me the most pause. It makes the terminator scenario almost seem like a preferred outcome if we were given a binary choice...in reality I think most would prefer neither and would agree with me in saying that we aren't even discussing the right issues w.r.t an "AI gets much better" potential future.

mg 4 days ago | parent [-]

I'm not advocating, just trying to look into the future.

Superhuman AI seems to be a building block of it. And deep transformation of how we work will come with it.

So I raised the question of the timeline.

That's all.

zifpanachr23 4 days ago | parent [-]

Fair enough. I guess the "deep transformation of how we work" part is what I don't really understand. Like I said, there's certainly some cynicism I've got and so I am going to have a tendency to extrapolate potential negatives from everything left unsaid about that "transition".

I think it's partly these hangups about the situation that I have that give me the tendency to assume a post like yours is necessarily "advocating" as well, so my apologies if that was not your intent.

mg 4 days ago | parent [-]

    I guess the "deep transformation of how we work"
    part is what I don't really understand.
At the moment, it looks like we'll either become completely obsolete in the context of work, or we'll morph into cyborgs with vastly greater knowledge and processing capabilities than we have today.

zifpanachr23 4 days ago | parent [-]

Right. Those both sound horrifying to me.

I suppose I'll leave it at that.

If those don't terrify you then I guess we just don't really have similar temperament as people and that's totally okay with me. I did enjoy discussing it with you.

dyauspitr 4 days ago | parent | prev [-]

I’m the same. I haven’t read the responses to your comment yet but I guarantee there are some folks pulling out their hair in disbelief about what you could possibly be using LLMs for, how the code can never stand up to what they’re writing etc. I don’t understand how you can’t find chatGPT useful. I use it at least 30 times in any given day.

somerandomqaguy 4 days ago | parent | next [-]

It's hilariously wrong at times, and the problem is when people take what an LLM spits out as fact.

Just one example: in a mild debate I was having about cars, the other person asked how fast a Golf Type R could get within a specific distance, and ChatGPT spat out a number that the other person accepted as fact, but that I already knew was too high.

What ChatGPT had done was take the published 0-to-60 time and extrapolate a linear distance-versus-velocity formula. Which was impressive, granted, but wrong; velocity gains taper off sharply with distance, nowhere near linear.

It's a great tool, but I think a lot of people are just taking what it spits out without slowing down to question if the output makes sense or not.
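To see how far off that linear extrapolation gets: even under a generous constant-acceleration model, speed grows with the square root of distance covered, so quadrupling the distance only doubles the speed, while the linear model quadruples it. A sketch (the 4.7 s 0-60 time is an illustrative figure, not a verified spec for any particular car):

```python
import math

T_60 = 4.7               # assumed seconds to reach 60 mph
V_60 = 60 * 0.44704      # 60 mph in m/s
a = V_60 / T_60          # implied constant acceleration
d_60 = V_60 * T_60 / 2   # distance covered while reaching 60 mph

def v_linear(d: float) -> float:
    """Linear distance-vs-velocity extrapolation (the mistake)."""
    return V_60 * d / d_60

def v_const_accel(d: float) -> float:
    """Constant-acceleration model: v = sqrt(2*a*d). Still optimistic,
    since real cars are power-limited and fight drag at speed."""
    return math.sqrt(2 * a * d)

d = 4 * d_60  # four times the 0-60 distance
# linear model predicts 4x the speed; the sqrt model predicts only 2x
```

Real cars fall short of even the square-root curve at high speed, so the linear answer ChatGPT produced was guaranteed to be too high.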

zifpanachr23 4 days ago | parent | prev [-]

It's highly dependent on what you are using it for, so I think the variability in usefulness is totally predictable.

It doesn't take being some fancy scientist-level programmer (I'm definitely not one) for my attempts at using AI to fall into that category. A lot of the time it's just down to niche platforms and libraries, things specific to our shop, the regulatory environment, or a thousand other issues of that nature. I imagine similar issues are incredibly widespread for basically anybody who isn't doing somewhat isolated greenfield work at a young company, or isn't spending tens of millions on custom training for their specific environment.

The "everything web, mostly open source, ship ship ship new code" style of work environment you find among young startups is not as common as it seems if you gauge your view of technology jobs from Hacker News. Given that most of the training of the most powerful models is basically web scraping, it's not surprising that they are seriously lacking in other areas. And I'm not sure to what extent they can be expected to improve there. Besides the obvious issue of uploading internal documentation to give an external LLM better prompting, the thing still has to use public training data to make predictions about internal libraries and whatnot that may well be old, anachronistic, or batshit crazy, because the difference in volume between your internal software and everything posted publicly on the internet is massive.