Nevermark 3 days ago

> The future of human content is being as human as possible. AI isn't going to replace that in our lifetimes, if ever.

I have been working on machine learning algorithms for a long time. Since the time when telling someone I worked on AI was a conversation killer, even with technical people.

AIs are going to understand people better than people understand people. That could happen in five years, maybe - many things are happening faster than expected - or in 15. But that is probably the outside range.

There is something about human psychology where the faster something changes, the less aware we are of the rate of change. We don't review the steps, and their increasing pace, that happened before we started paying attention to a technology. We just accept the new thing that finally gets our attention as if it were a one-off event, instead of an accelerating, compounding flood, and imagine it isn't really going to change much soon.

--

I know this isn't a popular view.

But what is happening is on the order of the transition to the first multi-cellular creatures, the first bodies with specialized cells, the first nervous systems, the first brains, or the first creatures to use language. Far bigger than advances such as writing or even the Internet. This is a transition to a completely new mode for the substrate of life and intelligence: the lack of dependency on any particular substrate.

"We", the new generation of things we are building, will have none of the limits and inefficiencies of our ancient slow DNA-style life. Or our stark physical bottlenecks on action, communication, or scaling, and our inability to understand or directly edit our own internal traits.

We will have to face many challenges in the coming years. It can't hurt to mindfully face them sooner rather than later.

arolihas 3 days ago | parent | next [-]

What machine learning algorithm have you worked on that leads you to believe they are capable of having a rich internal cognitive representation anywhere close to any sentient conscious animal?

Lerc 3 days ago | parent [-]

If you pick any well performing AI architecture, what would lead you to believe that they are not capable of having a rich internal cognitive representation?

The Transformer, well... transforms: at each layer it produces a different representation of the context. What is this but an internal representation? One cannot assess whether it is rich or cognitive without some agreement on what those terms mean.

LLMs can seemingly convert a variety of languages into an internal representation that captures the gist of any of them. That is at least a decent argument that the internal representation is 'rich'.
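You can even poke at those representations directly. A rough sketch of what I mean, assuming the Hugging Face transformers library and a small multilingual checkpoint (my choice of model, and the mean-pooling is deliberately crude - the similarity score is only suggestive):

    # Inspect the per-layer representations a Transformer builds, and compare
    # pooled sentence vectors across languages. Model choice and mean-pooling
    # are illustrative assumptions, not a rigorous probe.
    import torch
    from transformers import AutoTokenizer, AutoModel

    name = "bert-base-multilingual-cased"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name, output_hidden_states=True)

    def sentence_vector(text: str) -> torch.Tensor:
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            out = model(**inputs)
        # out.hidden_states is a tuple: the embedding layer plus one tensor per
        # Transformer layer - the successive re-representations of the context.
        print(f"{len(out.hidden_states)} stacked representations for {text!r}")
        return out.hidden_states[-1].mean(dim=1).squeeze(0)  # crude pooling

    en = sentence_vector("I like ice cream")
    de = sentence_vector("Ich mag Eis")
    sim = torch.nn.functional.cosine_similarity(en, de, dim=0)
    print(f"cross-language similarity: {sim.item():.3f}")

If translations of the same sentence land near each other in that space while unrelated sentences don't, that is at least a crude operationalisation of 'captures the gist'.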

As for cognitive? What assessment would you have in mind that would clearly disqualify something as a non-cognitive entity?

I think most of the confident people working in this field feel they can extend what they know now to make something that looks like a duck, walks like a duck, and quacks like a duck. If that is achieved, on what basis does anyone get to say "But it's not really a duck"?

I'm ok with people saying AI will never be able to perform that well because it lacks X, as long as they accept that if it does, one day, perform that well, then either X is present or X is not relevant.

arolihas 2 days ago | parent | next [-]

If you think we are only our observable behaviors, or that behavior is the only thing that matters to you, then I don't think it's worth getting into this argument. Consider this excerpt from https://scottaaronson.blog/?p=7094#comment-1947377

> Most animals are goal-directed, intentional, sensory-motor agents who grow interior representations of their environments during their lifetime which enables them to successfully navigate their environments. They are responsive to reasons their environments affords for action, because they can reason from their desires and beliefs towards actions.

> In addition, animals, like people, have complex representational abilities whereby we can reify the sensory-motor “concepts” which we develop as “abstract concepts” and give them symbolic representations which can then be communicated. We communicate because we have the capacity to form such representations, translate them symbolically, and use those symbols “on the right occasions” when we have the relevant mental states.

> (Discrete mathematicians seem to have imparted a magical property to these symbols, that *in them* is everything… no, when I use words it’s to represent my interior states… the words are symptoms; their patterns are coincidental and useful, but not where anything important lies).

> In other words, we say “I like ice-cream” because: we are able to like things (desire, preference), we have tasted ice-cream, we have reflected on our preferences (via a capacity for self-modelling and self-directed emotional awareness), and so on. And when we say, “I like ice-cream” it’s *because* all of those things come together in radically complex ways to actually put us in a position to speak truthfully about ourselves. We really do like ice-cream.

Lerc 2 days ago | parent [-]

> And when we say, “I like ice-cream” it’s because all of those things come together in radically complex ways to actually put us in a position to speak truthfully about ourselves. We really do like ice-cream.

Ok, now prove this is true. Can you do so without invoking unobservable properties? If you can, then the observable is all that matters; if you cannot, then you have no proof.

arolihas 2 days ago | parent [-]

Do I seriously have to prove to you that you like ice cream? Have you tried it? If you sincerely believe you are a husk whose language generation is equivalent to some linear algebra, then why even engage in a conversation with me? Why should I waste my time proving to you, a human, that you have a human experience, if you don’t believe it yourself?

Lerc 2 days ago | parent [-]

You don't need to prove to me that I like ice cream. You need to prove to me that you like ice cream - that you even have the capacity to like. Asserting that you have those experiences proves nothing, since even a one-line BASIC program (10 PRINT "I like Ice Cream") can do that.

How can you reliably deny the presence of an experience of another if you cannot prove that experience in yourself?

arolihas 2 days ago | parent [-]

I actually don’t need to prove to you that I’m more than a BASIC program. I mean, listen to yourself. You simply don’t live in the real world. If your mom died and we replaced her with a program that printed statements designed to mimic your conversations with her as closely as possible, you wouldn’t argue, hey, this program is just like my mom. But hey, maybe you wouldn’t be able to tell the difference behind the curtain, so it might as well be the same thing in your view, right? I mean, who are we to deny that mombot is just like your mom, via an emergent pattern somewhere deep inside the matrices, in some unprovable way /s. Just because I can’t solve the philosophical zombie problem for you, on demand and to your standard of rigor, doesn’t mean a chatbot has some equivalent internal experience.

Lerc 2 days ago | parent [-]

I'm not claiming that any particular chatbot has an equivalent experience; I'm claiming there is no basis beyond its behaviour for asserting that it does not.

With the duplicate mother problem: if you cannot tell the difference, then there is no reason to believe it is not a being of equivalent nature. That is not the same as identity. For a layman's take on that viewpoint, see Star Trek: TNG, Season 6, Episode 24. A duplicate Will Riker is created but is still a distinct entity (and, one might argue, more original, since he has been transported one fewer time). Acting the same is not the same as being the same entity. Nevertheless, that has no bearing on whether the duplicate is a valid entity in its own right.

You feel like I'm not living in the real world, but I am the one asking what basis we have for knowing things. You are relying on the presumption that the world reflects what you believe it to be. Epistemology is all about identifying exactly how much we know about the world.

arolihas a day ago | parent [-]

Ok, you can have your radically skeptical, hard-materialist rhetoric. I just don’t take it seriously, and I don’t think you do either. It’s like those people who insist there is no free will and yet go about their day clearly making choices and exercising it. If you want to say that technically everyone might as well be a philosophical zombie just reacting to things, and that your internal subjective experience is an illusory phenomenon, fine, you can say that as much as you want. In turn, I’ll just give up here, because you don’t even have a mind that could be changed. I can sit here and claim you’re the equivalent of a void that repeats radically skeptical lines at me. Maybe a sophisticated chatbot or program. Or maybe you’re actually a hallucination, since I can’t prove anything really exists outside of my senses. In which case I’m really wasting my time here.

Lerc 15 hours ago | parent [-]

Well, I'm a compatibilist, so I certainly believe in free will. I will also accept that any entity that consistently acts as if it has a will does actually have one. That has always been my point: treating things as what they appear to be is the only rational approach when you cannot prove or disprove the existence of the property in question.

It follows that you cannot deny a property to something that appears to have it, when you can neither prove that it lacks the property nor prove that it has it.

arolihas 10 hours ago | parent [-]

Fair enough. I wouldn’t say a program is acting with a will of its own just because it’s trained to respond to questions in a human-like way. That doesn't even say anything about its capacity to have a will. Language is a tool that can convey internal state; it is not the thing itself.

keiferski 3 days ago | parent | prev [-]

This is basically the Turing test, and like the Turing test it undervalues other elements that allow for differentiation between “real” and “fake” things. For example: if a thing looks, walks, and quacks like a duck, but lacks the biological heritage markers (which we can easily check for), then it won't be treated as equivalent to a duck. The social desire to differentiate between real and fake exists and is easily implementable.

In other words: if AIs/robots ever become so advanced that they look, walk, and talk like people, I expect there to be a system which easily determines if the subject has a biological origin or not.

This is way down the line, but in the nearer future it will probably just look like verifying someone’s real-world identity as part of the social media account creation process. The alternative is that billion-dollar corporations like Meta or YouTube just let their platforms become overrun with AI slop. I don’t expect them to sit on their hands and do nothing.

nyokodo 3 days ago | parent | prev | next [-]

> AI's are going to understand people better than people understand people.

Maybe, but very little of the “data” that humans use to build their understanding of humans is recorded and available. If it were, it’s not obvious it would be economical to train on. If it were economical, it’s not obvious that current techniques would actually work that well, and by definition no future techniques are known to work yet. I’m not inclined to say it will never happen, but there are a few reasons to predict it’ll prove significantly harder to build AI that gets out of the uncanny valley it’s currently in.

Nevermark 3 days ago | parent [-]

You are describing the current state of AI as if it were a stable point.

AI today is far ahead of where it was two years ago, and for many years before that breakout, deep learning models broke benchmark after benchmark.

There is no indication of any slowdown - quite the reverse: we are seeing dramatic acceleration of an already fast-moving field.

Both research and resources are pouring into major improvements in multi-modal learning and into learning by means other than human data: reinforcement learning, competitive learning, and interacting with the systems to be understood, both through simulated environments and directly.
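To make the "learning without human data" point concrete, here is a minimal sketch of the loop involved - the gymnasium package and the random stand-in policy are my own illustrative choices. The agent manufactures its own experience by acting in a simulated environment, with no human-authored text or labels required:

    # An agent generating its own training data from a simulated environment,
    # rather than learning from recorded human data. Illustrative only: the
    # random action choice is a stand-in for a learned policy.
    import gymnasium as gym

    env = gym.make("CartPole-v1")
    obs, info = env.reset(seed=0)

    experience = []  # (observation, action, reward, next_observation) tuples
    for _ in range(1000):
        action = env.action_space.sample()  # a real agent would choose here
        next_obs, reward, terminated, truncated, info = env.step(action)
        experience.append((obs, action, reward, next_obs))
        obs = next_obs
        if terminated or truncated:
            obs, info = env.reset()

    env.close()
    print(f"collected {len(experience)} transitions with no human labels")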

nyokodo 3 days ago | parent | next [-]

> You are describing the current state of AI as if it were a stable point.

No I’m not; I’m just not assuming that the S-curve doesn’t exist. There’s no guarantee that research will deliver the orders-of-magnitude improvement needed for AI to understand humanity better than humans do in the 5-to-15-year timeframe. There’s no guarantee that compute will continue to grow in volume and fall in price, and there are a few geopolitical reasons why it might become rarer and prohibitively expensive for some time. There’s no reason to assume capital will stay as available to fund both AI techniques and compute resources should there be any sign that investments might not eventually pay off. There’s also no reason to assume the global regulatory environment will remain amenable to rapid AI development. Maybe the industry threads all these needles, but there’s good reason to predict it won’t.

NemoNobody 3 days ago | parent | prev [-]

They are not using AI correctly to create such models. I'm not sure I want AGI right away, or even at all, so I'm keeping my epiphany close for now; but in the current field of AI, nothing will come of this, because it's not the right way.

As soon as the incredibly obvious, far too obvious realization is had, AI will make huge, tremendous leaps essentially overnight. Till then, these are just machine-like software - the best we've ever made, but nothing more than that.

tdeck 3 days ago | parent | prev | next [-]

This makes me think about attention span. Scenes in movies, sound bites, everything has been getting shorter over the decades. I know mine has gotten shorter, because I now watch highly edited and produced videos at 2x speed. Sometimes when I watch at 1x speed I find myself thinking "why does this person speak so slowly?"

Algorithmic content is likely to be even more densely packed with stimuli. At some point, will we find ourselves unable to attend to content produced by a human being because algorithmic content has wrecked our attention span?

keiferski 3 days ago | parent | prev | next [-]

Sorry but I don’t think this is much evidence of anything. The point at which an AI can imitate a live-streaming human being is decades away. By then, we will almost certainly have developed a “real Human ID” system that verifies one’s humanity. I wrote about this more here:

https://news.ycombinator.com/item?id=42154928

The idea that AI is just going to eat all human creative activity because technology accelerates quickly is not a real argument, nor does it stand up to any serious projections of the future.

CuriouslyC 3 days ago | parent | next [-]

AI is already eating its way up the creative ladder; this is 100% irrefutable. Interns, junior artists, junior developers, etc. are all losing jobs to AI now.

The main problem for AI is that it doesn't have a coherent creative direction or real output consistency. The second problem is that creativity thrives on novelty, but AI likes to output common things. The first is solvable, probably within 5 years, and is going to hollow out creative departments everywhere. The second is effectively unsolvable, though you might find algorithms that mask it temporarily (I'm not sure whether this is any different from what humans do).

We're going to end up with "rock star" teams of creative leads who have more agility in discovering novelty and curating aesthetics than AI models. They'll work with a small department made up of a mix of AI wranglers and artisans who can put manual finishing touches on AI-generated output. Overall, creative department sizes will probably shrink to 20% of current levels, but output will increase by 200%+.

conartist6 3 days ago | parent [-]

How can you think both that AI will do a soulless garbage job and that it will displace all the creative people who put their blood, sweat, and tears into doing art?

If you think it can be 20 times easier to make a movie, then it seems to me that making a creative work would be 20 times less impactful, since the market should quickly react to creative success by producing a ton of cheap knockoffs of your work, until the dead horse is so thoroughly beaten that it's no longer even worth paying an AI to spit out more of them.

lmm 3 days ago | parent [-]

> How can you think both that AI will do a soulless garbage job and that it will displace all the creative people who put their blood, sweat, and tears into doing art?

They don't; that's why they're saying 20% of creative departments will remain. The part that will go is the part that's already soulless: making ads in the style of the current trendy drama series, or what have you.

> If you think it can be 20 times easier to make a movie, then it seems to me that making a creative work would be 20 times less impactful, since the market should quickly react to creative success by producing a ton of cheap knockoffs of your work, until the dead horse is so thoroughly beaten that it's no longer even worth paying an AI to spit out more of them.

That seems pretty backwards, given Jevons paradox. The ease of writing knock-off fanfiction didn't mean people stopped writing novels. The average novel probably has a lot less impact now than in the past, but the big hits are bigger than ever.

mattmaroon 3 days ago | parent | prev [-]

How do you know it is decades away? A few years ago did you think LLMs would be where they are today?

Is it possible you’re wrong?

keiferski 3 days ago | parent [-]

Of course it’s possible I’m wrong. But if we make any sort of projection based on current developments, it would certainly seem that live-streaming AI indistinguishable from a human being is vastly beyond the capabilities of anything out today and, given current expenses and development times, is at least a few decades in the future. To me even that is an optimistic assumption, especially since live presence and video quality will continue to improve as well (making them harder to fake).

If you have a projection that says otherwise, I’d be glad to hear it. But if you don’t, then this idea is merely science fiction.

Making predictions about the future based on current accelerating developments is how you get people in the 1930s predicting flying cars by 2000.

jodrellblank 3 days ago | parent | next [-]

You imply that the technological development will stop, but that's not what happened to flying cars - they do and could exist. Since the 1930s the technology never stopped developing: aircraft went jet-powered, supersonic, to the edge of space, huge, light, heavy, more efficient, more affordable, safer; cars got faster, more reliable, safer, more efficient, bigger, smaller, more capable, self-driving; fuel got purer, engines got more power per kilogram, computer-aided design and modelling of airflow and stress patterns were developed, stronger and lighter materials were developed; and flying cars have been built:

Klein Vision AirCar, 2022: https://www.youtube.com/watch?v=5hbFl3nhAD0

Its predecessor, the AeroMobil from 2014, with prototypes starting in 1990: https://en.wikipedia.org/wiki/AeroMobil_s.r.o._AeroMobil

The Terrafugia Transition 'roadable aircraft' from 2009: https://en.wikipedia.org/wiki/Terrafugia_Transition

And the Moller SkyCar, of course, which never reached free flight, but was built and could fly.

The problems are regulation, cost, safety, the massive amount of human skill needed, the infrastructure needed, the lack of demand, etc. A Cessna 182 weighs under 900 kg; a Toyota Corolla weighs over 1,400 kg and has no wings and no tail. But if we collectively wanted VTOL flying cars enough to put a SpaceX level of talent and money into them, and were willing to pay more, work harder, and accept less comfort and more maintenance, we could have them. It's a bit like Hyperloop: a low-pressure tunnel with carriages rushing through it is not impossible, but it has more failure modes, more cost, and more design difficulties than high-speed rail and maglev, with almost no benefits over them.

keiferski 3 days ago | parent | next [-]

I didn't mean to imply that, but that's my fault for just quickly using it as an example.

I do think there will be some slowdown in technological development, but I think the situation will be similar to flying cars: regulations, social behavior, etc. will prevent AI from simply devouring everything. Specifically, I expect there to be a system which verifies humanity, and thus popular content will ultimately end up being verified-as-real content.

More on that in this other comment: https://news.ycombinator.com/item?id=42154928

jodrellblank 2 days ago | parent [-]

I don't think regulations and social behaviour could stop a Skynet-style "AI go FOOM" future. They might be able to stop LLM filler from covering the internet, but given that we can't stop carbon dioxide from filling the atmosphere, and "AI text" is less measurable, less easy to stop, and has less clear consequences, I'm not sure I'd bet on it.

Possibly in the sense of retreating to a small server where ID is verified and hiding away on Discord from the rest of the internet. That would be more like building a bunker to hide from the flying cars than like flying cars never happening.

player1234 2 days ago | parent | prev [-]

Do 1970 instead: over 50 years of stagnation.

nuancebydefault 3 days ago | parent | prev [-]

For many people, current AI-created videos are already confused with real videos, and vice versa.

When you follow the technology by browsing HN and see the latest advancements, it's easier to see or hear the differences, because you know what to look for.

If I see some badly encoded video on TV, especially fog or water surfaces, it immediately stands out, because I worked on video decoding back when it was of much lower quality. Most people will not notice.

unraveller 3 days ago | parent | prev | next [-]

People are judging AI by what abusers of AI put out there for lols and by what they themselves can wring out of it. They haven't yet seen what a bunch of AAA professionals with nothing to lose can build and align.

segasaturn 3 days ago | parent [-]

Billions of dollars and all of Silicon Valley's focus have been spent over the last two years trying to get AI to work; the "AAA professionals" are already working on AI, and I have yet to see an AI-generated product that's interesting or compelling.

hnthrowaway6543 3 days ago | parent | next [-]

The cornerstone of genAI hype is "AI for thee, not for me"

it's filled with tech people who are fucking morons thinking that everyone else is really dumb and loves slop and their algorithm will generate infinite content for the masses, making them rich. yet they don't consume it themselves, and aren't smart enough to recognize the cognitive dissonance

anyone who thinks unsupervised AI content is going to replace [insert creative output here] shouldn't be in the HN comment section, they should be using ChatGPT to generate comments they can engage with

instead they're going to get mad about me calling them a fucking moron in this comment. which, like, why get mad, you can go get an LLM to generate a comment that's much nicer and agrees with you

artistic_regard 3 days ago | parent | next [-]

> anyone who thinks unsupervised AI content is going to replace [insert creative output here] shouldn't be in the HN comment section, they should be using ChatGPT to generate comments they can engage with

Actually, their handlers should just be better monitoring their internet usage

lmm 3 days ago | parent | prev | next [-]

Do you think the people who make TikTok or Clash of Clans (or, hell, the Daily Mail) use their own products?

hnthrowaway6543 2 days ago | parent [-]

uh yes? Was this supposed to be some clever "gotcha!" that ended up being really stupid? Might want to run your clever comments by ChatGPT before you post next time chief

lmm 2 days ago | parent [-]

> uh yes?

Well, you're wrong, I know some of them.

hnthrowaway6543 2 days ago | parent [-]

oh word? i know a guy at Microsoft who doesn't use Excel, that means nobody at Microsoft ever uses Excel and they're just peddling crap to the dumb masses

unethical_ban 2 days ago | parent | prev [-]

Your argument is weak.

hnthrowaway6543 2 days ago | parent [-]

the fact that you're engaging w/ a random human on the internet proves that my argument is strong

if you don't like it, go talk to Claude about it

Claude will even use proper punctuation & grammar which i'm not doing. it's LITERALLY an objective improvement over this comment. why the hell are you engaging with this crap?

unraveller 2 days ago | parent | prev [-]

No AAA professionals in the entertainment industry have, as far as I'm aware, built a gen-AI model around their own creative vision. AAA nerds build the next version to distract from the flaws of the last and hope the API will get heavy use. I wouldn't expect the output to be compelling with that attitude. I expect repulsive or parody clips to continue until some great creatives feel like building their replacements properly.

burnished 3 days ago | parent | prev | next [-]

Yes, this seems possible. I only wonder if it is too fragile to be self-perpetuating. But we're here after all, and this place used to be just a wet rock.

topato 3 days ago | parent | prev [-]

I find your ideas intriguing, and I wish to subscribe to your newsletter.