ants_everywhere 4 days ago

> We need to move past the humans vs ai discourse it's getting tired.

You want a moratorium on comparing AI to other forms of intelligence because you think it's tired? If I'm understanding you correctly, that's one of the worst takes on AI I've ever seen. The whole point of AI is to create an intelligence modeled on humans and to compare it to humans.

Most people who talk about AI have no idea what the psychological baseline is for humans. As a result, their understanding is poorly informed.

In this particular case, they evaluated models that do not have SOTA context window sizes, i.e. they have small working memories. The AIs are behaving exactly like human test takers with working memory, attention, and impulsivity constraints [0].

Their conclusion -- that we need to defend against adversarial perturbations -- is obvious: I don't see anyone taking the opposite view, and I don't see how this really moves the needle. If you can MITM the chat, there's a lot of harm you can do.

This isn't some major new attack. Science.org covered it alongside peacocks being lasers because it's lightweight fun stuff for their daily roundup. People like talking about cats on the internet.

[0] for example, this blog post https://statmedlearning.com/navigating-adhd-and-test-taking-...

orbital-decay 4 days ago | parent | next [-]

>The whole point of AI is to create an intelligence modeled on humans and to compare it to humans.

According to who? Everyone who's anyone is trying to create highly autonomous systems that do useful work. That's completely unrelated to modeling them on humans or comparing them to humans.

saurik 3 days ago | parent | next [-]

But since these things are more like humans than computers, building these autonomous systems is going to take full industrial engineering, not just software engineering: pretend you are dealing with a surprisingly bright yet ever-distracted employee who doesn't really care about their job, and ensure they can provide value within the structure you place them in without endangering your process, instead of pretending the LLM is some kind of component that has any hope of ever being as reliable as a piece of software. Organizations of humans can do amazing things despite being made of extremely flawed beings, and figuring out how to use these LLMs to accomplish similar things is going to involve more of the skills of a manager than of a developer.

somenameforme 3 days ago | parent | next [-]

Their output is in natural language, and that's about the end of the similarities with humans. They're token prediction algorithms, nothing more and nothing less. That can achieve some absolutely remarkable output, probably because our languages (both formal and natural) are absurdly redundant. But the next token being a word, instead of e.g. a ticker price, doesn't suddenly make them more like humans than computers.
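
To make "token prediction" concrete, here's a toy bigram predictor in plain Python (deliberately crude, nothing like a real LLM's internals, but the interface is the same: context in, most likely next token out):

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate".split()

    # Count which token follows which: a bigram table standing in
    # for the learned distribution a real model would produce.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(token):
        # Most likely next token given the previous one.
        return following[token].most_common(1)[0][0]

    tokens = ["the"]
    for _ in range(4):
        tokens.append(predict_next(tokens[-1]))
    print(" ".join(tokens))  # "the cat sat on the"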

nisegami 3 days ago | parent [-]

I see this "next token predictor" description being used as a justification for drawing a distinction between LLMs and human intelligence. While I agree with that description of LLMs, I think the concept of "next token predictor" is much, much closer to describing human intelligence than most people consider.

somenameforme 3 days ago | parent [-]

Humans invented language from nothing. For that matter, we went from a collective knowledge not far beyond 'stab them with the pokey end' to putting a man on the Moon. And we did it in the blink of an eye, considering how inefficient we are at retaining and conferring knowledge over time. Have an LLM start from the same basis humanity did and it will never produce anything, because the next token needed to get from [nothing] to [man on the Moon] simply does not exist for an LLM until we add it to its training data.

flir 3 days ago | parent | prev | next [-]

It's got an instant-messaging interface.

If it had an autocomplete interface, you wouldn't be claiming that. Yet it would still be the same model.

(Nobody's arguing that Google Autocomplete is more human than software - at least, I hope they're not).

dotancohen 3 days ago | parent | prev | next [-]

By whoever coined the term Artificial Intelligence. It's right there in the name.

Backronym it to Advanced Inference and the argument goes away.

ants_everywhere 3 days ago | parent | prev | next [-]

Go back and look at the history of AI, including current papers from the most advanced research teams.

Nearly every component is based on humans:

- neural net

- long/short term memory

- attention

- reasoning

- activation function

- learning

- hallucination

- evolutionary algorithm

If you're just consuming an AI to build a React app then you don't have to care. If you are building an artificial intelligence then in practice everyone who's anyone is very deliberately modeling it on humans.

orbital-decay 3 days ago | parent | next [-]

How far back do I have to look, and what definition do you use? Because I can start with theorem provers and chess engines of the 1950s.

Nothing in that list is based on humans, even remotely. Only neural networks were a vague form of biomimicry early on, and they still have academic biomimicry approaches today, all of which suck because they map poorly to available semiconductor manufacturing processes. Attention is misleadingly called that, reasoning is ill-defined, etc.

LLMs are trained on human-produced data, and ML in general shares many fundamentals and emergent phenomena with biological learning (a lot more than some people talking about "token predictors" realize). That's it. Producing artificial humans or imitating real ones was never the goal nor the point. We can split hairs all day long, but the point of AI as a field since the 1950s is to produce systems that do something that is considered only doable by humans.

ants_everywhere 3 days ago | parent [-]

> How far back do I have to look

The earliest reference I know off the top of my head is Aristotle, which would be the 4th century BCE.

> I can start with theorem provers

If you're going to talk about theorem provers, you may want to include the medieval theory of obligations and their game-semantic-like nature. Or the Socratic notion of a dialogue in which arguments are arrived at via a back and forth. Or you may want to consider that "logos" from which we get logic means "word". And if you contemplate these things for a minute or two you'll realize that logic since ancient times has been a model of speech and often specifically of speaking with another human. It's a way of having words (and later written symbols) constrain thought to increase the signal to noise ratio.

Chess is another kind of game played between two people. In this case it's a war game, but that seems not so essential. The essential thing is that chess is a game and games are relatively constrained forms of reasoning. They're modeling a human activity.

By 1950, Alan Turing had already written about the imitation game (or Turing test) that evaluated whether a computer could be said to be thinking based on its ability to hold a natural language conversation with humans. He also built an early chess system and was explicitly thinking about artificial intelligence as a model of what humans could do.

> Attention is misleadingly called that, reasoning is ill-defined,

None of this dismissiveness bears on the point. If you want to argue that humans are not the benchmark and model of intelligence (which frankly I think is a completely indefensible position, but that's up to you) then you have to argue that these things were not named or modeled after human activities. It's not sufficient that you think their names are poorly chosen.

> Producing artificial humans or imitating real ones was never the goal nor the point.

"Artificial humans" is exactly the concept of androids or humanoid robots. You are claiming that nobody has ever wanted to make humanoid robots? I'm sure you can't believe that, but I'm at a loss for what point you're trying to make.

> 1950s is to produce systems that do something that is considered only doable by humans.

Unless this is a typo and you meant to write that this was NOT the goal, you're conceding my point that humans are the benchmark and model for AI systems. They are, after all, the most intelligent beings we know to exist at present.

And so to reiterate my original point, talking about AI with the constraint that you can't compare them to humans is totally insane.

portaouflop 3 days ago | parent [-]

You can compare them to humans but it's kind of boring. Maybe it's more interesting if you are an "ai" researcher.

janalsncm 3 days ago | parent | prev | next [-]

Those terms sound similar to biological concepts but they’re very different.

Neural networks are not like brains. They don’t grow new neurons. A “neuron” in an artificial neural net is represented with a single floating point number. Sometimes even quantized down to a 4 bit int. Their degrees of freedom are highly limited compared to a brain. Most importantly, the brain does not do back propagation like an ANN does.

LSTMs have about as much to do with brain memory as RAM does.

Attention is a specific mathematical operation applied to matrices.
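
For anyone curious, the whole operation fits in a few lines of numpy (the standard scaled dot-product formulation, not any particular model's code):

    import numpy as np

    def attention(Q, K, V):
        # softmax(Q K^T / sqrt(d)) V: a weighted average of the rows of V,
        # with weights from query-key dot products. No synapses involved.
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V

    rng = np.random.default_rng(0)
    Q = K = V = rng.normal(size=(3, 4))  # three "tokens", four dims each
    print(attention(Q, K, V).shape)      # (3, 4)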

Activation functions are interesting because originally they were more biologically inspired and people used sigmoid. Now people tend to use simpler ones like ReLU or its leaky cousin. Turns out what’s important is creating nonlinearities.
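
Side by side, the difference is just the shape of the curve (a minimal numpy sketch):

    import numpy as np

    def sigmoid(x):
        # The biologically-flavored original: smooth and saturating.
        return 1.0 / (1.0 + np.exp(-x))

    def relu(x):
        # The modern default: clip negatives to zero. Nothing neural
        # about it, but it supplies the nonlinearity that matters.
        return np.maximum(0.0, x)

    x = np.linspace(-3.0, 3.0, 7)
    print(sigmoid(x))  # smooth curve from ~0.05 to ~0.95
    print(relu(x))     # [0. 0. 0. 0. 1. 2. 3.]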

Hallucinations in LLMs have to do with the fact that they’re statistical models not grounded in reality.

Evolutionary algorithms, I will give you that one although they’re way less common than backprop.

akoboldfrying 3 days ago | parent | next [-]

Neural networks are a lot like brains. That they don't generally grow new neurons is something that (a) could be changed with a few lines of code and (b) seems like an insignificant detail anyway.
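
In the spirit of "a few lines of code": growing a neuron in a dense layer amounts to appending a row to a weight matrix (a toy numpy sketch that glosses over how you'd initialize and train the new unit):

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(3, 4))   # a layer: 4 inputs -> 3 neurons

    # "Grow a new neuron": append one more row of input weights.
    W = np.vstack([W, rng.normal(size=(1, 4))])
    print(W.shape)                # (4, 4): the layer now has 4 neurons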

> the brain does not do back propagation

Do we know this? Ruling this out is tantamount to claiming that we know how brains do learn. My suspicion is that we don't currently know, and that it will turn out that, e.g., sleep does something that is a coarse approximation of backprop.

wizzwizz4 3 days ago | parent | next [-]

No, we're pretty sure brains don't do backprop. See e.g. https://doi.org/10.1038/s41598-018-35221-w

akoboldfrying 3 days ago | parent [-]

Do we know that backprop is disjoint from variational free energy minimisation? Or could it be that one is an approximation to or special case of the other? I Ctrl-F'd "backprop" and found nothing, so I think they aren't compared in the paper, but maybe this is common knowledge in the field.

wizzwizz4 3 days ago | parent [-]

Yeah: and people have made comparisons (which I can't find right now). Free energy minimisation works better for some ML tasks (better fit on less data, with less overfitting) but is computationally-expensive to simulate in digital software. (Quite cheap in a physical model, though: I might recall, or might have made up, that you can build such a system with water.)

daveguy 3 days ago | parent | prev [-]

Neural networks are barely superficially like brains in that they are both composed of multiple functional units. That is the extent of the similarity.

ants_everywhere 3 days ago | parent | prev [-]

Neural networks are explicitly modeled on brains.

I don't know where this "the things have similar names but they're unrelated" trope is coming from. But it's not from people who know what they're talking about.

Like I said, go back and read the research. Look at where it was done. Look at the title of Marvin Minsky's thesis. Look at the research on connectionism from the 40s.

I would wager that every major paper about neuroscience from 1899 to 2020 or so has been thoroughly mined by the AI community for ideas.

janalsncm 3 days ago | parent [-]

You keep saying people who disagree with you don’t know what they’re talking about. I build neural networks for a living. I’m not creating brains.

Just because a plane is named a F/A-18 Hornet doesn’t mean it shares flight mechanisms with an insect.

Artificial neural nets are very different from brains in practice, for the reasons I mentioned above, but also because no one is trying to build a brain; they are trying to predict clicks or recommend videos, etc.

There is software which does attempt to model brains explicitly. So far we haven’t simulated anything more complex than a fly.

root_axis 3 days ago | parent | prev | next [-]

You're anthropomorphizing terms of art.

littlestymaar 3 days ago | parent | prev | next [-]

Just because something is named after a biological concept doesn't mean it has anything to do with the original thing the name was taken from.

ants_everywhere 3 days ago | parent | next [-]

Name collisions are possible, but in these cases the terms are explicitly modeled on the biological concepts.

littlestymaar 3 days ago | parent [-]

It's not a name “collision”; they took a biological name that somehow felt apt for what they were doing.

To continue oblio's analogy, when you use the “hibernation mode” of your OS, it has only a superficial similarity with how mammals hibernate during winter…

ants_everywhere 3 days ago | parent [-]

[flagged]

littlestymaar 3 days ago | parent [-]

Well, your statements show that you are much less informed than you believe.

ants_everywhere 3 days ago | parent | next [-]

As I've said, go look at the literature.

If you find in the literature any evidence for your theory that the people who invented these concepts believed they had no relationship to the biological concepts then please contribute it.

If you're unwilling to do the bare minimum required to take the opposite of my position, then you aren't really defending a point of view; you're just gainsaying reflexively.

littlestymaar 3 days ago | parent [-]

Flexing about literature you haven't read is poor taste, especially since you are misrepresenting it.

ants_everywhere 3 days ago | parent [-]

I have read it, and I'm not misrepresenting it in the least.

Since you're just resorting to deliberately lying about things, I don't see a reason to pursue this further.

littlestymaar 2 days ago | parent [-]

Why would anyone read the original perceptron paper in this century, though? It's easy to know someone is bullshitting when they claim to have read 70-year-old papers that aren't in themselves of any interest nowadays. (Like how many economists quote David Ricardo without having read him directly, because reading Ricardo is a very poor way of spending your time and energy.)

Funny how you spent lots of time in this thread talking shit to professionals of the field and then take offence when someone calls you the fool you are.

oblio 3 days ago | parent | prev [-]

Whoa, hold it right there!

Next you'll tell me that Windows Hibernate and Bear® Hibernate™ have nothing in common?

Sharlin 3 days ago | parent | prev [-]

What your examples show is that humans like to repurpose existing words to refer to new things based on generalizations or vague analogies. Not much more than that.

squidbeak 3 days ago | parent | prev [-]

What do you imagine the purpose of these models' development is if not to rival or exceed human capabilities?

senthe 3 days ago | parent | prev | next [-]

> The whole point of AI is to create an intelligence modeled on humans and to compare it to humans.

This is like saying the whole point of aeronautics is to create machines that fly like birds and to compare them to how birds fly. Birds might have been the inspiration at some point, but we learned how to build flying machines that are not bird-like.

In AI, there *are* people trying to create human-like intelligence, but the bulk of the field is basically "statistical analysis at scale". LLMs, for example, just predict the most likely next word given a sequence of words. Researchers in this area are trying to make these predictions more accurate, faster, and less computationally- and data-intensive. They are not trying to make the workings of LLMs more human-like.

Der_Einzige 3 days ago | parent | prev [-]

I mean, the critique of this premised on the idea that the AI system itself gets physically tired - specifically, that the homunculus we tricked into existence is tired - is funny to imagine.