dmead 4 days ago

Is there anything you can tell me that will help me drop the nagging feeling that gradient descent trained models will just never be good?

I understand all of what you said, but I can't get over the fact that the term AI is being used for these architectures. It seems like the industry is just trying to do a cool parlor trick in convincing the masses this is somehow AI from science fiction.

Maybe I'm being overly cynical, but a lot of this stinks.

atleastoptimal 4 days ago | parent | next [-]

The thing is AI is already "good" for a lot of things. It all depends on your definition of "good" and what you require of an AI model.

It can do a lot of things that are generally very effective. High-reliability semantic parsing from images is just one thing that modern LLMs are very good at.

dmead 4 days ago | parent [-]

You're right. I use it for API documentation and example use cases, especially in languages I don't use often.

But the other attribution people are making, that it's going to achieve (the marketing term) AGI and everything will be awesome, is clearly bullshit.

Zacharias030 4 days ago | parent | prev | next [-]

Wouldn't you say that now, finally, what people call AI combines subsymbolic systems ("gradient descent") with search and with symbolic systems (tool calls)?

I had a professor in AI who was only working on symbolic systems such as SAT solvers, Prolog, etc., and the combination of the two approaches seems really promising.
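The combination described above can be sketched as a toy tool-call loop: a (stubbed) subsymbolic model recognizes that a question needs exact logical reasoning and delegates it to a symbolic tool, here a brute-force SAT check. The clause encoding, tool registry, and `model_step` stub are all hypothetical names for illustration, not any real API.

```python
from itertools import product

def sat_solve(clauses, n_vars):
    """Brute-force SAT check. Clauses are lists of ints in DIMACS style:
    positive int i means variable i, negative -i means its negation."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return list(assignment)  # a satisfying assignment
    return None  # unsatisfiable

# Hypothetical tool registry the "model" can call into.
TOOLS = {"sat_solve": sat_solve}

def model_step(question):
    # Stand-in for a trained model: instead of guessing an answer
    # token-by-token, it emits a structured tool call.
    return ("sat_solve", {"clauses": [[1, 2], [-1, 2], [-2]], "n_vars": 2})

tool, args = model_step("Is (x or y) and (not x or y) and (not y) satisfiable?")
result = TOOLS[tool](**args)
print(result)  # None -> the formula is unsatisfiable
```

The division of labor is the point: the neural side handles fuzzy language understanding, while satisfiability, where pattern matching is unreliable, is answered exactly by the symbolic solver.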

Oh, and what would be really nice is another level of memory or fast learning ability that goes beyond burning in knowledge through training alone.

dmead 4 days ago | parent [-]

I had such a professor as well, but those people used to use the more accurate term "machine learning".

There was also wide understanding that those architectures were trying to imitate small bits of what we understood was happening in the brain (see Marvin Minsky's work on perceptrons, etc.). The hope, as I understood it, was that there would be some breakthrough in neuroscience that would let the computer scientists pick up the torch and simulate what we find in nature.

None of that seems to be happening anymore and we're just interested in training enough to fool people.

"AI" companies investing in brain science would convince me otherwise. At this point they're just trying to come up with the next money printing machine.

app134 4 days ago | parent [-]

You asked earlier if you were being overly cynical, and I think the answer to that is "yes."

We are indeed simulating what we find in nature when we create neural networks and transformers, and AI companies are indeed investing heavily in BCI research. ChatGPT can write an original essay better than most of my students. It's also artificial. Is that not artificial intelligence?

dmead 4 days ago | parent [-]

It is not intelligent.

Hiding the training data behind gradient descent and then attributing intelligence to the program that responds using this model is certainly artificial, though.

This analogy just isn't holding water.

tim333 4 days ago | parent [-]

Can't you judge on the results though rather than saying AI isn't intelligent because it uses gradient descent and biology is intelligent because it uses wet neurons?

Zacharias030 28 minutes ago | parent [-]

I strongly believe that our concept of intelligence is like the "god of the gaps" [0]. Intelligent is only what we haven't yet explained.

Chess computers surely must be intelligent, but then Deep Blue was "just search."

Go computers surely must have intelligence, because it requires intuition and search is intractable, but then it's "just CNN-based pattern matching."

Writing essays surely requires intelligence, because of the creativity, but then it's actually just a "stochastic parrot."

We keep attributing intelligence to what is currently out of reach, even as that set rapidly shrinks before our eyes.

It would be better to say that intelligence is an emergent phenomenon and that behavior that seems intelligent is intelligent.

[0] https://en.m.wikipedia.org/wiki/God_of_the_gaps

int_19h 3 days ago | parent | prev [-]

> It seems like the industry is just trying to do a cool parlor trick in convincing the masses this is somehow AI from science fiction.

If you gave a random sci-fi writer from 1960s access to Claude, I'm fairly sure they wouldn't have any doubts over whether it is AI or not. They might argue about philosophical matters like whether it has a "soul" etc (there's plenty of that in sci-fi), but that is a separate debate.