Zacharias030 | 4 days ago
Wouldn't you say that now, finally, what people call AI combines subsymbolic systems (gradient descent) with search and with symbolic systems (tool calls)? I had a professor in AI who worked only on symbolic systems such as SAT solvers, Prolog, etc., and the combination of the two seems really promising. Oh, and what would be really nice is another level of memory, or a fast-learning ability, that goes beyond burning knowledge in through training alone.
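[To make the "subsymbolic plus symbolic via tool calls" point concrete, here is a minimal sketch, not any vendor's actual API: the model (the subsymbolic part) emits a structured tool request, and an exact symbolic solver answers it. The names solve_sat and handle_tool_call are hypothetical, and the brute-force satisfiability check stands in for a real solver such as MiniSat or Z3.]

    from itertools import product

    def solve_sat(clauses):
        """Brute-force SAT check over DIMACS-style clauses:
        each clause is a list of non-zero ints, negative = negated literal."""
        variables = sorted({abs(lit) for clause in clauses for lit in clause})
        for bits in product([False, True], repeat=len(variables)):
            assignment = dict(zip(variables, bits))
            if all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in clauses):
                return assignment   # satisfying assignment found
        return None                 # unsatisfiable

    def handle_tool_call(name, args):
        """What an LLM runtime would run when the model requests a tool (hypothetical dispatcher)."""
        if name == "solve_sat":
            return solve_sat(args["clauses"])
        raise ValueError("unknown tool: " + name)

    # The model might emit: {"tool": "solve_sat", "args": {"clauses": [[1, -2], [2], [-1, 2]]}}
    # and the symbolic side answers exactly, no gradient descent involved:
    print(handle_tool_call("solve_sat", {"clauses": [[1, -2], [2], [-1, 2]]}))
    # -> {1: True, 2: True}

[The point of the split is that the learned model only has to decide *when* to call the solver and how to encode the problem; the correctness of the answer comes from the symbolic side.]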
dmead | 4 days ago | parent
I had such a professor as well, but those people used to use the more accurate term "machine learning". There was also a wide understanding that those architectures were trying to imitate small bits of what we understood to be happening in the brain (see Minsky and Papert's Perceptrons, etc.). The hope, as I understood it, was that there would be some breakthrough in neuroscience that would let computer scientists pick up the torch and simulate what we find in nature. None of that seems to be happening anymore; we're just interested in training enough to fool people. "AI" companies investing in brain science would convince me otherwise. At this point they're just trying to come up with the next money-printing machine.