| ▲ | fragmede a day ago |
Classical computer vision is no more AI than quicksort or BFS is. As the saying goes, ML is AI that works. But classic computer vision (CV) is hand-rolled algorithms like Eigenfaces to detect faces or Mixture of Gaussians for background subtraction. There's no magic black-box model in classic CV, no training on data, no generated pile of "if"s that no one understands. Just linear algebra written and implemented by hand. Not AI, not even ML.
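To make the "hand-rolled linear algebra" point concrete, here is a toy background-subtraction sketch. It is a deliberate simplification, not the actual Mixture of Gaussians method the comment names: a single exponential running average per pixel plus a fixed threshold, with made-up frame data. All names and parameter values are illustrative.

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Exponential running average of the scene: a one-Gaussian-per-pixel
    simplification of the Mixture of Gaussians background model."""
    return (1 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=25.0):
    """Pixels that deviate enough from the background model count as foreground."""
    return np.abs(frame - background) > threshold

# Hypothetical 4x4 grayscale "video": a static background, then one bright blob.
background = np.full((4, 4), 50.0)
frame = background.copy()
frame[1:3, 1:3] = 200.0  # a moving object enters the scene

mask = foreground_mask(background, frame)          # the 2x2 blob is flagged
background = update_background(background, frame)  # model slowly absorbs the frame
```

Everything here is explicit arithmetic on arrays: no training loop, no learned weights, which is exactly the distinction the comment is drawing.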
| ▲ | jrmg 20 hours ago |
ML, at least historically, has been considered a subset of AI, not a superset. Until the recent rise of LLMs, using human-designed deterministic algorithms to perform ‘intelligent’ tasks (like image processing, and especially image recognition) was absolutely considered AI. AI encompasses (encompassed?...) everything that uses computation to produce intelligence-like results. I fear the terminology battle has been lost, though: nowadays most people consider at least neural networks - perhaps also non-determinism of output - to be a prerequisite for something being “AI” - which is actually _less_ meaningful to the end-user.
| ▲ | throwway120385 a day ago |
You might say that the loss function is the human in the loop, deciding whether or not the algorithm addresses the problem.
| ▲ | Sharlin 18 hours ago |
You’re moving the goalposts, which is exactly what I referred to. Search algorithms and pathfinding have absolutely been AI historically - just go take a look at the table of contents of Norvig’s AI:MA, and I mean the 4th edition, published in 2020. A good 90% of the book is classical algorithmics. It’s pretty hilarious and history-blind to claim that AI is just 2015+ era deep-neural magic black boxes, as if the field wasn’t invented until then. As if neural networks themselves hadn’t already been tried several times by that point and found adequate for classification tasks but not much more. As if, for a long time, most AI researchers didn’t even want to talk about neural networks, because they feared their "cool" factor would take focus away from real AI research, and because the last time NNs were a big deal it was followed by one of the AI winters of broken promises and dwindling budgets.