ants_everywhere | 3 days ago
Go back and look at the history of AI, including current papers from the most advanced research teams. Nearly every component is based on humans:

- neural net
- long/short-term memory
- attention
- reasoning
- activation function
- learning
- hallucination
- evolutionary algorithm

If you're just consuming an AI to build a React app, then you don't have to care. If you are building an artificial intelligence, then in practice everyone who's anyone is very deliberately modeling it on humans.
orbital-decay | 3 days ago
How far back do I have to look, and what definition do you use? Because I can start with the theorem provers and chess engines of the 1950s.

Nothing in that list is based on humans, even remotely. Only neural networks were a vague form of biomimicry early on, and they still have academic biomimicry approaches today, all of which suck because they map poorly onto available semiconductor manufacturing processes. Attention is misleadingly named, reasoning is ill-defined, etc.

LLMs are trained on human-produced data, and ML in general shares many fundamentals and emergent phenomena with biological learning (a lot more than some people talking about "token predictors" realize). That's it. Producing artificial humans or imitating real ones was never the goal, nor the point.

We can split hairs all day long, but the point of AI as a field since the 1950s has been to produce systems that do something considered doable only by humans.
janalsncm | 3 days ago
Those terms sound similar to biological concepts, but they're very different.

Neural networks are not like brains. They don't grow new neurons. A "neuron" in an artificial neural net is represented by a single floating-point number, sometimes even quantized down to a 4-bit int. Its degrees of freedom are highly limited compared to a brain's. Most importantly, the brain does not do backpropagation the way an ANN does.

LSTMs have about as much to do with brain memory as RAM does. Attention is a specific mathematical operation applied to matrices.

Activation functions are interesting because they were originally more biologically inspired, and people used the sigmoid. Now people tend to use simpler ones like ReLU or its leaky cousin. It turns out what matters is introducing nonlinearities.

Hallucinations in LLMs come down to the fact that they're statistical models not grounded in reality. Evolutionary algorithms, I will give you that one, although they're way less common than backprop.
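To make the "attention is just matrix math" and "activations are simple nonlinearities" points concrete, here is a minimal numpy sketch. It follows the standard scaled dot-product attention formulation; the variable names (Q, K, V, d_k) and shapes are illustrative choices, not anything from the comment above.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    # Three matrix multiplies and a row-wise softmax: each output row is
    # just a weighted average of the rows of V. No memory, no "focus".
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V

# The activation functions mentioned above, each a one-liner:
sigmoid    = lambda x: 1.0 / (1.0 + np.exp(-x))      # the older, "biological" choice
relu       = lambda x: np.maximum(0.0, x)            # the common modern choice
leaky_relu = lambda x: np.where(x > 0, x, 0.01 * x)  # its "leaky cousin"

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```

All three nonlinearities do the same job here: without them, stacked layers would collapse into a single linear map.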
root_axis | 3 days ago
You're anthropomorphizing terms of art.
littlestymaar | 3 days ago
Just because something is named after a biological concept doesn't mean it has anything to do with the original thing the name was taken from.
Sharlin | 3 days ago
What your examples show is that humans like to repurpose existing words to refer to new things based on generalizations or vague analogies. Not much more than that. |