sirwhinesalot 2 days ago
Your article, unlike the bizarre, desperate take from the poster above, is actually very good. We do not understand the features the neural net learns; that's 100% true (and really the whole point of them in the first place). For small image recognition models we can visualize the features and get an intuition for what they are doing, but it doesn't really matter. For even smaller models, we can translate the network into a classical AI model (a mixed integer program, for example) and run various "queries" on the model itself to learn, e.g., that the network recognizes the digit "8" by checking just 2 pixels of the image.

None of this changes the fact that we know what these things are and how they work, because we built them. Any comparison to our lack of knowledge of the human brain is ridiculous. LLMs are obviously not conscious; they don't even have real "state". They are an approximation of a pure function f(context: List<Token>) -> Token that is run in a loop.

The only valid alarmist take is that we're using black box algorithms to make decisions with serious real-world impact, but that is true of any black box algorithm, not just the latest and greatest ML models.
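To make the "approximated pure function run in a loop" framing concrete, here is a minimal Python sketch. It is an illustration only: next_token_logits is a hypothetical stand-in for a trained transformer's forward pass, and greedy decoding is assumed for simplicity.

    # Minimal sketch of "an approximated pure function run in a loop".
    # next_token_logits is a hypothetical stand-in for a trained model's
    # forward pass: deterministic given its (frozen) weights and the context.
    from typing import Callable, List

    Token = int

    def generate(next_token_logits: Callable[[List[Token]], List[float]],
                 context: List[Token],
                 max_new_tokens: int,
                 eos: Token) -> List[Token]:
        out = list(context)
        for _ in range(max_new_tokens):
            logits = next_token_logits(out)   # pure function of the context so far
            tok = max(range(len(logits)), key=logits.__getitem__)  # greedy argmax
            out.append(tok)                   # the only "state" is the growing context
            if tok == eos:
                break
        return out

Sampling with a temperature just replaces the argmax with a draw from softmax(logits / T); the "no real state" point is that everything the model "remembers" lives in the context list it is handed on each call.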
dpoloncsak 2 days ago
It's a complex adaptive system, right? Isn't that the whole idea? We know how each part of the system works by itself. We know all the inputs, and we can measure the outputs. But even if I actually understood the math, I still could not tell you "if you prompt x, the model will return y" with 100% confidence.
ninetyninenine 2 days ago
https://youtu.be/qrvK_KuIeJk?t=284

I don't appreciate your comments. It's especially rude to call me desperate and bizarre. Take a look at the video above, where Geoffrey Hinton, basically the godfather of AI, directly contradicts your statement. I sincerely hope you self-reflect and are able to realize that you're the one completely out of it.

Realistically, the difference comes down to a semantic issue. We both agree that there are things we don't understand and things that we do understand. It's just that your aggregate generalization is "overall, we do understand" and mine is "we don't understand shit."

Again: your aggregate is wrong. Utterly. Preeminent experts are on my side.

If we did understand LLMs, we'd be able to edit the individual weights of each neuron to remove hallucinations. But we can't. We literally know a solution to the hallucination problem exists. It's in the weights. We know a certain configuration of weights can remove the hallucination. But even for a single prompt-and-answer pair, we do not know how to modify the weights so that the hallucination goes away. We can't even quantify, formally define, or model what a hallucination is.

We describe LLMs in human terms and we manipulate them through prompts and vague psychological methods like "chain of thought." You think planes can work like this? Do we psychologically influence planes into sort of flying correctly? Literally no other engineered system on earth has a gap in understanding this large.
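To illustrate the weight-editing point with a toy sketch (the torch.nn.Linear below is only a stand-in for a real LLM's billions of parameters, and the index and value are made up): the mechanical act of changing a weight is trivial; what nobody knows is which weights to change, and to what values, to remove a specific hallucination without breaking other behaviour.

    # Toy illustration: editing one weight is mechanically trivial.
    # The Linear layer stands in for a real LLM's parameters; the index (0, 0)
    # and the new value 0.0 are arbitrary.
    import torch

    model = torch.nn.Linear(4, 4)

    with torch.no_grad():
        model.weight[0, 0] = 0.0   # the "edit" itself: one line
    # The unsolved part is the mapping from "the model hallucinates fact X" to
    # a specific set of (index, value) edits that fixes it, and only it. No
    # such procedure is known, even for a single prompt/answer pair.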