JimmyBuckets 10 hours ago
I respect Ilya hugely as a researcher in ML and quite admire his overall humility, but I have to say I cringed quite a bit at the start of this interview when he talks about emotions, their relative complexity, and their origin. Emotion is enormously complex, even before you consider all the systems in the body it interacts with, and many mammals have very intricate socio-emotional lives - take orcas or elephants. There is an arrogance I have seen that is typical of ML (having worked in the field) that makes its members too comfortable treading into adjacent intellectual fields they should have more respect and reverence for. Anyone else notice this? It's something physicists are often accused of also.
fidotron 9 hours ago
Many ML people treat other devs that way as well. It's a major reason the ML field has had to rediscover things like the application of quaternions to poses: they didn't think to check how existing practitioners did it, and even when they did, they assumed they'd clearly have a better idea. Their enthusiasm for shorter floats/fixed point is another fine example. Not all ML people are like this, though.
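For anyone unfamiliar, here's a rough illustration of what "quaternions for poses" means in practice - a minimal Python sketch (using scipy, purely for illustration, not anything from the comment) of composing orientations via quaternions instead of hand-rolled Euler-angle math:

    from scipy.spatial.transform import Rotation as R

    # Two orientations, defined via Euler angles for readability.
    roll_30 = R.from_euler("xyz", [30, 0, 0], degrees=True)
    pitch_45 = R.from_euler("xyz", [0, 45, 0], degrees=True)

    # Compose them with the quaternion product instead of chaining
    # Euler angles, which avoids gimbal lock and interpolates cleanly.
    combined = pitch_45 * roll_30
    print(combined.as_quat())               # unit quaternion [x, y, z, w]
    print(combined.apply([1.0, 0.0, 0.0]))  # rotate a vector by the pose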
fumeux_fume 10 hours ago
Yeah, that's bothered me as well. Andrej Karpathy does this all the time when he talks about the human brain and makes analogies to LLMs. He makes speculative statements about how the human brain works as though they were established fact.
ilaksh 8 hours ago
The question of how emotions function and how they might be related to value functions is absolutely central to that discussion and very relevant to his field. Doing fundamental AI research definitely involves adjacent fields like neurobiology etc.

Re: the discussion, emotions actually often involve high-level cognition -- it's just subconscious. Let's take a few examples:

- amusement: this could be something simple like a person tripping, or a complex joke.

- anger: can arise from something quite immediate like someone punching you, or a complex social situation where you are subtly being manipulated.

In many cases, what induces the emotion is a complex situation that involves abstract cognition. The physical response is primitive, and you don't notice the cognition because it is subconscious, but a lot may be going into the trigger for the emotion.
el_jay 8 hours ago
ML and physics share a belief in the power of their universal abstractions - all is dynamics in spaces at scales, all is models and data. The belief is justified because the abstractions work for a big array of problems, to a number of decimal places. Get good enough at solving problems with those universal abstractions and everything starts to look like a solvable problem, and it gets easy to lose epistemic humility. You can combine physics and ML to make large reusable orbital rockets that land themselves. Why shouldn’t you be able to solve any of the sometimes much tamer-looking problems they fail to? Even today there was an IEEE article about high failure rates in IT projects…
Miraste 10 hours ago
It is arrogant, but I see why it happens with brain-related fields specifically: the best scientific answer to most questions of intelligence and consciousness tends to be "we have no idea, but here's a bad heuristic."
jstummbillig 10 hours ago
It seems plausible that good AI researchers simply need to be fairly generalist in their thinking, at the cost of being less correct. Both neural networks and reinforcement learning may be crude but useful adaptations. A thought does not have to be correct. It just has to be useful.
dmix 10 hours ago
Ilya also said in 2022 that AI may already be "slightly conscious".
jb_rad 10 hours ago
I think smart people across all domains fall into the trap of being overconfident in their ability to reason outside their area of expertise. I admire those who don't, but alas, we are human.
AstroBen 9 hours ago
What's wrong with putting your current level of knowledge out there? Inevitably someone who knows more will correct you, or show you're wrong, and you've learnt something.

The only thing that would make me cringe is if he started arguing he's absolutely right against an expert in something he has limited experience in.

It's up to listeners not to weight his ideas too heavily if they stray too far from his specialty.
rafaelero 8 hours ago
The equivalence of emotions to reward functions seems pretty obvious to me. Emotions are what compel us to act in the environment.
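To make the analogy concrete, here's a toy sketch (purely illustrative, nothing from the interview): treat a scalar "felt" reward as the emotion-like signal and let it steer a trivial agent's choices:

    import random

    actions = ["approach", "avoid"]
    value = {a: 0.0 for a in actions}  # learned estimate of how each action "feels"

    def felt_reward(action):
        # Stand-in for the emotional response to an action's outcome.
        return 1.0 if action == "approach" else -1.0

    for _ in range(100):
        # Mostly repeat whatever has felt best so far, occasionally explore.
        a = random.choice(actions) if random.random() < 0.1 else max(actions, key=value.get)
        value[a] += 0.1 * (felt_reward(a) - value[a])  # running-average update

    print(value)  # "approach" ends up strongly preferred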
slashdave 10 hours ago
> It's something physicists are often accused of also.

Nah. Physics is hyper-specialized. Every good physicist respects specialists.
NalNezumi 9 hours ago
> There is an arrogance I have seen that is typical of ML (having worked in the field) that makes its members too comfortable treading into adjacent intellectual fields they should have more respect and reverence for.

I've not only noticed it but had to live with it a lot as a robotics guy interacting with ML folks, both in research and in tech startups. I've heard essentially the same reviews of ML practitioners in any research field that is "ML applied to X", with X being anything from medicine to social science. But honestly I see the same arrogance in software-world people too, and hence a lot here on HN.

My theory is that ML/CS is an entire field built around a made-for-human logic machine and what we can do with it. That is very different from any real (natural) science or engineering, where the system you interact with is natural law: hard, not made to be easy to understand, and not made for us, unlike programming. When you sit in a field where feedback is instant (debuggers/error messages) and you know deep down that the issues at hand are man-made, it gives a sense of control rarely afforded in any other technical field. I think your worldview gets bent by it.

CS folks being basically the 90s finance-bro yuppies of our time (making a lot of money for doing relatively little), plus a lack of social skills that makes it hard to distinguish arrogance from competence, probably affects this further. ML folks are just the newest iteration of CS folks.
stevenhuang 8 hours ago
It is not arrogance. It's awareness of the physical Church-Turing thesis. If it turns out everything is fundamentally informational, then the exact complexity (of emotion or even consciousness, which I'm sure is very complex) is irrelevant; it would still mean it's Turing-representable and thus computable.

It may very well turn out not to be the case, which on its own would be interesting, as that would suggest we live in a dualist reality.
mips_avatar 8 hours ago
I think the bigger problem is he refused to talk about what he's working on! I would love to hear his view on how we're going to move past evals and RL, but he flat out said it's proprietary and won't talk about it.