| ▲ | dan-robertson 15 hours ago |
| Why does being a top AI researcher so often come with this philosophical bent you describe? |
|
| ▲ | ladberg 15 hours ago | parent | next [-] |
| You are paying the smartest people in the world to think really, really hard, and it turns out they might also think really, really hard about not making the world a worse place.
| |
| ▲ | asddubs 15 hours ago | parent | next [-] | | it's not working | | | |
| ▲ | bdangubic 15 hours ago | parent | prev | next [-] | | Is this really the case though? How many of the smartest people do you really think fit this narrative?! I want to believe there are at least some, but I think they are a minority in this group… otherwise I think all these pretty much evil corporations would have an awfully difficult time attracting talent? maybe some do but… | | |
| ▲ | saagarjha 15 hours ago | parent [-] | | Most evil corporations have fairly normal jobs available. | | |
| ▲ | bdangubic 14 hours ago | parent [-] | | if you want to make the world a better place, as OP stated, perhaps you can get a normal job at a maybe less evil corp? | | |
| ▲ | saagarjha 14 hours ago | parent | next [-] | | Most companies are evil in some way, the question is how evil and how close you are to the evil. Most people will pick "not that evil but pays a lot". A few will take "pretty evil and pays more than a lot". Some will choose "less evil and pays poorly". (It's worth noting that there are a lot of jobs that are not at the Pareto frontier and are "more evil and pay worse" but social mobility etc. cause them to be selected anyway). | |
| ▲ | munificent 14 hours ago | parent | prev [-] | | When presented with a choice between: 1. Take a job making $$$$$$$ at a company making the world worse. 2. Take a job making $$$ at a company not making the world worse. Very few people have a personality such that they'll pick 2. | | |
| ▲ | 13 hours ago | parent | next [-] | | [deleted] | |
| ▲ | bdangubic 12 hours ago | parent | prev [-] | | exactly what I was asking OP; her/his comment sounded like people will pick the latter (I agree with you) |
|
| |
| ▲ | jasonfarnon 10 hours ago | parent | prev | next [-] | | Not really. 15-20 years ago, that same upper echelon of college/professional school graduates you're describing was going into finance. | |
| ▲ | watwut 15 hours ago | parent | prev [-] | | Except they do? They are certainly not making it a better place. Like, ok, it is money for a few companies and a salary, it is business and probably fun work. But it is absurd to claim it is "making the world a better place". | | |
| ▲ | metalcrow 9 hours ago | parent [-] | | I'm not sure you can provide an objective means (i.e. a way to show that it is absurd) of explaining how an AI researcher is making the world a worse place. It's going to come down to disagreeing about some axiom like "is ASI rapidly approaching" or "is AGI good to have", and there's no right answer to those.
|
| ▲ | mynameisash 15 hours ago | parent | prev | next [-] |
| I would think it's because of the staggering money they're making. According to Fortune[0]:
|
| > Altman said on an episode of Uncapped that Meta had been making “giant offers to a lot of people on our team,” some totaling “$100 million signing bonuses and more than that [in] compensation per year.”
|
| > Deedy Das, a VC at Menlo Ventures, previously told Fortune that he has heard from several people the Meta CEO has tried to recruit. “Zuck had phone calls with potential hires trying to convince them to join with a $2M/yr floor.”
|
| If you're making a minimum of $2M/year, or even 50x that, you can afford to live according to your values instead of checking them at the door.
|
| [0] https://archive.ph/lBIyY
| |
| ▲ | thereitgoes456 13 hours ago | parent [-] | | I see you're treating Sam Altman as some kind of trustworthy source. Might it be possible that he's making that up -- of course, nobody will ever call him on it! -- and exaggerating the numbers to make his company and team look really good and ethical for not accepting such lucrative offers, or perhaps to make them sour on Meta for not receiving $100M offers? |
|
| ▲ | tdb7893 14 hours ago | parent | prev | next [-] |
| My experience with researchers (though not in AI) is that they're a bunch of very opinionated nerds who are mostly motivated by love of a subject. I've also found that most people who think really deeply and care about what they do care more that their work is prosocial.
| |
| ▲ | Sl1mb0 14 hours ago | parent [-] | | > care more that their work is prosocial These takes are always so funny to me. The whole reason we even have the internet is because the US government needed a way for parties to be able to communicate in the event of nuclear fallout. The benefits that a technology provides are almost always secondary to its applications in warfare. Researchers can claim to care that their work is prosocial, and they may genuinely believe it; but let's not kid ourselves that that is actually the case. The development of technology is simply due to the reality of nations being in a constant arms race against one another. Even funnier is that researchers (people who are supposed to be really smart) either ignore or are blissfully unaware of this fact. When you take that into consideration, the pro-social argument falls on its face, and you're left with the reality that they do this to satiate their egos. | | |
| ▲ | compiler-guy 12 hours ago | parent | next [-] | | Although the RAND Corporation did contribute some ideas theoretically connected to nuclear survivability (packet switching in particular), all that work was pre-ARPANET and doesn't really motivate the design in that way. It was designed to handle partial breaks and disconnections, though. Wikipedia quotes Charles Herzfeld, ARPA Director at the time, as below, and has much more discussion as to why this belief is false. https://en.wikipedia.org/wiki/ARPANET ==== The ARPANET was not started to create a Command and Control System that would survive a nuclear attack, as many now claim. To build such a system was, clearly, a major military need, but it was not ARPA's mission to do this; in fact, we would have been severely criticized had we tried. Rather, the ARPANET came out of our frustration that there were only a limited number of large, powerful research computers in the country, and that many research investigators, who should have access to them, were geographically separated from them.[113] | |
| ▲ | tdb7893 13 hours ago | parent | prev [-] | | So researchers are going to be irrational and also often value other things more highly than prosociality, but that doesn't really refute my point that they value it more highly than the average population. Also, your example of a bad technology is something that allows people to still communicate in the event of nuclear war, and that seems good! Not all technology related to war is bad (like basic communication or medical technologies), and a huge amount of technology isn't for war. We've all worked in tech here; "The development of technology is simply due to the reality of nations being in a constant arms race against one another" just isn't true. I've at the very least developed new technologies meant to make rich assholes into slightly richer assholes. Technology is complex, and the motivations for it are equally so; they won't fit into some trite saying. | | |
| ▲ | Sl1mb0 5 hours ago | parent [-] | | I never claimed any technology is good or bad; you also seem to be in agreement with me that technology used in warfare _can_ have "good" applications (I mentioned that the benefits are secondary to their applications in war; that doesn't sound like me saying there are no benefits). Lastly, the only point I was trying to make is that the argument that researchers do these things for "pro-social" causes is kind of a facade; the macro environment that incentivizes technological development *is* mostly due to government investment. Sure, the individuals working on it may all have different motivations, but they wouldn't be able to do so without large sums of money. The CIA [1] literally has a venture capital firm dedicated to investing in the development of technology - do you really believe they are doing that to help people? - [1]: https://fortune.com/2025/07/29/in-q-tel-cia-venture-capital-...
|
| ▲ | cloverich 14 hours ago | parent | prev | next [-] |
| This isn't unique to top AI researchers. Top talent has a long history of being averse to authoritarianism/despotism, at least in part because, almost by definition, it must suppress truth. You can't build the future effectively with that approach.
|
| ▲ | wombatpm 15 hours ago | parent | prev | next [-] |
| Because it is not Macrodata Refinement and you can’t stop them thinking off the clock. |
|
| ▲ | janalsncm 13 hours ago | parent | prev | next [-] |
| Aside from the Maslow’s hierarchy of needs points others are making, I believe it has something to do with the history of AI research. There is a big overlap between the “rationalist” and “effective altruist” crowds and some AI research ideas. At a minimum they come from the same philosophy: define an objective, and find methods to optimize that objective. For AI that’s minimizing loss functions with better and better models of the data. For EA, that’s allocating money in ways they think are expectation-maximizing. Note this doesn’t apply to everyone. Some people just want to make money. |
|
| ▲ | derektank 15 hours ago | parent | prev | next [-] |
| Because a lot of them are academics that are doctors of philosophy |
|
| ▲ | refulgentis 15 hours ago | parent | prev | next [-] |
| Maybe you’re reading “philosophical bent” as “armchair philosopher”, as in they are dabbling in a field unrelated to their profession and letting it drive their work; “worldview” might have made it clearer?
| |
| ▲ | lo_zamoyski 14 hours ago | parent [-] | | Indeed. Philosophically, I have not been impressed by the more vocal people associated with the field. They may not be representative - I think most do it for the money and it being hip. “Worldview” is a better term, but people are generally blind to the worldview they’ve tacitly absorbed, including academics. |
|
| ▲ | hermanzegerman 15 hours ago | parent | prev [-] |
| Because they can afford it, they are very sought after. And smart people usually have moral convictions. I know for some people on this website it's hard to understand, but not everything in life is about $$$ |
| |
| ▲ | 0x3f 15 hours ago | parent | next [-] | | > And smart people usually have moral convictions. Are you sure you don't just like the moral convictions and so engage in trait bundling? Moral knowledge doesn't really exist. I mean you can have personal views on it, but the lack of falsifiability makes me suspect it wouldn't be well-correlated with intelligence. Smarter people can discuss more layered or chic moral theories as they relate to theoretical AI, maybe. | | |
| ▲ | lo_zamoyski 14 hours ago | parent [-] | | > Moral knowledge doesn't really exist. If that is the case, then why should you or anyone prefer to believe your claim that moral knowledge doesn’t exist over the contrary? | | |
| ▲ | 0x3f 14 hours ago | parent [-] | | Different kinds of claims, it's not self-referential | | |
| ▲ | lo_zamoyski 13 hours ago | parent [-] | | > Different kinds of claims How so? If I claim that one should prefer the claim "moral knowledge doesn't exist" over its contrary, then I am making a moral claim. That would make it self-refuting. There is no fact-value dichotomy. And one more thing... > the lack of falsifiability Is falsifiability falsifiable? If all credible claims must be falsifiable, then where does that leave us with the criterion of falsifiability (which is problematic even apart from this particular case, as anyone who has done any serious reading in the philosophy of science knows)?
|
| |
| ▲ | lelanthran 7 hours ago | parent | prev | next [-] | | > And smart people usually have moral convictions. Dumb people have moral convictions. Smart people see the nuance. | |
| ▲ | siva7 15 hours ago | parent | prev [-] | | I'm smart and you can buy my morals. So what? | | |
| ▲ | yoyohello13 14 hours ago | parent | next [-] | | True, many smart people will gladly (or even begrudgingly) do evil for money. That's why there is so much suffering in the world, because of people like you. | | |
| ▲ | 0x3f 14 hours ago | parent [-] | | Is ad tech and the like really causing so much suffering? The government work, mass surveillance, killing people, etc. doesn't actually pay that much, typically. | | |
| ▲ | yoyohello13 14 hours ago | parent [-] | | I think ad tech is probably the single most destructive technology of the new millennium. The shift toward "engagement at all costs" business strategies is basically the root cause of society's current political polarization. Engagement bait cultivates fear and rage in the populace to get clicks. We are now seeing the consequences of shoving ads that sow fear, anger, doubt, and inadequacy into people's faces 24/7. This doesn't even touch on the fact that mass surveillance is only possible because of the technologies forged by the ad tech industry. | | |
| ▲ | 0x3f 14 hours ago | parent [-] | | Well I'm not sure I entirely believe this myself, but it seems easy enough to argue that this is progress of a sort. The West assumes pure democracy as the final form of government that we are all convergently evolving towards. But if this form of government or society is not robust to the kinds of things you're talking about, should it not suffer the consequences and be adapted or flushed for our long-term betterment? It seems a bit like saying the French Revolution was the most destructive thing to happen in the history of France. Sure, in the short term. But it also paved the way for modern liberal democracy. | | |
| ▲ | yoyohello13 13 hours ago | parent [-] | | That’s fair enough. I wouldn’t say I’m happy about needing to live through interesting times, but if we make it out the other end maybe something better will come of it. |
|
| |
| ▲ | refulgentis 15 hours ago | parent | prev | next [-] | | So what, indeed (not sure what you mean) | |
| ▲ | hermanzegerman 15 hours ago | parent | prev [-] | | Those people get paid so much anyway that they don't have to compromise their morals. I guess that's not the case for you and me | | |
|
|