e1g 11 hours ago
The prime mover behind this project is Daniel Kokotajlo, an ex-OpenAI researcher who wrote up his previous set of predictions in 2021 [1]; much of that essay turned out to be nearly prophetic. Scott Alexander is a psychiatrist, but more relevant is that he has dedicated the last decade to thinking and writing about societal forces, which is useful when forecasting AI. The other contributors are professional AI researchers and forecasters.

[1] https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-...
|
amarcheschi 10 hours ago
I don't understand why someone who is not a researcher in that academic field (Scott and some of the other authors) should be taken into consideration. I don't care what he has dedicated himself to; I care what the scientific consensus is. I mean, there are other researchers - actual ones, in academia - complaining a lot about this article, such as Timnit Gebru. I know it's a repeat of my submissions from the last few days, but it's hard not to feel like these people are forming their own cult.

eagleislandsong 9 hours ago
Personally I think Scott Alexander is overrated. His writing style is extraordinarily verbose, which lends itself well to argumentative sleights of hand that make his ideas come across as much more substantive than they really are.

amarcheschi 9 hours ago
Verbose? Only that? That guy did a meta-review of ivermectin, among other things that would make anybody think it's a bad idea. But no, apparently he's so well versed that he can talk about AI and ivermectin all at once. I also wonder why he had to defend a medicine so heavily talked up by one side of the political spectrum... Then you read some extracts of the "outgroup" essay and you think "oh, I'm just at a cafe with a Nazi sympathizer" (/s, but not too much) [1]

[1] https://www.eruditorumpress.com/blog/the-beigeness-or-how-to-...

refulgentis 3 hours ago
This is so damn good, thanks for sharing. Over the last month I've gotten some really, really good links from HN that I never would have guessed existed, and they're exactly the intellectual argument I've been missing for some of my extreme distastes.* I've got to widen my reading beyond Twitter.

* Few things tell me more than when someone invokes "grey tribe" as an imaginary third group of people who, of course, think things through and reach the correct conclusions, unlike all those other people with their motivated thinking.

eagleislandsong 9 hours ago
I stopped reading him ~10 years ago, so I didn't keep up with what he wrote about ivermectin. Thanks for sharing that blog post. I think it illustrates very well what I meant by employing argumentative sleights of hand to hide hollow ideas.

amarcheschi 9 hours ago
And they call themselves rationalists, but they still believe low-quality studies about IQ (which, of course, find whites to have higher IQs than other ethnicities). The deeper you dig, the more it's the old classism, racism, ableism, and misogyny, dressed up in a shiny techbro coat. No surprise Musk and Thiel like them.
|
refulgentis 11 hours ago
Oh my. I had no idea until now; that was exactly the same flavor, and apparently this is no coincidence. I'm not sure it was prophetic. It was a good survey of the field, but the headline claim was... a plot of grade-schooler-to-PhD ability against year. I'm glad he got a paycheck from OpenAI at one point in time; I got one from Google at one point in time. Both of these projects are puffery, not scientific claims of anything - or claims of anything at all, other than "at timestamp N+1, AI will be better than at timestamp N, on an exponential curve." An utterly bog-standard, boring claim going back to 2016 AFAIK. Not the product of considered expertise. Not prophetic.
| |
amarcheschi 10 hours ago
Furthermore, there were so many predictions made by everyone - especially by people with a vested interest in getting VCs to pour money in - that something had to turn out true. Since the people on LessWrong like Bayesian statistics: the probability that someone says the right thing, given that a shitton of people are saying different things, is... not surprisingly, high.
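To put toy numbers on that, here's a minimal sketch of the arithmetic behind the quip, assuming (purely as a made-up illustration) n independent forecasters who each have only a small chance p of landing near the truth:

    # Toy model: if each of n independent forecasts has probability p of
    # aging well, the chance that *someone* ends up looking prophetic is
    # 1 - P(everyone misses) = 1 - (1 - p)**n.
    # Both p and n below are invented numbers, not estimates.
    def p_someone_right(p: float, n: int) -> float:
        return 1 - (1 - p) ** n

    for n in (10, 100, 500):
        pct = 100 * p_someone_right(0.02, n)
        print(f"{n} forecasters -> {pct:.0f}% chance someone looks prophetic")

Even with a 2% individual hit rate, a field of 500 forecasters makes a seeming prophet a near-certainty; in hindsight we only remember the one who got it right.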
|