| ▲ | ineedasername 7 hours ago |
| The company he worked at for nearly a quarter century has enabled and driven consumerist spending across all areas of the economy via behaviorally targeted, optimized ad delivery, driving resource and power consumption that exceeds by orders of magnitude the projected data-center increases of the coming years. This level of vitriol seems both misdirected and practically obtuse in its lack of awareness of the part his work has played in far, far, far more expansive resource expenditure in service of work far less promising for overall advancement: ad tech and the algorithmic exploitation of human psychology for prolonged media engagement. |
|
| ▲ | ineedasername 6 hours ago | parent | next [-] |
| To expand on my comment wrt "promising for overall advancement": My daughter, in her math class: her teacher- I'll reserve overall judgement on their teaching: she may be perfectly adequate as a teacher for other students, which is part of my point- simply doesn't teach in the same sense other teachers do: present the topic, leave the details of "figuring out how to apply the methods" to the students. That doesn't work for my daughter, who has never done less than excellent in math previously. She realized she could ask ChatGPT (we monitor usage) for any way of explaining things that "simply worked" for how she engages with explanations. Math has never been this easy for her, and her internalization of the material is approaching a near-intuitive understanding. Now consider: the above process is available and cheap to every person in the world with a web browser (we don't need to pay for her to have a Plus account). If/when ChatGPT starts showing ridiculous intrusive ads, a simple Gemma 3 1B model will do nearly as good a job. This is faster and easier and available in more languages than anything else, ever, with respect to customization tailored to the individual user simply by talking to the model. I don't care how many pointless messages get sent. This is more valuable than any single thing Google has done before, and I am grateful to Rob Pike for the part his work has played in bringing it about. |
| |
| ▲ | jwr 6 hours ago | parent [-] | | Seconded — "AI" is a great teaching resource. All the bigger models are great at explaining stuff and being good tutors, I'd say easily up to the second year of graduate studies. I use them regularly when working with my kid, and I'm trying to teach them to use the technology, because it is truly like a bicycle for the mind. | | |
|
|
| ▲ | CerryuDu 6 hours ago | parent | prev | next [-] |
| Don't be ridiculous. Google has been doing many things, some of them even nearly good. The super talented/prolific/capable have always gravitated to powerful maecenases. (This applies to Haydn and Händel, too.) If you uncompromisingly filter potential employers by "purely a blessing for society", you'll never find employment that is both gainful and a match for your exceptional talents. Pike didn't make a deal with the devil any more than Leslie Lamport or Simon Peyton Jones did (each of whom has worked for 20+ years at Microsoft, and has advanced the field immensely). As IT workers, we all have to prostitute ourselves to some extent. But there is a difference between Google, which is arguably a mixed bag, and the AI companies, which are unquestionably cancer. |
| |
| ▲ | arendtio 6 hours ago | parent | next [-] | | I am not so sure about 'the mixed bag' vs 'unquestionably cancer', but I think the problem is that he is complaining while working for such a company. | | |
| ▲ | eeeficus 6 hours ago | parent | next [-] | | Not a problem at all. I'm not sure why you feel the need to focus on all the uninteresting parts. The interesting parts are what he said and whether or not those things are true. Not sure why who said it matters more than what was said, especially since dwelling on that doesn't add much to the original discussion… it just misdirects attention without a clear indication of the motive! |
| ▲ | CerryuDu 6 hours ago | parent | prev [-] | | Others in the thread seem to be saying that he retired (sort of) a few years ago. | | |
| ▲ | arendtio 6 hours ago | parent [-] | | Given his age, that sounds reasonable. | | |
| ▲ | GeorgeTirebiter 6 hours ago | parent [-] | | Are you saying that "age" is somehow a reason to retire? Most professionals I know who are able continue to work as they age, perhaps with a somewhat reduced work schedule. There's nothing I know of that keeps the mind sharp like the need to solve Real Problems. Figuring out which golf course to try, or which TV channel to choose -- those don't do much to reduce cognitive decline. | | |
|
|
| |
| ▲ | iepathos 5 hours ago | parent | prev | next [-] | | > As IT workers, we all have to prostitute ourselves to some extent. No, we really don't. There are plenty of places to work that aren't morally compromised - non-profits, open source foundations, education, healthcare tech, small companies solving real problems. The "we all have to" framing is a convenient way to avoid examining your own choices. And it's telling that this framing always seems to appear when someone is defending their own employer. You've drawn a clear moral line between Google ("mixed bag") and AI companies ("unquestionably cancer") - so you clearly believe these distinctions matter even though Google itself is an AI company. | | |
| ▲ | CerryuDu 5 hours ago | parent [-] | | > non-profits I think those are pretty problematic. They can't pay well (no profits...), and/or they may be politically motivated such that working for them would mean a worse compromise. > open source foundations Those dreams end. (Speaking from experience.) > education, healthcare tech Not self-sustaining. These sectors are not self-sustaining anywhere, and therefore are highly tied to politics. > small companies solving real problems I've tried small companies. Not for me. In my experience, they lack internal cohesion and resources for one associate to effectively support another. > The "we all have to" framing is a convenient way to avoid examining your own choices. This is a great point to make in general (I take it very seriously), but it does not apply to me specifically. I've examined all the way to Mars and back. > And it's telling that this framing always seems to appear when someone is defending their own employer. (I may be misunderstanding you, but in any case: I've never worked for Google, and I don't have great feelings for them.) > You've drawn a clear moral line between Google ("mixed bag") and AI companies ("unquestionably cancer") I did! > so you clearly believe these distinctions matter even though Google itself is an AI company Yes, I do believe that. Google has created Docs, Drive, Mail, Search, Maps, Project Zero. It's not all terribly bad from them, there is some "only moderately bad", and even morsels of "borderline good". | | |
| ▲ | iepathos 4 hours ago | parent [-] | | Thanks for the thoughtful reply. The objections to non-profits, OSFs, education, healthcare, and small companies all boil down to: they don't pay enough or they're inconvenient. Those are valid personal reasons, but not moral justifications. You decided you wanted the money big tech delivers and are willing to exchange ethics for that. That's fine, but own it. It's not some inevitable prostitution everyone must do. Plenty of people make the other choice. The Google/AI distinction still doesn't hold. Anthropic and OpenAI also created products with clear utility. If Google gets "mixed bag" status because of Docs and Maps (products that exist largely just to feed their ad machine), why is AI "unquestionable cancer"? You're claiming Google's useful products excuse their harms, but AI companies' useful products don't. That's not a principled line, it's just where you've personally decided to draw it. | | |
| ▲ | CerryuDu 2 hours ago | parent [-] | | > The objections to non-profits, OSFs, education, healthcare, and small companies all boil down to: they don't pay enough or they're inconvenient. Those are valid personal reasons, but not moral justifications. You decided you wanted the money big tech delivers and are willing to exchange ethics for that. That's fine, but own it. I don't perceive it that way. In other words, I don't think I've had a choice there. Once you consider other folks that you are responsible for, and once you consider your own mental health / will to live, because those very much play into your availability to others (and because those other possible workplaces do impact mental health! I've tried some of them!), then "free choice of employer" inevitably emerges as illusory. It's way beyond mere "inconvenience". It absolutely ties into morals, and meaning of one's life. The universe is not responsible for providing me with employment that ensures all of: (a) financial safety/stability, (b) self-realization, (c) ethics. I'm responsible for searching the market for acceptable options, and shockingly, none seem to satisfy all three anymore. It might surprise you, but the trend for me has been easing up on both (a) and (c) (no mistake there), in order to gain territory on (b). It turns out that my mental health, my motivation to live and work are the most important resources for myself and for those around me. The fact has been a hard lesson that I've needed to trade not only money, but also a pinch of ethics, in order to find my place again. This is what I mean by "inevitable prostitution to an extent". It means you give up something unquestionably important for something even more important. And you're never unaware of it, you can't really find peace with it, but you've tried the opposite tradeoffs, and they are much worse. For example, if I tried to do something about healthcare or education in my country, that might easily max out the (b) and (c) dimensions simultaneously, but it would destroy my ability to sustain my family. (It's not about "big tech money" vs. "honest pay", but "middle-class income" vs. poverty.) And that question entirely falls into "morality": it's responsibility for others. > Anthropic and OpenAI also created products with clear utility. Extremely constrained utility. (I realize many people find their stuff useful. To me, they "improve" upon the wrong things, and worsen the actual bottlenecks.) > You're claiming Google's useful products excuse their harms, (mitigate, not excuse) > but AI companies' useful products don't. That's not a principled line, it's just where you've personally decided to draw it. First, it's obviously a value judgment! We're not talking theoretical principles here. It's the direct, rubber-meets-the-road impact I'm interested in. Second, Google is multi-dimensional. Some of their activity is inexcusably bad. Some of it is excusable, even "neat". I hate most of their stuff, but I can't deny that people I care about have benefited from some of their products. So, all Google does cannot be distilled into a single scalar. At the same time, pure AI companies are one-dimensional, and I assign them a pretty large magnitude negative value. |
|
|
| |
| ▲ | hnhn34 3 hours ago | parent | prev | next [-] | | > But there is a difference between Google, which is arguably a mixed bag, and the AI companies, which are unquestionably cancer Google's DeepMind has been at the forefront of AI research for the past 11+ years. Even before that, Google Brain was making incredible contributions to the field since 2011, only two years after the release of Go. OpenAI was founded in response to Google's AI dominance. The transformer architecture is a Google invention. It's not an exaggeration to claim Google is one of the main contributors to the insanely fast-paced advancements of LLMs. With all due respect, you need some insane mental gymnastics to claim AI companies are "unquestionably cancer" while an adtech/analytics borderline monopoly giant is merely a "mixed bag". | | |
| ▲ | CerryuDu 3 hours ago | parent [-] | | > you need some insane mental gymnastics Perhaps. I dislike google (have disliked it for many years with varying intensity), but they have done stuff where I've been compelled to say "neat". Hence "mixed bag". This "new breed of purely AI companies" -- if this term is acceptable -- has only ever elicited burning hatred from me. They easily surpass the "usual evils" of surveillance capitalism etc. They deceive humanity at a much deeper level. I don't necessarily blame LLMs as a technology. But how they are trained and made available is not only irresponsible -- it's the pinnacle of calculated evil. I do think their evil exceeds the traditional evils of Google, Facebook, etc. |
| |
| ▲ | ignoramous 6 hours ago | parent | prev | next [-] | | > Don't be ridiculous. OP says it is jarring to them that Pike is as concerned with GenAI as he is, but didn't spare a thought for Google's other (in their opinion, bigger) misgivings, for well over a decade. Doesn't sound ridiculous to me. That said, I get that everyone's socio-political views change and differ at different points in time, especially depending on their personal circumstances including family and wealth. | | |
| ▲ | CerryuDu 5 hours ago | parent [-] | | > didn't spare a thought for Google's other (in their opinion, bigger) misgivings, for well over a decade That's the main disagreement, I believe. I'm definitely not an indiscriminate fan of Google. I think Google has done some good, too, and the net output is "mostly bad, but with mitigating factors". I can't say the same about purely AI companies. |
| |
| ▲ | mempko 6 hours ago | parent | prev | next [-] | | Google published a post gloating on how much consumerism it increased. | |
| ▲ | doctorpangloss 6 hours ago | parent | prev [-] | | Okay, but the discourse Rob Pike is engaging in is, “all parts of an experience are valid,” so you see how he’s legitimately in a “hypocrisy pickle” | | |
| ▲ | CerryuDu 6 hours ago | parent [-] | | Can you elaborate on the "all parts of an experience are valid" part? I may be missing something. Thanks. | | |
|
|
|
| ▲ | kmoser 6 hours ago | parent | prev | next [-] |
| You're not wrong about the effects and magnitude of targeted ads, but that doesn't preclude Pike from criticizing what he believes to be a different type of evil. |
| |
| ▲ | ineedasername 5 hours ago | parent [-] | | Sure, but it also doesn't preclude him from being wrong, or at least incomplete as expressed, about his work having the exact same resource-consuming impact when used for ad tech, or the additional impact of toxic social media. |
|
|
| ▲ | xuhu 6 hours ago | parent | prev | next [-] |
| He worked on: Go, the Sawzall language for processing logs, and distributed systems. Go and Sawzall are usable and used outside Google. Are those distributed systems valuable primarily to Google, or are they related to Kubernetes et cetera? |
| |
| ▲ | prepend 6 hours ago | parent [-] | | He was paid by Google with money made through Google’s shady practices. It’s like saying that it’s cool because you worked on some non-evil parts of a terrible company. I don’t think it’s right to work for an unethical company and then complain about others being unethical. I mean, of course you can, but words are hollow. |
|
|
| ▲ | gaws 6 hours ago | parent | prev | next [-] |
| He got his bag. He doesn't care anymore. |
|
| ▲ | overgard 6 hours ago | parent | prev | next [-] |
| Google is huge. Some of the things it does are great. Some of the things it does are terrible. I don't think working for them has to mean that you 100% agree with everything they do. |
| |
| ▲ | kmijyiyxfbklao 6 hours ago | parent [-] | If it's "Who is worse, Google or LLMs?", I think I'll say Google is worse. The biggest issue I see with LLMs is needing to pay a subscription to tech companies to be able to use them. | | |
| ▲ | ineedasername 5 hours ago | parent [-] | You don't even need to do that- pay a subscription, I mean. A Gemma 3 4B model will run on near-potato hardware at usable speeds and, for many purposes, achieves performance on par with GPT-3.5 Turbo or better, in tasks far more beneficial than ad tech and min/max'ing media engagement. Or use the free versions of many SOTA web LLMs: all free, to the world, if you have a web browser. |
|
|
|
| ▲ | 7 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | tonymet 6 hours ago | parent | prev | next [-] |
| What are you implying? That he's a hypocrite? So he's not allowed to have opinions? If anything he's in a better position than a random person. And Google is a massive enterprise, with hundreds of divisions. I imagine Pike and his peers share your reluctance. |
| |
| ▲ | prepend 6 hours ago | parent [-] | | “I collected tons of money from Hitler and think Stalin is, like, super bad.” [sips Champagne] Of course, the scale is different but the sentiment is why I roll my eyes at these hypocrites. If you want to make ethical statements then you have to be pretty pure. | | |
| ▲ | tonymet 4 hours ago | parent [-] | | Are any of us better? We’re all sellouts here, making money off sleazy apps and products. I’m sorry but comparing Google to Stalin or Hitler makes me completely dismiss your opinion. It’s a middle school point of view. |
|
|
|
| ▲ | tensor 7 hours ago | parent | prev | next [-] |
| I agree completely. Ads have driven the surveillance state and enshittification. They've allowed for optimized propaganda delivery, which in turn has led to true horrors and has helped undo a century of societal progress. |
| |
| ▲ | tavavex 6 hours ago | parent | next [-] | | This is a tangent, but ads have become a genuine cancer on our world, and it's sad to see how few people really think about it. While Rob Pike's involvement in this seems to be very minimal, the fact that Google is an advertising company through-and-through does weaken the words of such a powerful figure, at least a little bit. If I had a choice between deleting all advertising in the world, or deleting all genAI that the author hates, I would go for advertising every single time. Our entire world is owned by ads now, with digital and physical garbage polluting the internet and every open space in the real world around us. The marketing is mind-numbing, yet persuasive and well-calculated, a result of psychologists coming up with the best ways to abuse a mind into just buying the product over the course of a century. A total ban on commercial advertising would undo some of the damage done to the internet, reduce pointless waste, lengthen product lifecycles, improve competition, temper unsustainable hype, cripple FOMO, make deceptive strategies nonviable. And all of that is why it will never be done. | | |
| ▲ | blibble 6 hours ago | parent [-] | | > If I had a choice between deleting all advertising in the world, or deleting all genAI that the author hates, I would go for advertising every single time. But wait, in a few months, "AI" will be funded entirely by advertising too! |
| |
| ▲ | xoxolian 5 hours ago | parent | prev [-] | | Yeah, I've built ad systems. Sometimes I'd give a presentation to some other department of programmers who worked on content, and someone would ask the tense question: Not to be rude, but aren't ads bad? And I'd promptly say: Ads are propaganda, and a security risk because they execute 3rd-party code on your machine. All of us run adblockers. There was no need for me to point out that ads are also their revenue generator. They just had a burning moral question before they proceeded to interop with the propaganda delivery system, I guess. It would lead to unnecessary cognitive dissonance to convince myself of some dumb ideology to make me feel better about wasting so much of my one (1) known life, so I just take the hit and stay honest about it. The moral question is what I do about it: whether I intervene effectively to help dismantle such systems and replace them with something better. |
|
|
| ▲ | 7 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | montag 6 hours ago | parent | prev [-] |
| I disagree completely. |