embedding-shape 11 hours ago

> I think it’s more important that we stop collectively pretending that we don’t understand how awful all of this is

Lord forbid people disagree with you. I know Drew's vibe is always "I'm right because I'm the only one with the correct opinions", but it does get tiring after a while.

Not to say AI isn't introducing huge drawbacks, or that it's exactly worry-free, but why not change your frame of mind from "Why don't others understand how awful it is?!" to "People are seeing something I'm not; what am I missing?" so your article could actually contain something other than personal, emotional rants?

MSFT_Edging 10 hours ago | parent | next [-]

> "People are seeing something I'm not, what am I missing?"

I've seen people celebrate horrors beyond my comprehension. Cheer the deaths of innocent people because it may inch some abstract national goal closer to a similarly abstract measurement. Insist that lives in one place are worth less than lives in another.

Should I ask "what am I missing"?

I don't think so. Sometimes you draw a line on moral or ethical grounds, and some of those lines should never be allowed to be fluid. It will always be wrong to bomb a school of children, just like (for Drew and me) it will be wrong to rip the livelihood from under millions of people's feet for shareholder value. It will be wrong to ignore damaging consequences to the environment. It will be wrong to insist that a low-quality imitation should ever hold the same value as the original idea.

nh23423fefe 8 hours ago | parent | next [-]

This unpersuasive moralizing demonstrates the blind spot GP is talking about. You invent a moral/ethical line because you can't find a good one.

Using GPT is like bombing schools?

tovej 6 hours ago | parent [-]

I think they may be referring to the story about how the US bombed an Iranian school on the first day of the war due to data sourced from an LLM.

skeledrew 8 hours ago | parent | prev | next [-]

> rip the livelyhood from under millions of people's feet

I have never gotten this. How is livelihood being "ripped away"? There is enormous capability made available to anyone and everyone who wants to take hold of and do something with it. Just as it's on each individual to go through the process and pains of landing a job (or building a business, etc), it's also on each individual to keep up with changes that may affect their livelihood. If they want to keep it.

dpatterbee 10 hours ago | parent | prev | next [-]

I think the point is that regardless of what benefits LLMs are bringing to the table, there is a list of downsides that Drew views as non-negotiable. It doesn't matter what other people are seeing, because he sees a fundamental issue underlying the entire premise.

It does seem like most people completely ignore the obvious harms caused by AI when talking about using LLMs for programming, as though somehow it is disconnected from the other deployments of the technology.

MisterTea 8 hours ago | parent | next [-]

> It does seem like most people completely ignore the obvious harms caused by AI when talking about using LLMs for programming, as though somehow it is disconnected from the other deployments of the technology.

I feel that the people who completely ignore the harms are the ones who need and/or benefit from it and will do whatever it takes to justify their use of it. The rest are people who understand the harms and minimize their interaction with it, followed by the blissfully ignorant.

I was just talking to a content creator who uses AI at work and social media platforms to display her personal projects. She is fully aware of the harm social media platforms bring, while acknowledging they empower her to present her work to the world without gatekeeping. AI allows her to power through boring office tasks, but she loathes its use in the art world and its replacing people in general.

bigbadfeline 9 hours ago | parent | prev | next [-]

> It does seem like most people completely ignore the obvious harms caused by AI when talking about using LLMs for programming, as though somehow it is disconnected from the other deployments of the technology.

I would insist that the deployments of a technology should be disconnected from the technology itself - I criticize AI too, and I get a lot of downvotes for it, but I try to separate the science of AI from its economics and politics.

The harms of AI and other technologies come from two sources: 1. capital-destroying market bubbles, and 2. deployments motivated and enabled by political and moral corruption.

Both of these are in turn enabled and sustained by legislation. That is, we have to talk politics, not technology and not AI. AI has great potential, both for improving human life and for making it a lot worse, and which way it goes depends entirely on politics.

If we fail to cleanly separate these issues and keep moralizing about technology, we will be chasing red herrings and bumping heads in the dark while the tech is being deployed against us.

embedding-shape 10 hours ago | parent | prev [-]

> there are a list of downsides that Drew views as non-negotiables

Which is all fine and dandy. But why play the "You simply don't understand it as well as I do" card rather than something more investigative and curious? It just fuels the whole "holier than thou" vibe Drew seems to cultivate more every day.

It's a disagreement of opinion, not a case of "I'm the only smart person who can see this", which is why it kind of sours the entire piece.

chromacity 10 hours ago | parent | next [-]

> Which is all fine and dandy. But why play the "You simply don't understand it as well as I do"

I'll say this from the perspective of a person who publishes content online: because people's revealed preference is for content written this way. You can spend weeks polishing thoughtful, original content that will get few clicks, or you can crank out throwaway op-eds about AI and get thousands of likes and upvotes from people who just wanted to hear their own beliefs explained back to them.

My stuff appeared on HN a couple of times over the years and the less effort I put into it, the better it fared. The temptation to change your writing style and to offer increasingly more provocative and shallow opinions is difficult to resist.

My point is probably this: if you want to see better stuff, I think you gotta stop engaging with articles like this. Patrol /newest and upvote cool in-depth stuff.

lelanthran 7 hours ago | parent | prev | next [-]

> But why play the "You simply don't understand it as well as I do" rather than something more investigative and curious?

That's not the tone of the article; he uses the word "pretending". That tells me that he thinks that people do understand, but they don't want to admit that they understand because that would reveal their values.

dpatterbee 10 hours ago | parent | prev [-]

In fairness, he pretty explicitly states that he thinks people do understand it, but are pretending not to in order to wash their hands of the consequences. I'm definitely not reading it the same way you are.

mikkupikku 9 hours ago | parent [-]

It's a difference in values, not understanding. I understand that AI burns tons of power, and I don't care. Drew understands it just as well as I do, but he does care. The difference is in what people value, and relative to what.

sarchertech 11 hours ago | parent | prev | next [-]

Well I think he’s taken a moral stance against AI, so it doesn’t matter to him if other people find it useful.

embedding-shape 10 hours ago | parent [-]

Right, a difference of opinion, which is fine, OK, and even good. But painting it as "Obviously the rest of you aren't smart enough to understand" instead of "Others disagree" seems really strange (although in character).

tovej 6 hours ago | parent [-]

Don't put things in quotes if they're not actual quotes. Especially not if you're mischaracterizing the article.

ChrisLTD 10 hours ago | parent | prev | next [-]

Why should he say something he doesn't believe? We don't have to agree with him.

embedding-shape 10 hours ago | parent [-]

> I think it’s more important that we stop collectively pretending that we don’t understand how awful all of this is

Would be very different from say:

> I'd like to understand people who don't see it the same way as me, that it's mostly awful and not good.

Or similar, rather than "I'm right, everyone else doesn't understand it properly". Very HN-esque, but oh so tiring after hundreds of articles in exactly the same vein from the same author.

tovej 6 hours ago | parent [-]

Not everything needs to be looked at from "both sides".

Drew is correct: the impact of generative "AI" on society is overwhelmingly negative.

grayhatter 10 hours ago | parent | prev | next [-]

> Lord forbid if people disagree with you.

This is too shallow a take, especially when your very next point objects to the default reference frame he uses, which you happen to disagree with. Lord forbid Drew disagree about, I think, priorities and values?

> why not change your frame of mind from "Why don't others understand how awful it is?!" to "People are seeing something I'm not, what am I missing?"

It's the same question. I sympathize with both questions. I constantly feel both frustrated and broken by how few people care about quality and about participating fairly. I try very hard to find the positive aspects "everyone" claims LLM codegen provides. I'm looking hard and can't find them. It's painfully average, often worse when it gets lost. It doesn't and can't help me, only get in the way. What am I doing wrong? Why is everyone missing something I see as obvious? But again, both could easily be true from both frames you suggest. "Why can't people identify this as trash?" could very easily be followed by "What am I missing from the equation?" and be the same thought/idea.

> so your article could actually contain something else than personal and emotions rants?

I mean, it's titled "A Eulogy for Vim". That seems to be what it says on the tin, no?

lelanthran 7 hours ago | parent | prev [-]

> why not change your frame of mind from "Why don't others understand how awful it is?!" to "People are seeing something I'm not, what am I missing?"

Ironically, you are not considering that he might be seeing something you are not, and you are not asking "What am I missing?"

See, that sword cuts both ways.