vouaobrasil 6 days ago

A rather shallow reply, because I never implied that there should be enforced equality. For some reason, I constantly get these "false dichotomy" replies here, where the dichotomy is strongly exaggerated. Maybe it's due to the computer scientist's constant use of binary, who knows.

Regardless, I only advocate for restricting technologies that are too dangerous, much in the same way that atomic weapons are highly restricted but people can still own knives and even use guns in some circumstances.

I have nothing against the most intelligent using their intelligence wisely and doing more than the less intelligent, assuming wise use is even possible. In the case of AI, I submit that it is not.

usernamed7 6 days ago | parent | next [-]

Why are you putting down a well reasoned reply as being shallow? Isn't that... shallow? Is it because you don't want people to disagree with you or point out flaws in your arguments? Because you seem to take an absolutist black/white approach and disregard any sense of nuanced approach.

collingreen 6 days ago | parent | next [-]

I don't have a dog in this fight, but I think the counter-argument was a terrible straw man. OP said it's too dangerous to put in general hands. Treating that like "protect the incompetent from themselves and punish everyone in the process" badly twists the point. A closer oversimplification is "protect the public from the incompetent".

In my mind, a direct, good-faith rebuttal would address the actual points: either disagree that the worst usage would harm the public, or argue (as the OP tees up) that risking the public is one of the worthy tradeoffs of freedom.

tptacek 6 days ago | parent [-]

The original post concluded with the sentence "This is why I am 100% against AI – no compromise." Not "AI is too dangerous for general hands".

vouaobrasil 5 days ago | parent [-]

My arguments are nuanced, but there's nothing saying a final position has to be. Nuanced arguments can lead to a true unilateral position.

vouaobrasil 6 days ago | parent | prev [-]

I do want people to argue or point out flaws. But presenting a false dichotomy is not a well-reasoned reply.

Karrot_Kream 6 days ago | parent | next [-]

The rebuttal is very simple. I'll try to restate it clearly and with less emotional charge, even if your original opinion didn't appear to me to go through the same process:

"While some may use the tool irresponsibly, others will not, and therefore there's no need to restrict the tool. Society shouldn't handicap the majority to accommodate the minority."

You can choose not to engage with this critique, but calling it a "false dichotomy" is poor form. If anything, it makes me feel like you're not willing to entertain disagreement. You say you want to start a discussion by expressing your opinion, but I don't see a discussion here. I see you expressing your opinion and dismissing criticism of that opinion as false.

pyman 6 days ago | parent | prev [-]

> even if a minority can use them properly.

Most students today are AI fluent. Most teachers aren't. Students treat AI like Google Search, StackOverflow, GitHub, and every other dev tool.

mmcclure 6 days ago | parent [-]

Some students treat AI like those things. Others are effectively a meat proxy for AI. Both ends of the spectrum would call themselves "AI fluent."

I don't think the existence of the latter should mean we restrict access to AI for everyone, but I also don't think it's helpful to pretend AI is just this generation's TI-83.

jononor 6 days ago | parent | prev | next [-]

Why is "AI" (current LLM based systems) a danger on the level comparable to nukes? Not saying that it is not, just would like to understand your reasoning.

ctoth 6 days ago | parent | prev [-]

Who decides what technologies are too dangerous? You, apparently.

AI isn't nukes - anyone can train a model at home. There's no centralized thing to restrict. So what's your actual ask? That nobody ever trains a model? That we collectively pretend transformers don't exist?

You're dressing up bog-standard tech panic as social responsibility. Same reaction to every new technology: "This tool might be misused so nobody should have it."

If you can't see the connection between that and Harrison Bergeron's "some people excel so we must handicap everyone," then you've missed Vonnegut's entire point. You're not protecting the weak - you're enforcing mediocrity and calling it virtue.

ben_w 6 days ago | parent | next [-]

> Who decides what technologies are too dangerous? You, apparently.

I see takes like this from time to time about everything.

They didn't say that.

As with all similar cases, they're allowed to argue that something is dangerous, and you're allowed to say it isn't. Who decides is all of us collectively, and at our best we decide on the basis of the actual arguments.

> AI isn't nukes - anyone can train a model at home.

(1) They were using an extreme to illustrate the point.

(2) Anyone can make a lot of things at home. I know two distinct ways to make a chemical weapon using only things found in a normal kitchen. That people can do a thing at home doesn't mean the thing can't be prohibited.

vouaobrasil 6 days ago | parent | prev | next [-]

> Who decides what technologies are too dangerous? You, apparently.

Again, a rather knee-jerk reply. I am opening up the discussion and putting out my opinion. I never said I should be God and arbiter, but I do think people in general should discuss it, and general discussion starts with opinion.

> AI isn't nukes - anyone can train a model at home. There's no centralized thing to restrict. So what's your actual ask? That nobody ever trains a model? That we collectively pretend transformers don't exist?

It should be something to consider. We could stop it by spreading a social taboo around it, denigrating its use, and so on. It's possible. Many non-techies already hate AI, and mob force is not out of the question.

> You're dressing up bog-standard tech panic as social responsibility. Same reaction to every new technology: "This tool might be misused so nobody should have it."

I don't have that reaction to every new technology personally. But I think we should ask the question of every new technology, and especially ones that are already disrupting the labor market.

> If you can't see the connection between that and Harrison Bergeron's "some people excel so we must handicap everyone," then you've missed Vonnegut's entire point. You're not protecting the weak - you're enforcing mediocrity and calling it virtue.

What people call excellent and mediocre these days is often just the capacity to be economically ruthless, rather than to contribute any good to society. We already have a wealth of ways for people to excel, even if we eradicated AI. So there's no limitation preventing intelligent individuals from being excellent, even without AI, and your argument doesn't hold.

Edit: my goal isn't to protect the weak. I'd rather have everyone protected, including the very intelligent who still want a place to use their intelligence on their own and not be forced to use AI to keep up.

binary132 6 days ago | parent | prev [-]

Hyphenatic phrasing detected. Deploying LLM snoopers.