ctoth | 6 days ago
Who decides what technologies are too dangerous? You, apparently. AI isn't nukes - anyone can train a model at home. There's no centralized thing to restrict. So what's your actual ask? That nobody ever trains a model? That we collectively pretend transformers don't exist?

You're dressing up bog-standard tech panic as social responsibility. Same reaction to every new technology: "This tool might be misused so nobody should have it." If you can't see the connection between that and Harrison Bergeron's "some people excel so we must handicap everyone," then you've missed Vonnegut's entire point. You're not protecting the weak - you're enforcing mediocrity and calling it virtue.
ben_w | 6 days ago
> Who decides what technologies are too dangerous? You, apparently.

I see takes like this from time to time about everything. They didn't say that. As with all similar cases, they're allowed to argue that something is dangerous, and you're allowed to say it isn't. The people who decide are all of us, collectively, and when we're at our best we do so on the basis of the actual arguments.

> AI isn't nukes - anyone can train a model at home.

(1) They were using an extreme to illustrate the point. (2) Anyone can make a lot of things at home. I know two distinct ways to make a chemical weapon using only things I can find in a normal kitchen. That people can do a thing at home doesn't make the thing "not prohibited".
vouaobrasil | 6 days ago
> Who decides what technologies are too dangerous? You, apparently.

Again, a rather knee-jerk reply. I am opening up the discussion and putting out my opinion. I never said I should be God and arbiter, but I do think people in general should have a discussion about it, and general discussion starts with opinion.

> AI isn't nukes - anyone can train a model at home. There's no centralized thing to restrict. So what's your actual ask? That nobody ever trains a model? That we collectively pretend transformers don't exist?

It should be something to consider. We could stop it by spreading a social taboo around it, denigrating its use, etc. It's possible. Many non-techies already hate AI, and mob force is not out of the question.

> You're dressing up bog-standard tech panic as social responsibility. Same reaction to every new technology: "This tool might be misused so nobody should have it."

I don't have that reaction to every new technology personally. But I think we should ask the question of every new technology, especially ones that are already disrupting the labor market.

> If you can't see the connection between that and Harrison Bergeron's "some people excel so we must handicap everyone," then you've missed Vonnegut's entire point. You're not protecting the weak - you're enforcing mediocrity and calling it virtue.

What people call excellent and mediocre these days is often just the capacity to be economically over-ruthless, rather than to contribute any good to society. And we already have a wealth of ways for people to excel: intelligent individuals would face no shortage of outlets for excellence even if we eradicated AI. So your argument really doesn't hold.

Edit: my goal isn't to protect the weak. I'd rather have everyone protected, including the very intelligent who still want a place to use their intelligence on their own and not be forced to use AI to keep up.
binary132 | 6 days ago
Hyphenatic phrasing detected. Deploying LLM snoopers.