| ▲ | vouaobrasil 6 days ago |
| > RC is a place for rigor. You should strive to be more rigorous, not less, when using AI-powered tools to learn, though exactly what you need to be rigorous about is likely different when using them. This brings up an important point about a LOT of tools, which many people don't talk about: namely, with a tool as powerful as AI, there will always be a minority of people with a healthy and thoughtful attitude towards its use, but a majority who use it improperly, because its power is too seductive and human beings on average are lazy. Therefore, even if you "strive to be more rigorous", you WILL be a minority helping to drive a technology that is just too powerful to make any positive impact on the majority. The majority will suffer, because they need an environment where they are forced not to cheat in order to learn and have basic competence, which I'd argue is far more crucial to a society than the top few having a lot of competence. The individualistic will say that this is an inevitable price for freedom, but in practice, I think that's misguided. Universities, for example, NEED to monitor the exam room, because otherwise cheating would be rampant, even if there is a decent minority of students who would NOT cheat, simply because they want to maximize their learning. With tools as powerful as AI, we need to think beyond our individualistic tendencies. The disciplined will often tout their balanced philosophy as justification for their tool use, as this Recurse post does here, but what they forget is that promoting such a philosophy lends more legitimacy to the use of AI, which the general world is not capable of handling. In a fragile world, we must take responsibility beyond ourselves, and not promote dangerous tools even if a minority can use them properly. This is why I am 100% against AI – no compromise. |
|
| ▲ | ctoth 6 days ago | parent | next [-] |
Wait, you're literally advocating for handicapping everyone because some people can't handle the tools as well as others. "The disciplined minority can use AI well, but the lazy majority can't, so nobody gets to use it." I feel like I read this somewhere. Maybe a short story? Should we ban calculators because some students become dependent on them? Ban the internet because people use it to watch cat videos instead of learning? You've dressed up "hold everyone back to protect the incompetent" as social responsibility. I never actually thought I would find someone who read Harrison Bergeron and said "you know what? let's do that!"
But the Internet truly is a vast and terrifying place. |
| |
| ▲ | vouaobrasil 6 days ago | parent | next [-] | | A rather shallow reply, because I never implied that there should be enforced equality. For some reason, I get these sorts of "false dichotomy" replies constantly here, where the dichotomy is strongly exaggerated. Maybe it's due to the computer scientist's constant use of binary, who knows. Regardless, I only advocate for restricting technologies that are too dangerous, much in the same way that atomic weapons are highly restricted but people can still own knives and even use guns in some circumstances. I have nothing against the most intelligent using their intelligence wisely and doing more than the less intelligent, if wise use is even possible. In the case of AI, I submit that it is not. | | |
| ▲ | usernamed7 6 days ago | parent | next [-] | | Why are you putting down a well-reasoned reply as being shallow? Isn't that... shallow? Is it because you don't want people to disagree with you or point out flaws in your arguments? Because you seem to take an absolutist black/white approach and disregard any sense of nuance. | | |
| ▲ | collingreen 6 days ago | parent | next [-] | | I don't have a dog in this fight, but I think the counterargument was a terrible straw man. Op said it's too dangerous to put in general hands. Treating that like "protect the incompetent from themselves and punish everyone in the process" is badly twisting the point. A closer oversimplification is "protect the public from the incompetents". In my mind, a direct, good-faith rebuttal would address the actual points - either disagree that the worst usage would lead to harm of the public, or make the point (like the op tees up) that risking the public is one of the worthy tradeoffs of freedom. | | |
| ▲ | tptacek 6 days ago | parent [-] | | The original post concluded with the sentence "This is why I am 100% against AI – no compromise." Not "AI is too dangerous for general hands". | | |
| ▲ | vouaobrasil 5 days ago | parent [-] | | My arguments are nuanced, but there's nothing saying a final position has to be. Nuanced arguments can lead to a true unilateral position. |
|
| |
| ▲ | vouaobrasil 6 days ago | parent | prev [-] | | I do want people to argue or point out flaws. But presenting a false dichotomy is not a well-reasoned reply. | | |
| ▲ | Karrot_Kream 6 days ago | parent | next [-] | | The rebuttal is very simple. I'll try to make it a bit less emotionally charged and more clear, even if your original opinion did not appear to me to go through the same process: "While some may use the tool irresponsibly, others will not, and therefore there's no need to restrict the tool. Society shouldn't handicap the majority to accommodate the minority." You can choose not to engage with this critique, but calling it a "false dichotomy" is poor form. If anything, it makes me feel like you're not willing to entertain disagreement. You state that you want to start a discussion by expressing your opinion, but I don't see a discussion here. I observe you expressing your opinion and dismissing criticism of that opinion as false. | |
| ▲ | pyman 6 days ago | parent | prev [-] | | > even if a minority can use them properly. Most students today are AI fluent. Most teachers aren't.
Students treat AI like Google Search, StackOverflow, GitHub, and every other dev tool. | | |
| ▲ | mmcclure 6 days ago | parent [-] | | Some students treat AI like those things. Others are effectively a meat proxy for AI. Both ends of the spectrum would call themselves "AI fluent." I don't think the existence of the latter should mean we restrict access to AI for everyone, but I also don't think it's helpful to pretend AI is just this generation's TI-83. |
|
|
| |
| ▲ | jononor 6 days ago | parent | prev | next [-] | | Why is "AI" (current LLM based systems) a danger on the level comparable to nukes?
Not saying that it is not, just would like to understand your reasoning. | |
| ▲ | ctoth 6 days ago | parent | prev [-] | | Who decides what technologies are too dangerous? You, apparently. AI isn't nukes - anyone can train a model at home. There's no centralized thing to restrict. So what's your actual ask? That nobody ever trains a model? That we collectively pretend transformers don't exist? You're dressing up bog-standard tech panic as social responsibility. Same reaction to every new technology: "This tool might be misused so nobody should have it." If you can't see the connection between that and Harrison Bergeron's "some people excel so we must handicap everyone," then you've missed Vonnegut's entire point. You're not protecting the weak - you're enforcing mediocrity and calling it virtue. | | |
| ▲ | ben_w 6 days ago | parent | next [-] | | > Who decides what technologies are too dangerous? You, apparently. I see takes like this from time to time about everything. They didn't say that. As with all similar cases, they're allowed to argue that something is dangerous, and you're allowed to say it isn't; the people who decide are all of us collectively, and when we're at our best we do so on the basis of the actual arguments. > AI isn't nukes - anyone can train a model at home. (1) They were using an extreme to illustrate the point. (2) Anyone can make a lot of things at home. I know two distinct ways to make a chemical weapon using only things I can find in a normal kitchen. That people can do a thing at home doesn't mean the thing can't be prohibited. | |
| ▲ | vouaobrasil 6 days ago | parent | prev | next [-] | | > Who decides what technologies are too dangerous? You, apparently. Again, a rather knee-jerk reply. I am opening up the discussion and putting out my opinion. I never said I should be God and arbiter, but I do think people in general should have a discussion about it, and general discussion starts with opinion. > AI isn't nukes - anyone can train a model at home. There's no centralized thing to restrict. So what's your actual ask? That nobody ever trains a model? That we collectively pretend transformers don't exist? It should be something to consider. We could stop it by spreading a social taboo about it, denigrating its use, etc. It's possible. Many non-techies already hate AI, and mob force is not out of the question. > You're dressing up bog-standard tech panic as social responsibility. Same reaction to every new technology: "This tool might be misused so nobody should have it." I don't have that reaction to every new technology personally. But I think we should ask the question of every new technology, especially ones that are already disrupting the labor market. > If you can't see the connection between that and Harrison Bergeron's "some people excel so we must handicap everyone," then you've missed Vonnegut's entire point. You're not protecting the weak - you're enforcing mediocrity and calling it virtue. What people call excellent and mediocre these days is often just the capacity to be economically over-ruthless, rather than to contribute any good to society. We already have a wealth of ways people can excel even without AI, so eradicating it would place no real limitation on intelligent individuals. So your argument really doesn't hold. Edit: my goal isn't to protect the weak.
I'd rather have everyone protected, including the very intelligent who still want to have a place to use their intelligence on their own and not be forced to use AI to keep up. | |
| ▲ | binary132 6 days ago | parent | prev [-] | | Hyphenatic phrasing detected. Deploying LLM snoopers. |
|
| |
| ▲ | atq2119 6 days ago | parent | prev | next [-] | | > Wait, you're literally advocating for handicapping everyone because some people can't handle the tools as well as others. No, they're arguing on the grounds that the tools are detrimental to the overwhelming majority in a way that also ends up being detrimental to the disciplined minority! I'm not sure I agree, but either way you aren't properly engaging their actual argument. | |
| ▲ | vouaobrasil 6 days ago | parent | prev | next [-] | | Second reply to your expanded comment: I think in some cases, some technologies are just versions of the prisoner's dilemma where no one is really better off with the technology. And one must decide on a case-by-case basis, similar to how the Amish decide what is best for their society case by case. Again, even your expanded reply reeks of false dichotomy. I never said to ban every possible technology, only ones that are sufficiently dangerous. | |
| ▲ | smohare 6 days ago | parent | prev [-] | | [dead] |
|
|
| ▲ | jononor 6 days ago | parent | prev | next [-] |
| I agree with your reasoning. But the conclusion seems to be throwing the baby out with the bathwater? The same line of thought can be used for any (new) tool, say a calculator, a computer or the internet. Shouldn't we try to find responsible ways of adopting LLMs, that empower the majority? |
| |
| ▲ | vouaobrasil 5 days ago | parent [-] | | > The same line of thought can be used for any (new) tool, say a calculator, a computer or the internet. Yes, the same line of thought can. But we must also take power into account. The severity of the negative effects of a technology is proportional to its power, and a calculator is relatively weak. > Shouldn't we try to find responsible ways of adopting LLMs, that empower the majority? Not if there is no responsible way to adopt them, because they are fundamentally against a happy existence by their very nature. Not all technology empowers, even when used completely fairly. Some technology approaches a pure arms-race scenario, especially when the bulk of its effect is economic efficiency without true life improvement, at least for the majority. Of course, one can point to some benefits of LLMs, but my thesis is that the benefit/cost ratio approaches zero and thus crosses the point of diminishing returns, giving us only a net negative in all possible worlds where the basic assumptions of human nature hold. |
|
|
| ▲ | thedevilslawyer 5 days ago | parent | prev [-] |
| Wait till you learn that a minority of prettier people end up having easier lives than the 90% majority. What will you recommend, I wonder? |
| |
| ▲ | vouaobrasil 5 days ago | parent [-] | | I won't recommend anything. Every situation is different, and you are rudely transposing my argument onto another without much real thought, which is a shame. For instance, one thing you are ignoring is that we are evolutionarily geared to handle situations of varying beauty. I could point out many more differences between the two situations, but I won't, because your lack of any intellectual effort doesn't even deserve a reply. | | |
|