GuB-42 3 hours ago
A gun's primary purpose is to kill. The primary purpose of genAI (image generation goes beyond the scope of LLMs) is not to mislead; it is used successfully by millions of people for purposes that are in no way nefarious, including valuable contributions to fields like medicine. As with most important advances (plastics, nuclear power, diesel engines, synthetic fertilizers, computers, the internet), both good and bad things came out of it. It is like saying that plastics screw up everything they touch, citing the example of a plastic part replacing a more durable metal part, while failing to notice that plastics are everywhere in our lives, often with no suitable replacement material.
hansmayer 2 hours ago | parent
:) Wow, you are getting ahead of yourself, aren't you? LLMs are dangerous tools that any moron nowadays has access to. They can fabricate images of wolves roaming the streets, hallucinate fake arguments that sound really convincing, and even coach people into committing suicide, as you have probably heard from at least a dozen recent cases. I can't quite see the comparison you are making. It's not like you have a nuclear reactor, or whatever other dangerous technology you wanted to lump in with it, at your fingertips, do you? That is because those other dangerous technologies are carefully managed.

So follow where I am taking this; I'll explain it really simply. Guns are easily accessible to people in large parts of the US, so some people will use guns to kill other people. Sometimes it's an accident, like kids playing with daddy's gun and shooting their sibling. Some people argue that guns should be restricted, as that would reduce such accidents and incidents. But other people say "guns don't kill people - people kill people". Now, LLMs are likewise a dangerous technology, accessible to almost anyone, not just in the US but around the world, and easier to use: anyone with a basic command of language and the ability to clack on a keyboard can "use" them. To the point that some people not only harm others, like this Korean champ, but also themselves, like those people who were goaded into suicide.

My point was, and it should not have been that hard to see, that your argument is precisely of the "guns don't kill people" variety. If the chatbots that we have pompously resigned ourselves to calling "artificial intelligence" make mistakes 30-40% of the time, and we use them to verify information, then they are dangerous and should not be allowed to be used for such purposes, because they mislead the public.
Now, in your small, selfish world, maybe they are "everywhere", meaning you can offload your thinking to them, and maybe you even use them to write emails and summarise other people's emails so you don't completely drown in your boring office job. But that does not mean you should compare them to anything you listed above. Those small "benefits" do not make up for the overall shittiness of this so-called technology.