| ▲ | aerodexis a day ago |
| Interesting argument for AI ethics in general. It takes the form of "guns don't kill people - people kill people". |
|
| ▲ | glhaynes a day ago | parent | next [-] |
| An argument that I have some sympathy for, while still being moderately+ in favor of gun control (here in the USA, where I'm a citizen). It seems that gun control, though imperfect, has had a good bit of success in the regions that have implemented it, and the legitimate/non-harmful capabilities lost seem worth it to me in trade for the gains. (Reasonable people can disagree here!) Whereas it seems to me that if we accept the proposition that the vast majority of code in the future is going to be written by AI (and I do), these valuable projects that are taking hard-line stances against it are going to find themselves either having to retreat from that position or facing insurmountable difficulties in staying relevant while holding to their stance. |
| |
| ▲ | estebank a day ago | parent | next [-] | | > these valuable projects that are taking hard-line stances against it are going to find themselves either having to retreat from that position or facing insurmountable difficulties in staying relevant while holding to their stance. It is the conservative position: it will be easier to walk back the policy and start accepting AI-produced code some time down the road, when its benefits are clearer, than it will be to excise AI-produced code from years prior if there's a technical or social reason to do that. Even if the promise of AI is fulfilled and projects that don't use it are comparatively smaller, that doesn't mean there's no value in them, in the same way that people still make wooden furniture with traditional methods today even if a company can make the same widget cheaper in an almost fully automated way. | |
| ▲ | duskdozer 11 hours ago | parent | prev | next [-] | | The AI hype machine is pushing the "inevitability" and "left behind" sentiments to make it a self-fulfilling prophecy, like https://en.wikipedia.org/wiki/Pluralistic_ignorance, and they have the profit and power incentives to do so and drive mass adoption. It is far from certain that AI will be indispensable or that people will "fall behind" for not using it. Why would the AI-fans even care if others who decide not to use it fall behind? Wouldn't they get to point and laugh and enjoy the benefits of "keeping up"? Their fervor should be looked at with suspicion. | | |
| ▲ | glhaynes 4 hours ago | parent [-] | | If you're addressing this to me: you need to separate my description of how I perceive things from any effort/desire on my part to make that come to pass. I don't expect to stand to gain if AI continues to get better at coding — most likely just the opposite; this is the first time in my career that I've ever felt much anxiety about whether I'd be able to find work in my field in the future. There are many others like me who share this expectation, and, while we certainly may be wrong, it's not because of some sinister plan to make the prophecy come true. (There are certainly some who do have sinister/profit-seeking motives, of course!) |
| |
| ▲ | datsci_est_2015 a day ago | parent | prev [-] | | > It seems that gun control—though imperfect—in regions that have implemented it has had a good bit of success and the legitimate/non-harmful capabilities lost seem worth it to me in trade for the gains. This is even true despite the fact that there are bad actors only a few minutes drive away in many cases (Chicago->Indiana border, for example). |
|
|
| ▲ | jazzyjackson a day ago | parent | prev | next [-] |
| Unfortunately ChatGPT turned “text continuation” into “separate entity you can talk to” |
| |
| ▲ | aerodexis a day ago | parent [-] | | The desire to anthropomorphize LLMs is super interesting. People naturally anthropomorphize technology (even printers: "why are you not working!?"). It's a natural and useful heuristic. However, I can easily see how ChatGPT's makers would want to intensify this tendency in order to sell the technology's "agency" and the promise that it can solve all your problems. But since it's a heuristic, it papers over a lot of details that one would do well to understand. (As an aside - this reminds me of the trend of Object-Oriented Ontology, which specifically /tried/ to ascribe agency to large-scale phenomena that were difficult to understand discretely. I remember "global warming" being one of those things - and I can see now how this philosophy would have done more to obscure the dominion of experts wrt that topic.) |
|
|
| ▲ | dataflow a day ago | parent | prev [-] |
| I don't think any side on the issue of gun ownership has ever claimed that statement is false, so I'm not sure what your point is. |
| |
| ▲ | johnnyanmac a day ago | parent [-] | | The point is that this is a common pro-gun argument used to deflect from the fact that making guns harder to own does in fact reduce gun violence. Which is how much of the rest of the world works. But post Sandy Hook, it's clear which side prevailed in this argument. | | |
| ▲ | dataflow 19 hours ago | parent [-] | | Except it seems to be arguing in the exact opposite direction, and about the other side of the problem? Those in favor of gun control aren't trying to lower human responsibility; they're trying to place stricter limits on the guns than the status quo. Those against gun control are trying to loosen limits on the guns. Here this person is proposing making individual responsibility stricter than it is today. And they're not arguing for loosening limits on the tech either. Isn't that practically the opposite of your analogy? |
|
|