| |
| ▲ | cortesoft 12 hours ago | parent | next [-] | | Even supposing we could somehow get the political will to do this, how would you write such a law? What counts as “AI frontier research”? How would you write a regulation around that that isn’t trivial to bypass without banning general computing itself? | | |
| ▲ | SpicyLemonZest 12 hours ago | parent [-] | | As I said in a sibling comment, we're fortunate that training modern AIs requires large quantities of specialized compute. We just have to restrict GPU sales and outlaw GPU farms. I don't deny that it would be a seismic, controversial change, but I don't think it's terribly hard to implement if we can reach a consensus that we want to implement it. |
| |
| ▲ | neonstatic 13 hours ago | parent | prev [-] | | This is never going to happen. If something can be done, it will be done. | | |
| ▲ | happytoexplain 12 hours ago | parent | next [-] | | >If something can be done, it will be done. What does this mean? It's obviously false on its face. | | |
| ▲ | neonstatic 4 hours ago | parent [-] | | It means that if something is physically possible, someone will do it, regardless of legal, moral, or social barriers. False on its face? Not that long ago, global public opinion was mortified at the news that newborn twins in China had been genetically modified. I am old enough to remember the outrage in the late 90s as the world watched the first cloned sheep grow up, get sick, and die. It was possible to do, so someone had done it. The point is that with law, morality, and social pressure we can moderate the frequency and scale of some phenomena, but we cannot stop them. I think this idea is what prevents some bans: "If the Chinese can do it, and we stop ourselves from doing it, they will gain an advantage and we will lose." Substitute "the Chinese" with whoever the opponent is at any given point in time and you have a rather plausible explanation for why things were the way they were. |
| |
| ▲ | SpicyLemonZest 12 hours ago | parent | prev [-] | | There were historical worries about whether a ban would be feasible, but frontier AI research as we understand it today requires large amounts of specialized compute. Even if we couldn't or wouldn't destroy the chips, we could imprison anyone who tries to start a large training run, the same way we imprison anyone who tries to buy enriched uranium. | | |
| ▲ | neonstatic an hour ago | parent [-] | | Yes, that is true, but it's not my point. I am not saying it'd be impossible to find the people doing it. My point is that there will always be a group of people who'd be willing to do potentially dangerous things as long as those things are possible and are believed to provide some sort of advantage. For that reason, those people will either be in decision-making positions themselves or have a good enough offer for the decision makers. Speaking of uranium - I don't think AI is anything like it (although the AI industry's propaganda really wants us to believe that), but even there we have examples of countries that pursued nuclear weapons both successfully and unsuccessfully, as well as countries that could have them but chose not to. So the ban itself isn't necessarily the main point here. |