hollerith 2 hours ago
The whole world decided in the 1970s not to pursue the technology of germ-line genetic engineering of humans, and that decision has stood. People similar to you were saying in the 1950s and later that it was inevitable that nuclear weapons would be used in anger in massive attacks.

Although the people in charge are tentatively for AI "progress", if that ever changes, they can and will put a stop to large AI training runs and make it illegal for anyone they don't trust to teach, learn, or publish about fundamental algorithmic "improvements" to AI. Individuals and groups pursuing "improvements" will not be able to accept grant money or investment money or generate revenue from AI-based services. That won't stop all research on such improvements (because some AI researchers are very committed), but it will slow it to a rate much, much slower than the current one. The current fast rate depends on rapid communication between researchers who don't know each other well; if communicating about the research were illegal, a researcher could communicate only with those researchers he knows won't rat him out. That would essentially stop AI "progress" unless (unluckily for the human species), at the time of the ban, the committed researchers were only one small step away from some massive algorithmic improvement that could be operationalized using the compute resources at their disposal (i.e., much less than the resources they have now).

Will the power elite's attitude towards AI change? I don't know, but if they ever come to have an accurate understanding of the situation, they will recognize that AI "progress" is a potent danger to them personally, and they will shut it down. It's not a situation like the industrial revolution in England, in which textile workers were massively adversely affected (or believed they were) but the people running England were mostly insulated from any adverse effects.
In the current situation, the power elite is definitely not insulated from severe adverse consequences if an AI lab creates an AI that is much more competent than the most competent human institutions (e.g., the FBI) and the lab fails to keep the AI under control. And the lab will fail if it uses anything like the methods and bodies of knowledge AI labs have been using up to now. There are very bright people with funding doing their best to explain that to the elite.

Those of you who want AI "progress" to continue until the world is completely transformed need to hope that the power elite are collectively too stupid to recognize a potent short-term threat to their own survival (or that the transformation can be completed before the power elite wake up and react). And in my estimation, that is not inevitable.