Lerc | 15 hours ago
To answer the question literally, I don't think it is possible to know. There are a lot of elements at play:

- There are valid concerns about the dangers of AI.
- There are hysterically overblown claims about the dangers of AI.
- There is public resentment growing in proportion to wealth inequality. The cost of AI development means it is being done by those who can afford it, and people are suspicious of their motives.
- There are people who feel that humans are in some way special, for religious or dogmatic reasons. Many here on Hacker News are prepared to claim that LLMs (and computers in general) will never be capable of consciousness. As the behaviour of AIs gets closer to what looks like consciousness to some, their objections to the possibility will only grow louder.
- How a UBI is implemented is as important as its existence.
- Many people believe that work is what gives life meaning. I think this belief is in decline as wealth inequality increases; increasing numbers of people suspect that the notion that work brings meaning is something society conditions into us to allow people to be exploited.

Ultimately, in the long term, something must happen. If AI renders a huge proportion of the population without work, that will cause seismic shifts. The distinction between UBI and work is the belief that you are owed a livelihood versus being owed a job. If people who are no longer employed due to AI end up starving, there _will_ be revolution or subjugation. If people who are rendered jobless are given the means to survive, they will want a vocation. If their higher needs are not met, they will agitate for improvements.

One of the strongest moderators of public disquiet has been jobs. A job keeps people busy while giving them a means of survival that has the potential to be lost; it occupies the worker's time while making them risk averse. Take away the jobs and you grant people time to organise while removing the risk of losing one's job.

Once people are in this state, there are not very many possible paths: either governments will facilitate progressive improvement in people's lives to give them meaning, or they will not. This is obviously true, since it must be one or the other (either A or NOT A). If things improve, we win. If they do not, you have an idle population with nothing to lose, and the only remaining paths are revolution or subjugation (or worse, genocide). Revolution is essentially a roll of the dice: there is a chance of improvement, but just as often it leads to subjugation or another revolution.

There surely are those who imagine becoming powerful by owning machines that can do the work of millions. I don't believe they understand the consequences should they reach that goal.

Finally, the thing that places this in the unknowable range is that we don't know if or when superintelligent AI will appear. Perhaps it will find a better way. It would be hard to imagine something superintelligent not coming up with a better solution for the crap we've landed ourselves in.