blibble 10 hours ago

> You would expect that voices that have so much weight would be able to evaluate a new and clearly very promising technology with better balance

Have you considered the possibility that it is your position that's incorrect?
antirez 10 hours ago | parent
No, because it's not a matter of who is correct in a vacuum. It's a matter of facts, and whoever holds a position grounded in facts is correct (even if that position differs from another grounded position). Modern AI is already an extremely powerful tool. Modern AI has even given us hints that we will be able to do super-human science in the future, with things like AlphaFold already happening and a lot more potentially to come.

Then we can be preoccupied with jobs. But if workers are replaced, that is just a political issue: things will still get done, and humanity is sustainable; it's just a matter of avoiding the turbo-capitalist trap. But then, why isn't the US already adopting universal healthcare? There are so many better battles that are not fought with the same energy. Another sensible worry is going extinct, because AI is potentially very dangerous: this is what Hinton and other experts are also saying, for instance.

But this idea that AI is an abuse of society, useless, without potential revolutionary fruits within it, is not supported by facts. AI may advance medicine so much that a lot of people suffer less: to deny this path because of some ideological hatred of a technology is so closed-minded, isn't it? And what about all the people on earth who do terrible jobs? AI also has the potential to change this shitty economic system.