gyomu | 14 hours ago
The only thing that matters in the discussion around intelligence is purpose and intent. Intelligence is always applied through intent, and in service of a purpose. What is the broader context of the OP trying to prove a theorem here? There are multiple layers of purpose and intent involved (so he can derive the satisfaction of proving a result, so he can keep publishing and keep his job, so his university department can be competitive, etc.), but they all end up pointing at humans.

Computers aren’t going to be spinning in the background proving theorems just because. They will do so because humans intend for them to, in service of their own purposes. In any discussion about AI surpassing humans in skill or intelligence, the chief concern should be: in service of whom?

Tech leaders (i.e. the people controlling the computers on which the AIs run) like to say that this is for the benefit of all humanity, and that the rewards will be evenly distributed. But the rewards aren’t evenly distributed today, and the benefits are in the hands of a select few; why should that change at their hands? If AI is successful to the extent that pundits predict/desire, it will likely be accompanied by an uprising of human workers that will make past uprisings (you know, the ones that banned child labor and gave us paid holidays) look like child’s play in comparison.
laterium | 3 hours ago | parent
Which tech leader said the rewards will be distributed evenly? That sounds more like a rhetorical straw man for you to dunk on to make a point. It would be similar to saying, "Most HN commenters argue that all the benefits of AI will go to the billionaires, but actually they're all wrong, because some of it will in fact go to average people."