| ▲ | palmotea a day ago |
| One way to achieve superhuman intelligence in AI is to make humans dumber. |
|
| ▲ | ryao 21 hours ago | parent | next [-] |
| This reminds me of the guy who said he wanted computers to be as reliable as TVs. Then smart TVs were made and TV quality dropped to satisfy his goal. |
| |
| ▲ | SoftTalker 21 hours ago | parent [-] | | TVs prior to the 1970s/solid-state era were not very reliable. They needed repair often enough that "TV repairman" was a viable occupation. I remember having to turn on the TV a half hour before my dad got home from work so it would be "warmed up" by the time he watched the evening news. We're still at that stage of AI. | | |
| ▲ | ryao 19 hours ago | parent [-] | | The guy started saying it in the 80s or 90s, after that issue had been fixed. He is the Minix guy, if I recall correctly. |
|
|
|
| ▲ | xrd a day ago | parent | prev | next [-] |
| If you came up with that on your own then I'm very impressed. That's very good. If you copied it, I'm still impressed and grateful you passed it on. |
| |
|
| ▲ | boringg a day ago | parent | prev | next [-] |
| The cultural revolution approach to AI. |
|
| ▲ | imoverclocked a day ago | parent | prev | next [-] |
| That’s only if our stated goal is to make superhuman AI and we use AI at every level to help drive that goal. Point received. |
|
| ▲ | 6510 a day ago | parent | prev | next [-] |
| I thought: A group working together poorly isn't smarter than the smartest person in that group. But it's worse, A group working together poorly isn't smarter than the fastest participant in the group. |
| |
| ▲ | trentlott a day ago | parent | next [-] | | That's a fascinatingly obvious idea and I'd like to see data that supports it. I assume there must be some. | |
| ▲ | jimmygrapes a day ago | parent | prev [-] | | Anybody who's ever tried to play bar trivia with a team should recognize this. | | |
| ▲ | tengbretson 10 hours ago | parent | next [-] | | Being timid in bar trivia is the same as being wrong. | |
| ▲ | rightbyte 15 hours ago | parent | prev [-] | | What do you mean? You can protest against bad but fast answers and check another box with the pen. |
|
|
|
| ▲ | yieldcrv a day ago | parent | prev [-] |
| Right, "superhuman" would be relative to humans, but our notion of intelligence as a whole is rooted in the human ego's sense of being intellectually superior. |
| |
| ▲ | caseyy a day ago | parent [-] | | That’s an interesting point. If we created super-intelligence but it wasn’t anthropomorphic, we might just not consider it super-intelligent as a sort of ego defence mechanism. Much good (and bad) sci-fi was written about this. In it, usually this leads to some massive conflict that forces humans to admit machines as equals or superiors. If we do develop super-intelligence or consciousness in machines, I wonder how that will all go in reality. | | |
| ▲ | yieldcrv 19 hours ago | parent [-] | | Some things I think about are how different the goals could be. For example, human and biological goals center on self-preservation and propagation. This in turn is about appropriating resources to facilitate that, and the systems for doing so become wealth accumulation. Species that don't do this don't continue existing. A different branch of the evolution of intelligence may take a different approach, one that allows its effects to persist anyway. | | |
| ▲ | caseyy 15 hours ago | parent [-] | | This reminds me of the "universal building blocks of life" or the "standard model of biochemistry" I learned at school in the 90s. It held that all life requires water, carbon-based molecules, sunlight, and CHNOPS (carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur). Since then, it's become clear that much life in the deep sea is anaerobic, doesn't use phosphorus, and may thrive without sunlight. Sometimes anthropocentrism blinds us; it's quite an interesting phenomenon. |
|
|
|