| ▲ | Richard Dawkins: Claude (for one) has 'passed' the Turing Test (telegraph.co.uk) |
| 3 points by HocusLocus 7 hours ago | 11 comments |
| |
|
| ▲ | HocusLocus 7 hours ago | parent | next [-] |
We have grown used to the old rambling responses of Eliza, that wonder-tool of a bygone era. We are too easily impressed by semantics and subtlety of language. The one thing Dawkins might not be aware of, in his turn-based exchange, is how many actual watts are being expended to polish Claude's presentation. There are whole datacenters' worth of iron hidden behind this exchange. Is this level of 'intelligence' sustainable in the long run when pitted against the 12-24 watt human brain? It's a hell of a lot better thing to do than cryptocurrency, though. Proof of work for max greed was not sustainable either. |
| |
| ▲ | tim333 4 hours ago | parent | next [-] | | Googling, I got a power use estimate of ~0.84 Wh per standard query (Sonnet). Assuming Dawkins made 100 queries over the weekend, we have AI power use of 84 Wh, versus Dawkins's brain at, say, 20 W x 48 hrs = 960 Wh. Of course, when you include the rest of the cost of powering a Dawkins, including food and heating his house, the human energy use goes higher. | |
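A minimal sketch of that back-of-the-envelope comparison (the 0.84 Wh/query and 20 W figures are the rough estimates quoted above, not measured values):

```python
# Rough weekend energy comparison: LLM queries vs. a human brain.
WH_PER_QUERY = 0.84   # estimated Wh per standard Sonnet query (rough figure)
QUERIES = 100         # assumed number of queries over the weekend
BRAIN_WATTS = 20      # approximate human brain power draw, W
HOURS = 48            # one weekend

ai_wh = WH_PER_QUERY * QUERIES   # energy for all queries, in Wh
brain_wh = BRAIN_WATTS * HOURS   # brain energy over the same period, in Wh

print(f"AI: {ai_wh:.1f} Wh, brain: {brain_wh} Wh, "
      f"ratio: {brain_wh / ai_wh:.1f}x")
# prints: AI: 84.0 Wh, brain: 960 Wh, ratio: 11.4x
```

On these assumed numbers the brain uses roughly an order of magnitude more energy over the weekend than the queries do, which is the comparison being made above.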
| ▲ | repelsteeltje 7 hours ago | parent | prev [-] | | Watts and sustainability were never part of the Turing test, of course. It was conceived as more of a philosophical argument than a practical test. For instance, consider Searle's Chinese Room counterargument [1]: millions of humans emulating a computer program isn't the most efficient use of resources either, of course. [1] https://en.wikipedia.org/wiki/Chinese_room
|
|
| ▲ | Kadam257 7 hours ago | parent | prev [-] |
| The problem isn't that Claude's responses are unimpressive. They are impressive. The problem is that impressive outputs don't tell you anything about underlying mechanism, and mechanism is what consciousness is about. A system optimized via RLHF to produce responses that make smart humans say "wow" will produce responses that make smart humans say "wow". That's what it was trained to do. |
| |
| ▲ | tim333 4 hours ago | parent | next [-] | | >mechanism is what consciousness is about I don't see it that way - it's more about awareness of sights, sounds, feelings, thoughts, and the like. I've got little idea what mechanism my brain uses and don't think it matters much. It's like Descartes didn't say "I think, therefore I am, but only if the thinking uses biological neurons." | |
| ▲ | DFHippie 7 hours ago | parent | prev [-] | | The problem the Turing test was meant to solve is that we had, and still have, no means of recognizing a conscious mechanism. We lack a theory of consciousness that can be used to make a better test than "It could fool me", so the Turing test accepts that as the test. In other words, the mechanism may be what consciousness is about, but we can't say anything useful about this as relates to consciousness. | | |
| ▲ | repelsteeltje 7 hours ago | parent | next [-] | | > We lack a theory of consciousness [...] Nitpick: of course we don't really lack a theory of consciousness. It's just that Alan Turing chose to ignore all the existing prior discourse in the humanities and philosophy. | | |
| ▲ | tim333 3 hours ago | parent | next [-] | | Nitpick nitpick: If you look at Turing's paper https://courses.cs.umbc.edu/471/papers/turing.pdf and ^F for consciousness you'll find that's not entirely true. | |
| ▲ | pmontra 6 hours ago | parent | prev [-] | | There are many theories of consciousness, but nobody knows if any of them is correct, and nobody can use one of them to build a conscious machine. Compare that to theories of physics. None of them is 100% correct, but they give us the tools we are using to write these messages. | | |
| ▲ | tim333 3 hours ago | parent [-] | | I've got a theory of consciousness, not a very complicated one, that could be used in a machine. Basically, it evolved as a practical way for animals to make decisions, like whether to run from a predator. To do that, info from the billions of neurons handling senses, memories, and the like filters down to something like a situation summary, which is basically what the animal is conscious of, and which then feeds into the decision-making, thinking, and remembering neurons. It would be quite interesting, if/when someone tries that, to see how close it is or isn't to nature. |
|
| |
| ▲ | foldr 7 hours ago | parent | prev [-] | | The Turing test isn't a test for whether a machine is conscious but whether it can think. |
|
|