| ▲ | captainbland 14 hours ago |
| Today: this. Tomorrow: trillions invested in new technology for simulating human torture accurately at the molecular level, requiring twice the planet's total consumer electricity use. Advocates claim "all use is valid". |
|
| ▲ | FrustratedMonky 14 hours ago | parent | next [-] |
| Is this a reference to "Torment Nexus"? |
| ▲ | captainbland 14 hours ago | parent | next [-] |
| While I'm sure that subconsciously influenced what I wrote, it was more a general jab at the sentiment that negative externalities can always be justified so long as a technology has users who prefer to use it. |
| ▲ | z2 14 hours ago | parent | next [-] |
| Ah, I thought you were just referring to the decades-long use of the most massive supercomputers to simulate nuclear arsenal maintenance and explosions (maybe literally at the molecular/atomic/sub-atomic level). |
| ▲ | FrustratedMonky 14 hours ago | parent | prev [-] |
| Yeah. Did you see the article about the brain organoid (actual brain neurons on a chip) that they made play DOOM? What are those neurons experiencing? |
| ▲ | AlexeyBrin 14 hours ago | parent | next [-] |
| > What are those neurons experiencing? A reasonable explanation is that a few neurons probably don't have consciousness, so they can't really experience anything. |
| ▲ | captainbland 13 hours ago | parent | next [-] |
| It's an interesting question what that level is likely to be, though. The chip in question apparently has around 800,000 neurons (https://www.forbes.com/sites/johnkoetsier/2025/06/04/hardwar...), so not a trivial quantity; that makes it significantly more complex than most insects' forebrains but still less complex than any mammal. I think once they're able to put 15 million such neurons on a single device, that puts them in the range of more relatable animals like mice and Syrian hamsters, and I expect that relatability is also what will drive most opinions about consciousness. |
| ▲ | p_j_w 11 hours ago | parent | prev [-] |
| > a few neurons probably don't have consciousness Given our piss-poor understanding of consciousness, I have to ask: on what grounds do you make this claim? |
| ▲ | layer8 14 hours ago | parent | prev | next [-] |
| > What are those neurons experiencing? Doom. (Obviously.) |
| ▲ | captainbland 13 hours ago | parent | prev | next [-] |
| I hadn't until you mentioned it, but now I have! I expect one day they'll generate a language model on one, and then we can just ask it, assuming they don't give it a special rule about never describing its experiences. |
| ▲ | wizzwizz4 12 hours ago | parent [-] |
| The language model's output would be informed by its weights, not by its experiences as wetware. Substrate does not make a computation special: that's the whole point of the Chinese Room thought experiment. What mechanism are you imagining that would allow an LLM built of neurons to describe what it's like to be made of neurons, when an LLM built of GPUs cannot describe what it's like to be organised sand? The LLM in the GPU cluster is evaluated by performing the same calculations that could be performed by intricate clockwork, or very, very slowly by generations of monks using pencil and paper. Just as the monks have thoughts and feelings, it is conceivable (though perhaps impossible) that the brain tissue implementing an LLM has conscious experience; but if so, that experience would not be reflected in the LLM's output. |
| ▲ | captainbland 11 hours ago | parent [-] |
| When I say language model, I mean one of whatever form would make it native to the wetware medium. This brings with it a few key distinctions. The one I think is most relevant is that human neurons, including in chips like the CL1, have the capability to dynamically re-organise topologically (i.e. neuroplasticity), which computed LLMs, with their fixed structure of weights, can't do. We can't assume that a computer-based neural network will have the same emergent behaviours as a biological one, or vice versa. The interesting point for me is the neuroplasticity, because it implies that the networks specialised for language could start forming synapses connecting them to the parts more specialised to play Doom, giving rise to the possibility that this could be used for introspection. |
| ▲ | wizzwizz4 5 hours ago | parent [-] |
| It is meaningful to consider this case. My general objection does not apply here. |
|
|
| ▲ | inciampati 14 hours ago | parent | prev | next [-] |
| I prefer to think of it as a reference in the Torment Nexus. |
| ▲ | heavyset_go 14 hours ago | parent | prev [-] |
| It would also work as a jab at Roko's basilisk. |
|
|
| ▲ | oofbey 13 hours ago | parent | prev | next [-] |
| Actually, “last year: this”. It was published in early 2025. |
|
| ▲ | inetknght 14 hours ago | parent | prev [-] |
| Arguments against call it immoral, while counter-arguments call it "legitimate". Meanwhile, a three-time billionaire claims he's solved the problem using soylent green, while fifty thousand people react in awe at the live presentation. |