adamzwasserman 3 days ago

1. 'Probably just an illusion' is doing heavy lifting here. Either provide evidence or admit this is speculation. You can't use an unproven claim about consciousness to dismiss concerns about conflating it with text generation.

2. Yes, there are documented cases of people with massive cranial cavities living normal lives. https://x.com/i/status/1728796851456156136. The point isn't that they have 'just enough' brain; it's that massive structural variation doesn't preclude function, which undermines simplistic 'right atomic arrangement = consciousness' claims.

3. You're equivocating. Humans navigate maps built by other humans through language. We also directly interact with physical reality and create new maps from that interaction. LLMs only have access to the maps - they can't taste coffee, stub their toe, or run an experiment. That's the difference.

KoolKat23 3 days ago | parent [-]

1. What's your definition of consciousness? Let's start there.

2. Absolutely, it's a spectrum. Insects have function too.

3. "Humans navigate maps built by other humans through language." You said it yourself. LLMs use that exact same data, so why wouldn't they know what's in it? Humans are just their bodies in the physical world.

adamzwasserman 3 days ago | parent [-]

1. I don't need to define consciousness to point out that you're using an unproven claim ('consciousness is probably an illusion') as the foundation of your argument. That's circular reasoning.

2. 'It's a spectrum' doesn't address the point. You claimed LLMs approximate brain function because they have similar architecture. Massive structural variation in biological brains producing similar function undermines that claim.

3. You're still missing it. Humans use language to describe discoveries made through physical interaction. LLMs can only recombine those descriptions. They can't discover that a description is wrong by stubbing their toe or running an experiment. Language is downstream of physical discovery, not a substitute for it.

KoolKat23 2 days ago | parent [-]

1. You do need one. You're probably working from your own definition and calling me wrong merely for not holding it.

2. That directly addresses your point. In the abstract, it shows brains are basically no different from multimodal models: train on different data types and it still works, perhaps even better. LLMs are already trained on images, video, sound, and nowadays even robot sensor feedback, with no fundamental changes to the architecture — see Gemini 2.5.

3. That's merely an additional input channel: give the model sensors, or have a human relay that data. Your toe is just relaying its sensor information to your brain.