apolloartemis 17 hours ago

If this were true wouldn’t fMRI machines cause either loss of consciousness or extreme hallucinations?

ggm 17 hours ago | parent | next [-]

I believe in dead salmon, they do.

exe34 16 hours ago | parent | next [-]

Thank you for the giggle; I misread this as a statement of faith and a non sequitur.

moffkalast 14 hours ago | parent | next [-]

I had an fMRI and also believe in dead salmon now; it's a common side effect, but it's worth it for the diagnostic data they get.

oniony 13 hours ago | parent | prev [-]

Yeah, really needed the comma on the left side of the parenthesis.

lgas 17 hours ago | parent | prev [-]

They cause hallucinations in dead salmon? I find that hard to believe.

ggm 17 hours ago | parent | next [-]

https://www.scientificamerican.com/blog/scicurious-brain/ign...

lgas 16 hours ago | parent [-]

I'm not 100% sure I'd call that a hallucination, but it's close enough and interesting enough that I'm happy to stand corrected.

bitwize 16 hours ago | parent [-]

When improper use of a statistical model generates bogus inferences in generative AI, we call the result a "hallucination"...
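A minimal sketch of the salmon-flavored version of that failure (hypothetical voxel counts, just numpy/scipy, not the actual study's analysis): run enough uncorrected tests on pure noise and a few "voxels" will always light up.

    # Uncorrected multiple comparisons: pure noise still yields "significant" voxels.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_voxels, n_scans = 10_000, 20                 # made-up scan dimensions
    noise = rng.normal(size=(n_voxels, n_scans))   # no real signal anywhere

    # One-sample t-test per voxel, with no correction for multiple comparisons.
    t, p = stats.ttest_1samp(noise, 0.0, axis=1)
    print(f"{(p < 0.05).sum()} of {n_voxels} noise voxels look 'active' at p < 0.05")
    # Expect roughly 5% (~500) by chance alone -- the dead-salmon effect.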

baq 13 hours ago | parent [-]

It should have been called confabulation; hallucination is not the correct analog. Tech bros simply used the first word they thought of, and it unfortunately stuck.

K0balt 11 hours ago | parent [-]

“Undesirable output” might be more accurate, since there is absolutely no difference in the process that produces a useful output vs. a “hallucination” other than the utility of the resulting data.

I had a partially formed insight along these lines: LLMs exist in a latent space of information that has very little external grounding, a sort of dreamspace. I wonder if embodying them in robots will anchor them to some kind of ground-truth source?
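As a toy illustration of the "no difference in process" point (made-up vocabulary and scores, no real model): the sampling step sees probabilities, not truth, so a useful answer and a "hallucination" come out of exactly the same code path.

    # Toy next-token step: nothing here distinguishes a true continuation from a false one.
    import numpy as np

    rng = np.random.default_rng(1)
    vocab = ["Paris", "Lyon", "Atlantis"]           # hypothetical continuations of "The capital of France is"
    logits = np.array([3.0, 1.0, 2.5])              # made-up model scores

    probs = np.exp(logits) / np.exp(logits).sum()   # softmax
    print(rng.choice(vocab, p=probs))               # same sampling step whether the pick is right or wrong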

furyofantares 17 hours ago | parent | prev [-]

Loss of consciousness seems equally unlikely.

lgas 17 hours ago | parent [-]

True, though an easier mistake to make, I imagine.

rdgthree 9 hours ago | parent | prev [-]

Not necessarily - I think it works like Daniel Kahneman's System 1 and System 2. Your conscious system is System 2 - when it's not working correctly, you just fall back to System 1.

Independently, since the whole idea relies on resonance, it may be that an fMRI doesn't actually interfere with the "stochastic resonance" mechanism quite the way TMS (transcranial magnetic stimulation) seems to.

If you model the brain this way, dementia looks like a clear breakdown of System 2, which is an interesting thought experiment even if the mechanics aren't perfect: https://1393.xyz/writing/alzheimers-is-the-symptom-not-the-p...

neuah 7 hours ago | parent [-]

You know, the mechanism of TMS is not mysterious. It requires no magnetoreception or "stochastic resonance". It simply induces electrical currents to modulate neural activity. Its effects are consistent with the known laws of physics, known properties of neurons, and decades of neuroscience research.
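For a sense of scale, a back-of-the-envelope sketch (ballpark pulse figures, not any particular coil): Faraday's law alone takes you from the coil's rapidly switching field to a tissue electric field in the range known to trigger action potentials.

    # Rough induced E-field under a TMS coil: E ~ (r/2) * dB/dt for a circular path.
    dB_dt = 1.5 / 100e-6     # ~1.5 T reached in ~100 microseconds (ballpark pulse)
    r = 0.02                 # ~2 cm loop of cortex under the coil
    E = (r / 2) * dB_dt      # induced electric field in V/m

    print(f"~{E:.0f} V/m")   # on the order of 100 V/m -- enough to depolarize neurons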

rdgthree 5 hours ago | parent [-]

Of course!

But also:

> Although the biology of why TMS works isn't completely understood, the stimulation appears to affect how the brain is working.

https://www.mayoclinic.org/tests-procedures/transcranial-mag...

I think it's reasonable to assume there's room to sharpen our understanding of it quite a bit.

neuah 5 hours ago | parent [-]

I think you're conflating one question with another. The "why" in question is why altering neural activity in that way produces clinical effects, not why TMS alters neural activity in the first place.

rdgthree 3 hours ago | parent [-]

I appreciate that you feel this way, but exactly which neural circuits TMS engages, and through what mechanisms, is simply not yet fully understood.

From 2024:

> Transcranial magnetic stimulation (TMS) is a non-invasive, FDA-cleared treatment for neuropsychiatric disorders with broad potential for new applications, but the neural circuits that are engaged during TMS are still poorly understood.

[0] https://journals.plos.org/ploscompbiol/article?id=10.1371%2F...