How a dancer with ALS used brainwaves to perform live (electronicspecifier.com)
33 points by 1659447091 5 hours ago | 3 comments
usui 7 minutes ago | parent | next [-]

The featured video does not explain which signals produce which outcomes; they basically just say "we use machine learning" while outputting a dance. At 07:10 it looks like the person chooses between two binary options, "sad" or "relieved". Unfortunately, I doubt the person has as much real-time input to the live performance as is being claimed. Dentsu is also an advertising company in Japan, so this seems like more marketing than technology.

Dances by physical humans are choreographed beforehand, but a live performance still shows physical motion that the performer can interrupt at any time. I have a hard time believing this person's brainwaves are producing the 3D hologram, beyond instructing it which mood preset to use at a given time.

MajorTakeaway 4 hours ago | parent | prev [-]

Now is a really good time to contribute to https://openeeg.sourceforge.net/doc/ as far as EEG is concerned. There are a myriad of things that can be observed with EEG, and it would honestly be good to see the project grow over time.

EEBio 9 minutes ago | parent [-]

There is quite a lot of freely available EEG software for different paradigms (one such collection is MOABB, the Mother of All BCI Benchmarks), and there's a huge number of scientific articles.

The biggest bottleneck for a hobbyist is that most paradigms require somewhat expensive hardware, and even then most still don't work well with scalp recordings outside a lab environment, even on mid-cost devices.

There’s also the issue that classifiers usually have to be quite simple because datasets are small: recordings are time-consuming to make, and after you remove noisy epochs you have even less data left. Cross-session and cross-subject learning rarely works, since EEG depends on so many factors: the subject’s brain anatomy, the type and precise location of the electrodes, the amount of gel (or lack thereof) and how dried out it is, the subject’s mood and focus, and a huge number of environmental factors that influence that focus, among many others.
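To illustrate why "remove noisy epochs" shrinks the dataset: a common first-pass cleanup is rejecting any epoch whose peak-to-peak amplitude exceeds a threshold on any channel. A minimal sketch with NumPy, using simulated data (the threshold, shapes, and artifact values are illustrative assumptions, not from the article):

```python
import numpy as np

def reject_noisy_epochs(epochs, threshold_uv=100.0):
    """Drop epochs whose peak-to-peak amplitude exceeds a threshold.

    epochs: array of shape (n_epochs, n_channels, n_samples), in microvolts.
    Returns the surviving epochs and a boolean keep-mask.
    """
    ptp = epochs.max(axis=-1) - epochs.min(axis=-1)  # (n_epochs, n_channels)
    keep = (ptp < threshold_uv).all(axis=1)          # reject if ANY channel blows up
    return epochs[keep], keep

# Toy data: 10 epochs, 4 channels, 250 samples of ~10 uV background activity,
# with simulated motion artifacts injected into two epochs.
rng = np.random.default_rng(0)
epochs = rng.normal(0, 10, size=(10, 4, 250))
epochs[3, 2, :50] += 300   # artifact spike in epoch 3, channel 2
epochs[7, 0, :50] -= 300   # artifact spike in epoch 7, channel 0
clean, keep = reject_noisy_epochs(epochs)
print(clean.shape[0])      # 8 of 10 epochs survive
```

Real pipelines (e.g. MNE-Python's reject criteria) do essentially this per channel type, plus flat-channel and frequency-domain checks, and it is normal to lose a meaningful fraction of trials this way.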

The only paradigm I have seen work a bit more reliably than the others is Steady-State Visual Evoked Potentials (SSVEP), because you have extra information that doesn’t need to be learned from EEG: the frequency of the visual stimulus is roughly the same as the frequency it evokes in the subject’s occipital lobe.
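That known-frequency trick is what makes SSVEP decoding so simple: you only need to score the occipital signal at each candidate flicker frequency and pick the strongest. A minimal sketch on simulated data (sampling rate, flicker frequencies, and signal/noise levels are assumptions for the demo; real systems often use CCA instead of raw FFT power):

```python
import numpy as np

fs = 250.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 4.0, 1 / fs)    # one 4-second trial
candidates = [8.0, 10.0, 13.0]   # flicker frequencies of the on-screen targets

# Simulated occipital channel: the subject attends the 10 Hz target,
# so a 10 Hz component rides on top of broadband noise.
rng = np.random.default_rng(1)
signal = 2.0 * np.sin(2 * np.pi * 10.0 * t) + rng.normal(0, 1.0, t.size)

# Score each candidate by spectral power at its frequency and second harmonic.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def power_at(f):
    # Power at the FFT bin nearest each harmonic of f.
    return sum(spectrum[np.argmin(np.abs(freqs - h * f))] for h in (1, 2))

scores = {f: power_at(f) for f in candidates}
detected = max(scores, key=scores.get)
print(detected)   # 10.0
```

Because the target frequencies are known in advance, nothing has to be learned from training data, which is exactly why SSVEP tolerates small datasets and noisy scalp recordings better than most other paradigms.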