
Don’t Think. Feel.

-Bruce Lee

NeuroMusic

Clip from the 2024 NeuroMusic proof-of-concept concert at Paine Hall, Harvard University

Adding low-frequency energy to the NeuroMusic system through body movement

What is NeuroMusic?

NeuroMusic is a creative method that uses present-day neurotechnology and generative electronic composition techniques to explore a much older idea: that the living body/mind/spirit is animated by rhythms in an elusive form of energy.

This form of energy is known to many cultures by names including Qi, Prana and (arguably) Ache and Mana. It has long been considered incompatible with Western/Colonial cosmology, but that is changing as secularization and advances in computing enable a shift towards “systems” in some corners of the academy, and of the culture at large.

While acupuncturists, neuroscientists and other specialists engage with various aspects of life-energy in their professional work, the specialized training and equipment used in these fields pose a major barrier to entry vis-à-vis the general public.

NeuroMusic aims to change this dynamic by creating spaces in which the rhythm and harmony of life-energy are made accessible to everyday listeners.

How does NeuroMusic Work?

The electromagnetic signals (biopotentials) captured by EEG (electroencephalogram) and other neuro-electric devices are similar in nature to the core elements of electronic music. Their raw form, often referred to as time-series data, is a measurement of positive and negative charge distributed over time. This is the same way of encoding information used in audio, which consists of fluctuations in voltage generated by a microphone or transducer.
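To make the parallel concrete, here is a minimal sketch in Python (with hypothetical sample values, not real recordings) showing that an EEG time series and an audio buffer share the same underlying representation: a sequence of signed amplitudes sampled at a fixed rate. Only the sample rate and the physical units differ.

```python
# Both audio and EEG are sequences of signed amplitudes over time.
# The EEG values below are hypothetical, for illustration only.

eeg_uv = [12.0, -4.5, 30.2, -18.7, 5.1, 22.9, -9.3, 1.4]  # microvolts
EEG_RATE = 250        # samples per second (typical for consumer EEG)
AUDIO_RATE = 44100    # samples per second (CD-quality audio)

def normalize(samples):
    """Rescale any time series into the [-1.0, 1.0] range used by audio."""
    peak = max(abs(s) for s in samples)
    return [s / peak for s in samples]

audio_style = normalize(eeg_uv)
print(audio_style)  # the same shape of data an audio buffer would hold
```

The conversion is purely a change of scale and label: nothing about the data itself distinguishes a biopotential stream from an audio stream.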

Neurotechnology devices like the EEG cap in this demonstration measure biopotentials: fluctuations in voltage generated by the nervous system (including the brain) and “captured” on the surface of the skin using electrodes (1). These fluctuations create electromagnetic signals that can be amplified and configured to be compatible with various types of electronic musical instruments. This demonstration uses the Max MSP visual programming language, as well as the Moog Mother-32 analog synthesizer module. Pre-processing is performed using OpenBCI’s open-source GUI software (2).

The waveforms generated by the nerves and brain and measured by neurotechnology have most of their energy concentrated in the sub-audio range, meaning that if they were used to drive a speaker directly, they would move the element too slowly to produce audible sound.

As such, the raw signals have limited utility for the creation of music. They can be used as modulator waves, with their voltage levels controlling various parameters of an electronic instrument. However, raw time-series data is so complex that direct modulation often produces results that are noisy, confusing and difficult for listeners to decipher. Fortunately, the neuroscience community has established standard ways of analyzing biopotential data.

One popular and useful metric used for EEG signals is Band Power. This metric takes the incoming time-series data and runs it through a set of band-pass filters tuned to specific frequencies. Their outputs are then measured, providing a dynamic picture of the relative peak levels at each frequency at any given time. This operation is related to the Fast Fourier Transform (FFT) and to vocoding, but is much less energy-intensive, since usually only five frequency bands are measured at a time. The most commonly measured bands are Delta (4 Hz and below), Theta (4 to 8 Hz), Alpha (8 to 13 Hz), Beta (13 to 32 Hz) and Gamma (32 Hz and above) (3). The neuroscience community has studied the cognitive, affective, physical and chemical correlates of these frequency bands, and the relationships between them, very extensively. Some studies, such as those linking elevated Alpha levels to high cognitive performance (4) or band-specific desynchronization in mental illness (5, 6), have captured the public imagination.
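The band-power operation described above can be sketched in a few lines. The actual pipeline uses band-pass filters (via the OpenBCI GUI); this sketch is an assumption-laden stand-in that sums FFT energy in each of the five standard bands instead, which yields a comparable per-band reading on a synthetic test signal.

```python
import numpy as np

BANDS = {            # the five standard EEG bands, in Hz
    "delta": (0.5, 4),
    "theta": (4, 8),
    "alpha": (8, 13),
    "beta": (13, 32),
    "gamma": (32, 100),
}

def band_power(samples, rate):
    """Return the summed spectral power in each EEG band for one window."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# Synthetic one-second window: a strong 10 Hz (Alpha) oscillation plus noise.
rate = 250
t = np.arange(rate) / rate
rng = np.random.default_rng(0)
window = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(rate)

powers = band_power(window, rate)
print(max(powers, key=powers.get))  # the Alpha band dominates
```

Running a new window through this function at a regular interval produces the dynamic picture of per-band levels described above.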

Here’s a video featuring a bare-bones musical application of Band Power data. It’s stripped down so that the incoming Band Power readings modulate the timbral parameters on a repeating 12-note synth sequence, including filter cutoff, envelope attack/decay, octave and overdrive. It’s accompanied by a verbal explanation of how the system functions, and a very quick overview of why Band Power is of interest from a musical perspective.
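A modulation mapping of the kind used in the video can be sketched as follows. The parameter names and ranges here are hypothetical illustrations, not values taken from the actual Max MSP patch; the point is simply that one normalized band-power reading can drive several timbral parameters at once.

```python
# Map a normalized band-power reading (0.0-1.0) onto synth parameter ranges.
# These names and ranges are illustrative assumptions, not the real patch.
PARAM_RANGES = {
    "filter_cutoff_hz": (200.0, 8000.0),
    "env_attack_ms": (1.0, 500.0),
    "env_decay_ms": (10.0, 1000.0),
    "overdrive": (0.0, 1.0),
}

def scale(value, lo, hi):
    """Linearly map a 0-1 control value into the range [lo, hi]."""
    return lo + value * (hi - lo)

def modulate(band_power_norm):
    """Turn one normalized band-power reading into a full set of parameters."""
    return {name: scale(band_power_norm, lo, hi)
            for name, (lo, hi) in PARAM_RANGES.items()}

print(modulate(0.5))  # a mid-level reading yields mid-range parameters
```

Feeding a stream of successive band-power readings through `modulate` is what makes the repeating sequence’s timbre shift with the performer’s brain activity.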

Partly due to the quantitative, analytical nature of high-profile neuroscience studies, public perception tends to characterize the use of biopotential data as part of a push to quantify the inner workings of the human mind. There are definitely some people who are trying to do just that (more on this below). Unfortunately for them, even with the help of future AI models vastly more powerful (not to say socially and environmentally destructive) than the ones available today, there is no reason to believe that brain data, or any other kind of electronically mediated information, will ever have the ability, or be animated by the motivation, to “discover” any of the things that matter most to us, subjectively, as humans. This is because even in cases where machines exceed the human capacity for data capture and analysis, the biological architecture of human experience is specific. We do not hear, see, or otherwise perceive things exactly “as they are” from the perspective of a machine. Rather, we are constantly synthesizing, making inferences and mistakes, discarding information and manufacturing narratives about our sensory inputs even before those inputs reach the parts of our nervous system associated with thought. If we were to build a machine that engaged with and understood the world in exactly the same way we do, we would have wasted our energy: all we would have created is another human, and there are already a lot of us running around.

Why NeuroMusic?

If the goal of neuroscience and neurotechnology is the quantification of the human experience, then there is no point in making NeuroMusic. However, if the goal is to use the tools created by neuroscience and neurotechnology to facilitate communication and understanding between people, then NeuroMusic is an accessible and productive way forward. One way to illustrate this is through the concept of neural resonance.

Neural resonance refers to the synchronization of neural oscillations (such as the biopotentials measured by EEG) between multiple people. Anyone who has played in a band, danced, tapped their foot or nodded along with music, done karaoke or actively engaged with music in a social setting has experienced neural resonance, and this lens on music has been explored from a neuroscientific perspective (7). Neural resonance is also associated with quieter, more internal aspects of human connection. For example, the neural resonance metric known as the Mu rhythm (more specifically, the measurement of sympathetic Mu-rhythm suppression for motor resonance) has been associated with trait empathy. This suggests that the feeling of empathy toward someone is accompanied by increased neural resonance with that person, like a sympathetic string vibrating in harmony with a nearby string tuned to the same note (8). Professor Jennifer Gutsell, who runs the Social Interaction and Motivation lab at Brandeis University, has found in multiple studies that group biases such as racial prejudice are associated with a decrease in Mu-rhythm resonance (9, 10, 11), meaning that individuals indoctrinated with racist beliefs have a reduced ability to synchronize their brain activity with people against whom they are prejudiced. This effect can be attenuated through the conscious cultivation of an empathetic mindset (12), through the act of mimicking members of a stigmatized out-group (13) and, implicitly, by instilling prejudiced individuals with a consciousness of out-group members’ humanity (14).

NeuroMusic opens up a new and exciting way to explore neural resonance. Much of its specific potential is related to the phenomenon of neural entrainment. Neural entrainment is closely related to neural resonance, and describes a phenomenon by which the rhythms in the brain become synchronized with sensory stimuli. Entrainment can occur in response to auditory or visual rhythmic stimuli (15) and even in response to imagined music (16). Entrainment to auditory stimuli is further enhanced by the Frequency Following Response (FFR), which relays analog voltage transductions from the cochlea directly along the signal pathway to the auditory cortex (17). The tonotopic structure of the cochlea (meaning that different portions of the organ vibrate in sympathy with different frequencies within the audible spectrum) means that neural entrainment to auditory stimuli such as music is near-instantaneous (18) and has a much higher resolution than that produced by technological means of sound capture and analysis (19).

An early version of NeuroMusic, exploring neural responses to online hate speech. WARNING: this piece contains direct references to Anti-Asian hate and violence.

The demonstration at left shows how NeuroMusic employs the principle of data sonification (20) to create sounds that feel “musical” while also preserving and relaying the energetic properties of the EEG data from which they are derived. In this case, the synthesizer plays a melody that follows the contours of the incoming Alpha band power, with its rhythm constrained to speeds that fall within the Alpha frequency band itself. This means that the shape and rhythm of the brain-wave inputs from EEG are translated into sound waves which, despite being organized into a musical melodic line, have the ability to transmit the rhythms of the original brain waves to anyone who happens to be listening. Due to the phenomenon of auditory neural entrainment, these sound waves have the potential to carry the energy from the EEG subject’s brain directly into the listener’s brain by way of the auditory cortex. This may represent a new path towards the cultivation of neural resonance through music, with exciting implications for the aesthetically mediated cultivation of empathy and the attenuation or eradication of prejudice.
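The melodic mapping described here can be sketched as follows. The scale and the exact note-rate bounds are hypothetical illustrations rather than the actual patch logic; the point is that pitch follows the Alpha-power contour while the note rate is constrained to the 8-13 Hz Alpha band itself.

```python
# Sonify an Alpha band-power contour as a melody.
# The pentatonic scale and the rate mapping are illustrative assumptions.
SCALE = [60, 62, 64, 67, 69, 72]   # C major pentatonic, as MIDI note numbers
ALPHA_LO, ALPHA_HI = 8.0, 13.0     # note rate stays inside the Alpha band (Hz)

def sonify(alpha_contour):
    """Map each normalized Alpha reading (0-1) to a pitch and a note rate."""
    notes = []
    for level in alpha_contour:
        # Higher Alpha power -> higher scale degree.
        pitch = SCALE[min(int(level * len(SCALE)), len(SCALE) - 1)]
        # Note rate (notes per second) is clamped to the Alpha frequency band.
        rate = ALPHA_LO + level * (ALPHA_HI - ALPHA_LO)
        notes.append((pitch, rate))
    return notes

melody = sonify([0.1, 0.4, 0.8, 0.6, 0.95])
print(melody)  # (pitch, notes-per-second) pairs tracing the Alpha contour
```

Because every note rate falls between 8 and 13 events per second, the melody’s own rhythm remains an Alpha-band stimulus for the listener, which is what gives the entrainment argument its force.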

Future Directions

NeuroMusic has its roots in experimental music practices dating back to Alvin Lucier’s Music for Solo Performer and, more recently, the innovations of Professor Grace Leslie. My practice of NeuroMusic is developing in several different directions, thanks to collaborations with composer Derrick Skye and computational neuroscientists Ying Wu and Enrique Carillosulub. NeuroMusic will be featured in the forthcoming opera Song of the Ambassadors (2025) and in Sounding Psychedelia, an experimental performance in which EEG from a psychedelic therapy session will be recorded and used to generate an evening-length NeuroMusic concert program. Sounding Psychedelia will be presented at Harvard University on November 16th, 2024, as part of the Mahindra Humanities Center’s Psychedelics in Society and Culture initiative.

References

  1. Palumbo, A., Vizza, P., Calabrese, B., & Ielpo, N. (2021). Biopotential Signal Monitoring Systems in Rehabilitation: A Review. Sensors (Basel, Switzerland), 21(21), 7172. https://doi.org/10.3390/s21217172

  2. OpenBCI. (2023, September 20). The OpenBCI GUI. OpenBCI Documentation. https://docs.openbci.com/Software/OpenBCISoftware/GUIDocs/

  3. Nayak, C. S., & Anilkumar, A. C. (2023, July 24). EEG Normal Waveforms. In StatPearls. StatPearls Publishing. https://www.ncbi.nlm.nih.gov/books/NBK539805/

  4. Zoefel, B., Huster, R. J., & Herrmann, C. S. (2011). Neurofeedback training of the upper alpha frequency band in EEG improves cognitive performance. NeuroImage, 54(2), 1427–1431. https://doi.org/10.1016/j.neuroimage.2010.08.078

  5. Hinrikus, H., Suhhova, A., Bachmann, M., Aadamsoo, K., Võhma, Ülle, Pehlak, H., & Lass, J. (2010). Spectral features of EEG in depression. Biomedizinische Technik, 55(3), 155–161. https://doi.org/10.1515/bmt.2010.011

  6. Yeh, T.-C., Huang, C. C.-Y., Chung, Y.-A., Park, S. Y., Im, J. J., Lin, Y.-Y., Ma, C.-C., Tzeng, N.-S., & Chang, H.-A. (2023). Resting-State EEG Connectivity at High-Frequency Bands and Attentional Performance Dysfunction in Stabilized Schizophrenia Patients. Medicina (Kaunas, Lithuania), 59(4), 737. https://doi.org/10.3390/medicina59040737

  7. Large, E. W., & Snyder, J. S. (2009). Pulse and Meter as Neural Resonance. Annals of the New York Academy of Sciences, 1169(1), 46–57. https://doi.org/10.1111/j.1749-6632.2009.04550.x

  8. DiGirolamo, M. A., Simon, J. C., Hubley, K. M., Kopulsky, A., & Gutsell, J. N. (2019). Clarifying the relationship between trait empathy and action-based resonance indexed by EEG mu-rhythm suppression. Neuropsychologia, 133, 107172–107172. https://doi.org/10.1016/j.neuropsychologia.2019.107172

  9. Derks, B., Gutsell, J., Scheepers, D., Ellemers, N., & Inzlicht, M. (2013). Using EEG mu-suppression to explore group biases in motor resonance. In B. Derks (Ed.), Neuroscience of prejudice and intergroup relations (pp. 278–298). Psychology Press.

  10. Gutsell, J. N., & Inzlicht, M. (2012). Intergroup differences in the sharing of emotive states: neural evidence of an empathy gap. Social Cognitive and Affective Neuroscience, 7(5), 596–603. https://doi.org/10.1093/scan/nsr035

  11. Simon, J. C., & Gutsell, J. N. (2021). Recognizing humanity: dehumanization predicts neural mirroring and empathic accuracy in face-to-face interactions. Social Cognitive and Affective Neuroscience, 16(5), 463–473. https://doi.org/10.1093/scan/nsab014

  12. Gutsell, J. N., Simon, J. C., & Jiang, Y. (2020). Perspective taking reduces group biases in sensorimotor resonance. Cortex, 131, 42–53. https://doi.org/10.1016/j.cortex.2020.04.037

  13. Inzlicht, M., Gutsell, J. N., & Legault, L. (2012). Mimicry reduces racial prejudice. Journal of Experimental Social Psychology, 48(1), 361–365. https://doi.org/10.1016/j.jesp.2011.06.007

  14. Simon, J. C., & Gutsell, J. N. (2021). Recognizing humanity: dehumanization predicts neural mirroring and empathic accuracy in face-to-face interactions. Social Cognitive and Affective Neuroscience, 16(5), 463–473. https://doi.org/10.1093/scan/nsab014

  15. Comstock, D. C., Ross, J. M., Balasubramaniam, R., & Foxe, J. (2021). Modality‐specific frequency band activity during neural entrainment to auditory and visual rhythms. European Journal of Neuroscience, 54(2), 4649–4669. https://doi.org/10.1111/ejn.15314

  16. Okawa, H., Suefusa, K., & Tanaka, T. (2017). Neural Entrainment to Auditory Imagery of Rhythms. Frontiers in Human Neuroscience, 11, 493–493. https://doi.org/10.3389/fnhum.2017.00493

  17. Lehmann, A., Arias, D. J., & Schönwiesner, M. (2016). Tracing the neural basis of auditory entrainment. Neuroscience, 337, 306–314. https://doi.org/10.1016/j.neuroscience.2016.09.011

  18. Celesia, G. G., Hickok, G., & Afra, P. (2015). The human auditory system: Fundamental organization and clinical disorders (Third series). Elsevier.

  19. Celesia, G. G., Hickok, G., & Afra, P. (2015). The human auditory system: Fundamental organization and clinical disorders (Third series). Elsevier.

  20. NASA Space Physics Data Facility. Data sonification. https://spdf.gsfc.nasa.gov/research/sonification/

Here’s a video featuring a bare-bones musical application of Band Power data. It’s stripped down so that the movement of energy is easy to hear, and it’s accompanied by a verbal explanation of how the system functions.

This video shows the first functional implementation of the EEG patch as a generative melodic voice. Phrase structure has not yet been implemented in any significant way in this version of the patch. However, the scale used allows for a certain amount of natural lyricism, which is attributable to the energy patterns inherent in the EEG stream rather than to the processing algorithm.


This video demonstrates a different Max MSP patch, which contains a generative gamelan instrument that I designed using the motion-capture MIDI transmission method shown here. My intention for this project is to repurpose this instrument as a generative rhythmic texture, substituting EEG streams for the MIDI inputs generated by the handheld micro:bits in this demonstration. The overall texture will be accompanied by a melodic voice and a small ensemble of live musicians. Please excuse my staring eyes in this video. The computer, caffeine, and a general Frankenstein vibe all contributed to this unfortunate look.