
NeuroSync: Music and Hyperscanning

This research uses an experimental approach to investigate whether two people making music together may lead to activity in their brains becoming synchronised (neural coupling), and whether this may be linked to evidence of social bonding.

Summary of research

The project team has already piloted this methodology with 24 participants (supported by seed funding from three small research funds), successfully collecting data and carrying out an initial analysis to confirm that the proposed paradigm is feasible. Building on that pilot, this ambitious project takes a novel approach to exploring whether making music together can lead to social bonding and cohesion.

 

Both music and language are cornerstones of human expression, but their interconnectedness and mutual influence are still relatively unexplored. A study by Robledo and colleagues (2021) highlighted how collaborative musical improvisation between individuals (who were not musicians) increased their speech and movement coordination during subsequent verbal conversations. Using motion capture, audio, and video recordings, the research revealed that pairs of unfamiliar individuals showed greater motor coordination after engaging in musical improvisation than similar pairs who completed a non-musical joint motor task. The significant convergence between motor and vocal behaviours hints at shared mechanisms facilitating this cross-domain influence, in which music shapes language.
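As an illustration of the kind of analysis such behavioural recordings invite (not Robledo et al.'s actual pipeline), one common way to quantify interpersonal motor coordination is to correlate the two partners' movement-speed time series across a small range of time lags. The Python function below is a minimal sketch; the array names, 60 Hz sampling rate, and one-second lag window are illustrative assumptions.

import numpy as np

def lagged_coordination(speed_a, speed_b, fs=60.0, max_lag_s=1.0):
    """Peak normalised cross-correlation between two movement-speed series
    within +/- max_lag_s seconds (a simple index of motor coordination)."""
    # Z-score each series so the products below behave like correlations.
    a = (speed_a - speed_a.mean()) / speed_a.std()
    b = (speed_b - speed_b.mean()) / speed_b.std()
    max_lag = int(max_lag_s * fs)
    corrs = []
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            corrs.append(np.mean(a[-lag:] * b[:lag]))
        elif lag > 0:
            corrs.append(np.mean(a[:-lag] * b[lag:]))
        else:
            corrs.append(np.mean(a * b))
    # The maximum over lags tolerates one partner leading the other slightly.
    return max(corrs)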

 

This study aims to delve deeper into the underlying mechanisms that drive physical and speech cohesion, focusing in particular on potential neural oscillators. Previous research has shown that speech and motor synchrony can be characterised by corresponding synchronisation of brain waves between interactants (Kawasaki, Kitajo, & Yamaguchi, 2018; Koul et al., 2023). Similarly, musicians have been shown to exhibit synchronous neural activity when playing together (Müller & Lindenberger, 2023; Zamm et al., 2021). Such work demonstrates the application of electroencephalography (EEG) "hyperscanning" (recording two brains at once) to study either expert musical interaction or speech. However, to our knowledge, no study has yet explored the role of neural oscillators in both non-expert spontaneous (i.e., unscripted, not score-based) music-making and spontaneous conversation at the same time, nor, consequently, their interrelation. By combining simultaneous EEG hyperscanning with Robledo et al.'s (2021) paradigm, the project aims to investigate the shared synchronous neural and psychological components of music and speech, assessing whether the level of synchronisation between neural oscillators aligns with previous behavioural observations.
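To make the notion of inter-brain synchronisation concrete, the sketch below shows one widely used hyperscanning measure, the phase-locking value (PLV), computed between a band-pass-filtered EEG signal from each partner. It is a minimal illustration under stated assumptions, not the project's analysis pipeline; the 250 Hz sampling rate, alpha-band edges, and variable names are placeholders.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def interbrain_plv(eeg_a, eeg_b, fs=250.0, band=(8.0, 13.0)):
    """Phase-locking value between two (pre-cleaned) EEG signals,
    one from each partner, in a given frequency band."""
    # Band-pass filter both signals (here, the alpha band by assumption).
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    xa = filtfilt(b, a, eeg_a)
    xb = filtfilt(b, a, eeg_b)
    # Instantaneous phase of each signal via the analytic (Hilbert) signal.
    phase_a = np.angle(hilbert(xa))
    phase_b = np.angle(hilbert(xb))
    # PLV: 1 = perfectly consistent phase difference, 0 = no consistent relation.
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))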

 

This project will contribute significantly to the empirical examination of neural oscillators in non-expert joint musical improvisation. By comparing conversational synchrony across multiple domains before and after a joint task (musical improvisation or a collaborative creative task), we will be able to investigate the uniqueness of music as a social facilitator. Additionally, the research seeks to deepen our understanding of the relationship between neural oscillators and vocal prosody: we will measure patterns of brain activity both during music-making and during conversation. Should the data align with expectations, the results could challenge prevailing notions about transfer effects, potentially reshaping our understanding of music and language as discrete domains with separate influences.
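The pre/post comparison described above can be illustrated with a simple paired analysis: one conversational-synchrony score per pair before the joint task and one after, compared with a paired test. The sketch below uses placeholder values, not project data.

import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
# Placeholder synchrony scores for 24 pairs, before and after the joint task.
pre_scores = rng.normal(0.30, 0.10, size=24)
post_scores = rng.normal(0.40, 0.10, size=24)

# Paired test of the pre-to-post change in conversational synchrony.
t_stat, p_value = ttest_rel(post_scores, pre_scores)
print(f"Change in synchrony: t = {t_stat:.2f}, p = {p_value:.3f}")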
