How do infants recognize songs?

Together with Ivonne Weyers and Jutta Mueller from the Psycholinguistics Laboratory Babelfisch, Anja-Xiaoxing Cui is investigating how infants recognize songs: via their melodies or via their lyrics. Using EEG and fNIRS alongside an eye tracker, we can assess infants' brain activity together with their behaviour. If you are interested in participating with your infant in our experiments, you can sign up for the participant database of the Psycholinguistics Lab Babelfisch and the Wiener Kinderstudien of the University of Vienna.

Living in a box

In a long-term joint project together with Vito Giordano and Lisa Bartha-Doering (Developmental Cognitive Neuroscience Lab, University Hospital Vienna), Philipp Deindl (University Hospital Hamburg), and Matthias Bertsch (Motion-Emotion-Lab, MDW), Christoph Reuter, Isabella Czedik-Eysenberg, Matthias Eder and Felix Klooss measure the sound environment of incubators and its impact on the hearing ability and language acquisition of preterm infants. So far, we have measured numerous acoustic parameters in the NICU environment as well as audio features that could be important for acoustic pain detection in term newborns, created a 360° VR experience in which you can enter the box of an incubator with the help of VR glasses, and investigated the sound levels and spectral distribution of various ventilation noises.

Unveiling the Mystery of Harmony

The idea that harmony is separable from other musical elements, such as rhythm or timbre, has been a key assumption in Western music theory, education, and cognition research. While convenient and perhaps useful, the notion of separability may be detrimental, as it neglects the complex integration of these elements in musical perception. Isabella Czedik-Eysenberg and Christoph Reuter collaborate with Ivan Jimenez, Tuire Kuusi and Juha Ohala at Sibelius Academy, Uniarts Helsinki and in a larger international research team to investigate how extra-harmonic features like timbre affect the perception of harmony and how this is modulated by familiarity and expertise. In this endeavour, we explore, for example, which factors contribute to the identification of songs from their piano-driven opening chords, or the effect of timbre on Leman's model of Periodicity Pitch. Please contact us if you are interested in participating in any current experiments!

More details on the research project

How Anatomy Influences the Human Voice...

The underlying connections between the vocal anatomy and the sound it produces are complex. In ongoing experiments and sound analyses, Marik Roos, Veronika Weber, Christoph Reuter and Isabella Czedik-Eysenberg explore timbre changes resulting from different methods of surgical alteration to the vocal tract. By comparing five different methods of Voice Feminization Surgery (CTA, FemLar, FLT, VFSRAC, and VFW), we have already found connections between certain changes in psychoacoustic parameters (such as formant structures and Mel Frequency Cepstral Coefficients) and specific alterations to human physiology (such as decreasing cartilage mass, narrowing the larynx, and the position and method of shortening the vocal folds).
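To give a flavour of the kind of spectral analysis behind such psychoacoustic parameters, here is a minimal sketch (with a synthetic test tone, not data from the actual study): it locates the strongest spectral peak of a vowel-like signal with a plain DFT, a crude stand-in for the formant and MFCC analyses that dedicated audio tools perform.

```python
import math

SR = 8000   # sample rate in Hz (illustrative choice)
N = 1024    # analysis window length in samples

# Synthetic "vowel": fundamental at 220 Hz plus a stronger partial at
# 880 Hz, loosely mimicking a formant-like energy concentration.
signal = [math.sin(2 * math.pi * 220 * n / SR)
          + 2.0 * math.sin(2 * math.pi * 880 * n / SR)
          for n in range(N)]

def dft_magnitude(x, k):
    """Magnitude of DFT bin k of signal x (naive, O(N) per bin)."""
    re = sum(v * math.cos(2 * math.pi * k * n / len(x)) for n, v in enumerate(x))
    im = -sum(v * math.sin(2 * math.pi * k * n / len(x)) for n, v in enumerate(x))
    return math.hypot(re, im)

# Search bins up to 2 kHz for the strongest spectral peak.
max_bin = int(2000 * N / SR)
peak_bin = max(range(1, max_bin), key=lambda k: dft_magnitude(signal, k))
peak_hz = peak_bin * SR / N   # close to 880 Hz for this signal
print(f"strongest partial near {peak_hz:.0f} Hz")
```

Real analyses would of course use windowing, FFTs, mel filter banks, and cepstral transforms; this sketch only shows the underlying idea of reading timbre-relevant information from a spectrum.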

Singing-based training for musical memory in Alzheimer's Disease

Together with Lola Cuddy at Queen's University, Canada, and Ashley Vanstone at the University of Bath, UK, and as part of the M4M research team including Christian Gold at the Norwegian Research Center, Norway, and Marcela Lichtensztejn at the Universidad de Ciencias Empresariales y Sociales, Argentina, Anja-Xiaoxing Cui explores engagement with music in Alzheimer's Disease and whether this engagement and a singing-based training program may stimulate musical memory. We use novel methods to explore different paths to music engagement and are currently running an EEG study at the MediaLab to see whether we can find a neural signature of musical memory. Stay tuned as we discover more about musical memory in Alzheimer's Disease!

Singing in your brain and with your heart

As part of the Peter Wall Opera Project team, Anja-Xiaoxing Cui collaborates with Lara Boyd, Nancy Hermiston, Negin Motamed Yeganeh, and Janet Werker at the University of British Columbia, Vancouver, as well as Sarah Kraeutner at the University of British Columbia, Kelowna, on a number of studies dedicated to understanding how the brain changes in response to singing training and what influences singers' cardiac activity while they are on stage. We ask, for example: Which brain areas are functionally connected, and how does this correspond to an individual's singing abilities? How do singers' cognitive skills influence their cardiac activity during a performance? And how do performance parameters themselves influence cardiac activity? For more research outputs, have a look at the SInES team's publications page!

If Instruments could talk...

Evaluation of the perceived vowel similarity of musical instruments: an online listening test by Christoph Reuter and Isabella Czedik-Eysenberg together with Charalampos Saitis (Digital Music Processing, Queen Mary University of London) and Kai Siedenburg (Communication Acoustics, TU Graz):
In an online experiment, German native speakers listen to the sounds of oboe, clarinet, flute, bassoon, trumpet, trombone, French horn, tuba, violin, viola, cello, and double bass in three registers and two dynamic levels. Their task is to assign the following vowels and umlauts (in German pronunciation) to instrument sounds: A, Å, E, I, O, U, Ä, Ö, and Ü. Furthermore, participants rate the strength of vowel similarity. Preliminary analyses (of n=64 participants) suggest that vowel similarities could indeed be found for woodwind and brass instruments, as well as for violin and viola, while the question of the vowel similarities of trombone, violoncello and double bass is still open.
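One simple way to summarise such assignment data is to tally, for each instrument, which vowel participants chose most often and how strongly they agreed. The sketch below illustrates this with made-up responses (the actual study data is not reproduced here):

```python
from collections import Counter

# Hypothetical raw responses: (instrument, assigned vowel) pairs,
# one per participant judgment. Illustrative values only.
responses = [
    ("oboe", "Ä"), ("oboe", "Ä"), ("oboe", "E"),
    ("horn", "O"), ("horn", "U"), ("horn", "O"),
    ("flute", "U"), ("flute", "Ü"), ("flute", "U"),
]

def modal_vowel(instrument, data):
    """Most frequently assigned vowel for one instrument,
    together with the share of responses that chose it."""
    counts = Counter(v for inst, v in data if inst == instrument)
    vowel, n = counts.most_common(1)[0]
    return vowel, n / sum(counts.values())

for inst in ("oboe", "horn", "flute"):
    vowel, share = modal_vowel(inst, responses)
    print(f"{inst}: {vowel} ({share:.0%} agreement)")
```

A low agreement share for an instrument would point to exactly the kind of open question mentioned above for trombone, cello and double bass.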

Audio-textual Correlations in Popular Music

Together with Oliver Wieczorek (INCHER, University of Kassel), Isabella Czedik-Eysenberg and Christoph Reuter are interested in unveiling the systematic relations between textual and acoustic features in popular music corpora by combining methods of quantitative text analysis and audio feature extraction. Together with Arthur Flexer (Johannes Kepler University Linz, Austria), we study textual topics found in metal music and how they relate to the hardness/heaviness of the music. Further, we study how high-level audio features and textual topics are correlated in popular music and how this might relate to the popularity of songs, and how the textual content and emotional mood of songs may have changed during the COVID-19 pandemic.
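At its core, relating an audio feature to a textual topic comes down to correlating two per-song measurements. The following sketch computes a Pearson correlation between a hypothetical acoustic "hardness" score and the weight of a text topic (all numbers are made up for illustration, not results from our corpora):

```python
import math

# Hypothetical per-song values: an acoustic "hardness" score and the
# weight of one textual topic in the same song's lyrics.
hardness    = [0.82, 0.35, 0.67, 0.91, 0.20, 0.58]
topic_share = [0.40, 0.10, 0.33, 0.52, 0.05, 0.25]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(hardness, topic_share)
print(f"r = {r:.2f}")   # strongly positive for these made-up values
```

In the actual studies, the feature sets are far richer on both sides (topic models for the lyrics, extracted high-level audio descriptors for the recordings), but the statistical backbone is the same kind of association measure.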

For Whom the Bell Tolls

Together with Michael Plitzner and Andreas Rupp (European Competence Centre for Bells, ECC-ProBell®), Christoph Reuter, Marik Roos, Saleh Siddiq and Isabella Czedik-Eysenberg are working on a model of the perceived quality and pleasantness of church bell chimes. With the help of audio signal analysis, we are looking into early crack detection in church bells as well as into listeners' preferences for bell sounds depending on their origin. At the turn of the year (2023 to 2024), Michael Plitzner and Christoph Reuter put the historic bell tower of Vienna's St. Stephen's Cathedral and its bells online as a 360° application (with VR glasses it can also be viewed in 3D). If you move through the belfry and click on the bells, you will see the respective attachment points for a bell test. Of course, the acoustic measurement of the Pummerin should not be missing (360° video by Matthias Bertsch).

Extreme Metal Vocals

Together with Eric Smialek and Jan Herbst at the University of Huddersfield, UK, Isabella Czedik-Eysenberg explores extreme metal vocals with regard to their musical expression, technique and cultural meaning. Via audio feature analysis and perceptual experiments, we try to map the acoustic and semantic space of extreme singing styles and effects.
If you want to support our research, please listen to snippets of metal vocals and take part in our currently running association task!

Predictive Processing of Music

After successfully operationalising the construct of Predictive Processing (or rather the impairment of this cognitive ability as an underlying cause of neurodiverse conditions such as ADHD and Autism Spectrum Condition) for introspective measurement, this project currently investigates different kinds of activity in the Locus Coeruleus-Norepinephrine System during music reception. Differences in these activities are linked not only to neurodiversity but also to impaired predictive processing resulting from different amounts of available information (e.g. musical experience and education). If you are interested in participating in our current eye tracking study, please contact Marik Roos.