Psychology, in conjunction with neuroscience and computational modelling, has helped us understand how the brain processes music and why we love it so much.
March 1985 was a turning point in the life of renowned British musicologist, conductor and pianist Clive Wearing. At the peak of his musical career, he contracted a virus that infected his brain, leaving him with chronic retrograde and anterograde amnesia. Wearing’s short- and long-term memory were nearly destroyed; ever since, he has retained awareness of each moment for only 7 to 30 seconds before it is forgotten. But there was a silver lining: when Wearing went to play or conduct music from his life before the amnesia, he could do so with remarkable recall and sustained concentration, although the memory of the performance would be lost in the following moments.
Most of us will never reach Wearing’s level of musical literacy, but we might agree that music is central to our lives; our most cherished moments and emotional states are often accompanied by melody. That is not to say that our individual preferences aren’t idiosyncratic; we don’t always know why we like particular sounds, instead asserting about our favourite music: ‘It just sounds good’. Wearing’s case highlights the psychological and neurological underpinnings of this age-old art. Combined with the insights of computational neural modelling, it can help us begin to demystify why we are so profoundly moved by the full spectrum of musical sound.
When defining music, a first and crucial consideration seems to be the process of repetition. Elizabeth Margulis, Professor of Music Cognition at the University of Arkansas, observes in her book On Repeat: How Music Plays the Mind that, while repetition is the acknowledged defining feature of music shared by all cultures, historically it has been disregarded. Musical repetition was once even described as “the emanation of a disordered brain,” a feature not tolerated in other expressive forms. But whereas language is crucial to human identity and communication, says Margulis, music is connected to pleasure circuitry in the brain that evolved for other purposes.
In fact, repetition is also central to a currently favoured theory of how the mind works: the ‘Bayesian brain’ model. The idea is that the mind makes calculated assumptions about a given situation based on previous experiences and the probabilistic rules of inference. A previous instance that shapes how we judge subsequent events is called a ‘prior’.
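As a rough illustration, the updating of a listener’s prior can be sketched in a few lines of Python. The rhythm labels and probability numbers here are invented for the example, not drawn from any real study; only the arithmetic of Bayes’ rule is doing the work:

```python
# A minimal sketch of Bayesian updating: a listener's 'prior' over two
# candidate rhythms is revised each time new evidence arrives.
# The hypothesis names and likelihood values are purely illustrative.

def bayes_update(prior, likelihood):
    """Return the posterior P(hypothesis | evidence) for each hypothesis."""
    unnormalised = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# Start agnostic between a simple (1:1:1) rhythm and a complex one.
prior = {"simple": 0.5, "complex": 0.5}

# Suppose the patterns a listener hears fit the simple rhythm far more often.
likelihood = {"simple": 0.8, "complex": 0.2}

for _ in range(3):          # three repeated exposures
    prior = bayes_update(prior, likelihood)

print(prior)  # belief now heavily favours the simple rhythm
```

With each repetition the same evidence compounds, which is one way to see why repeated exposure entrenches a preference.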
A memory for a piece of music can be thought of as a pattern representation kept in the brain; the more we encounter that given combination or pattern of sensory events, the stronger the neural associations which represent them. In the first instance, we process incoming sound with echoic memory that is fleeting, but, through repetition and exposure, a piece of music can be stored in our memory long term as an engram. This can be thought of as a physical representation or ‘trace’ of the sensory experience within the actual neural architecture of the brain. Going back to the case of Wearing, after years of musical practice he became so well-rehearsed that his musical knowledge became implicit memory — a kind of deeply encoded unconscious process not unlike writing or riding a bike for the typical adult.
Consider research recently conducted at MIT: a study found that the brain shows a bias for rhythms consisting of simple integer ratios, irrespective of our cultural origins. In the study, Westerners unconsciously preferred simple patterns such as a four-beat combination separated by equal time intervals (for example, a 1:1:1 ratio). Another participating group, the Tsimané tribe in Bolivia (culturally distinct because of their limited Western exposure), also preferred simple integer-ratio rhythms, but in distinct combinations. The Western group had an unconscious bias for rhythms most often heard in Western music, and the Bolivian group preferred the distinct rhythms more frequently found in their own music.
The difference in preferences between the two groups was thought to result from previous exposure within their respective cultures to particular rhythms, which became priors. The clever method used to determine these unconscious biases was to test participants with randomly generated series of four beats while asking them to tap out the rhythms they heard. After their tapping responses were recorded enough times, the researchers could take averages of the participants’ responses to infer their internal bias towards a particular rhythm.
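That iterated tap-back procedure can be simulated in miniature. In this sketch, the listener’s equal-interval prior, the bias strength and the motor-noise level are all invented numbers; only the logic (hear a rhythm, reproduce it with a slight pull toward the internal prior, feed the reproduction back in) mirrors the study’s method:

```python
import random

def tap_back(intervals, bias_strength=0.3):
    """One simulated tapping response: the heard intervals are nudged
    toward the listener's internal prior (here, equal 1:1:1 spacing,
    i.e. the three intervals between four beats) plus motor noise.
    All parameter values are illustrative, not taken from the study."""
    prior = [1/3, 1/3, 1/3]
    noisy = [
        (1 - bias_strength) * heard + bias_strength * p + random.gauss(0, 0.01)
        for heard, p in zip(intervals, prior)
    ]
    total = sum(noisy)
    return [x / total for x in noisy]   # intervals as fractions of the cycle

random.seed(0)
intervals = [0.6, 0.25, 0.15]           # a randomly chosen seed rhythm
for _ in range(20):                     # iterated reproduction
    intervals = tap_back(intervals)

print([round(x, 2) for x in intervals]) # drifts toward equal 1:1:1 spacing
```

After enough rounds, the random seed rhythm has been pulled to the listener’s prior, which is exactly the signal the researchers averaged out of their participants’ responses.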
Our experience inevitably leads to musical biases at a fundamental level. In fact, within Western culture the tuning of musical instruments is often taken as a given, although it is not in fact universal. Western instruments such as the piano are divided into octaves that adhere to a chromatic scale of twelve pitches, each a semitone apart. In contrast, Persian and Arabic musical styles often contain note intervals halfway between the semitone (the quarter tone), which makes for a distinctive sound.
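The arithmetic behind this tuning system (equal temperament) is simple enough to compute directly: each semitone multiplies a note’s frequency by the twelfth root of two, so that twelve steps exactly double it, and a quarter tone is half that step:

```python
# Equal-temperament pitch: each semitone multiplies frequency by 2**(1/12),
# so twelve semitones double the frequency (one octave). A quarter tone,
# as heard in Persian and Arabic music, is the twenty-fourth root of two.

A4 = 440.0  # concert pitch, Hz

def semitones_above_a4(n):
    return A4 * 2 ** (n / 12)

print(semitones_above_a4(12))                 # 880.0 Hz, the octave above A4
print(round(semitones_above_a4(3), 2))        # 523.25 Hz, C5

quarter_tone = A4 * 2 ** (1 / 24)
print(round(quarter_tone, 2))                 # 452.89 Hz, between A4 and A#4
```

That last pitch falls between two adjacent piano keys, which is why quarter-tone music sounds ‘off’ to ears raised on the chromatic scale.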
Given our proneness to unconscious influence, there may be good reason why simplistic, repetitive pop tunes become earworms stuck in our heads. The simplest repeating patterns are likely the ones we have encountered most often, which helps reinforce them in our minds. We are more likely to prefer what is familiar to us, a psychological process known as the ‘mere-exposure effect’. This natural habit could help explain why popular music has become increasingly repetitive over the past several decades; indeed, data from music apps can now be used to predict which songs will be most popular. It may seem less surprising, then, that repetitive lyrics and beats in music increase our processing fluency, the facility with which we can take in and later recall the information.
Adolescence is a critical period for developing lifelong musical preferences. This is partly because the brain’s emotional processing and reward centre, the limbic system, is developing at this age, well before the higher, abstract regions of the brain. This means that intense connections between emotion and music are being made. The brain undergoes massive structural changes between the ages of 12 and 16.5, during a developmental stage of ‘synaptic pruning’. In this process, the total number of neural connections is pared back, and those which are most active are prioritised for the projected needs of adulthood, based on existing environmental cues.
Computational models of neural networks demonstrate how the brain processes and recalls musical memories and makes associations to them. For example, the Sparse Distributed Memory model can reliably represent how the brain accesses memory by associating one thought pattern to another in a ‘best match’ selection process. We can think of each piece of sound as a ‘bit’ coming in through the auditory pathways, which will generate a particular pattern of neuronal activation. If this pattern is similar enough to one held in memory, it may be recalled and associated. When one thought or idea gets linked to another by association it is sometimes called a Höffding step; this is the process that allows an aficionado of the Beatles to hear “Revolution 9” and immediately make the mental link to The White Album.
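A toy version of that ‘best match’ selection can be written in a few lines. The bit patterns assigned to each song below are entirely invented; the point is only the mechanism, in the spirit of Sparse Distributed Memory, of matching an incoming activation pattern to the closest stored one:

```python
# Toy 'best match' recall: an incoming activation pattern (a bit string)
# is matched to the stored pattern with the smallest Hamming distance.
# The song patterns are invented purely for illustration.

def hamming(a, b):
    """Number of positions at which two bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

memory = {
    "Revolution 9":  "1011010011",
    "Strange Fruit": "0100101100",
}

def recall(cue, threshold=3):
    """Return the best-matching stored item, if it is close enough."""
    best = min(memory, key=lambda name: hamming(cue, memory[name]))
    return best if hamming(cue, memory[best]) <= threshold else None

print(recall("1011010111"))  # one bit off "Revolution 9" -> still recalled
print(recall("0000011111"))  # too far from everything -> None
```

The threshold captures the intuition that a pattern must be similar enough to something held in memory before the association, the Höffding step, fires at all.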
We know that the brain identifies and categorises sounds; technology helps us visualise and better understand the way this might happen. For example, bits of acoustic information, such as a singer’s voice or an instrument recorded in a ballad, can be identified and classified with auditory sound analysis. A mathematical conversion known as a Fourier Transform is used to decompose a sound signal into its various frequencies and create a spectral representation of the data, called a voiceprint or spectrogram. Each acoustic combination can then be thought of as a ‘fingerprint’, able to be identified algorithmically and matched to others sharing the same signature features.
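A minimal sketch of that decomposition, using NumPy’s FFT on a synthetic two-tone signal (the sample rate and the two frequencies are chosen arbitrarily for the example):

```python
import numpy as np

# Synthesise one second of two mixed tones, then use the Fourier
# transform to recover which frequencies are present - the first step
# toward a spectrogram or acoustic 'fingerprint'.

rate = 8000                              # samples per second
t = np.arange(rate) / rate               # one second of time points
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)

spectrum = np.abs(np.fft.rfft(signal))   # magnitude at each frequency bin
freqs = np.fft.rfftfreq(len(signal), d=1 / rate)

# The two strongest bins land on the tones we mixed in.
peaks = sorted(round(float(f)) for f in freqs[np.argsort(spectrum)[-2:]])
print(peaks)                             # [440, 660]
```

A spectrogram simply repeats this transform over short, successive windows of the signal, producing the time-frequency image that matching algorithms treat as a fingerprint.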
This technology has a great number of potential applications relevant to human psychology and society. Newly developed apps such as Beyond Verbal use sound analysis for emotional analytics; vocal intonations can be analysed to determine the emotional state, or even the health status, of the speaker. This supports our long-held intuitive sense that certain pieces of music and sound convey a particular feeling expressed by the performer.
As a species with advanced communication needs, we are highly discriminating about the sound waves we hear within our narrow spectral range. We also differ in how perceptive we are to changes in tone, depending on prior experience, attentional biases and predispositions. Perception is particularly interesting with respect to music. Imagine, for instance, hearing Bob Dylan’s earnest voice make history in his famous song “The Times They Are a-Changin’”, only to realise later that you can’t recall a word he sang. Our brains allow us to take in entire moments of music, and yet some features of sound can remain under our threshold of awareness.
It is true that we often associate particular types of music with feelings and memories from our own unique background. But individuals with a rare condition called chromesthesia (a kind of synaesthesia) take this idea a step further: they naturally perceive colour when they hear sound. Many well-known composers and musicians have been inspired chromesthetes: Nikolai Rimsky-Korsakov, Franz Liszt and Olivier Messiaen, to name a few. The ability to perceive colour and sound simultaneously underscores an important point about how the mind interprets sound: Although our perceptions of sounds and colours are indeed real, they are also interpretive projections of the mind about the outside world. What we perceive is actually a transduction of the senses, contingent on a degree of subjectivity.
A fair conclusion about our love for music could be this: we don’t hear a string of bits when we listen to music; we hear a collective movement. That is why, listening to Nina Simone, we are suspended in sounds moving through time in beautiful, unified harmony. How does the brain bind Nina’s haunting voice to the echoing timbre of the piano, so that they become much more than a collection of instruments and words: the totally absorbing, morose crescendos heard nowhere else quite the same as in ‘Strange Fruit’? This is one aspect of the science of music that has yet to be resolved, although psychology and neuroscience have driven our understanding forward.
Herein lies an insight into our very consciousness, and also a suggestion for what it means to be human. Music draws us into that deeply felt but nebulous concept of “I,” a total unity of the senses that trumps our intellectualising. To merely discuss and conjecture about music would be to miss the forest for the trees entirely. In the end, music is an experience, the juncture between logic and soul.
Edited by Andrew Katsis