12. Listening: The Cocktail Party Effect

The brain's ability to focus auditory attention on a particular stimulus while filtering out other stimuli, such as background chatter, is known as the Cocktail Party Effect.

Top (L-R): MASH [1970] • McCabe & Mrs. Miller [1971] Bottom (L-R): Thieves Like Us [1974] • Nashville [1975] dir. Robert Altman

Robert Altman was interested in creating realistic multi-character dialogue within a given scene. His use of multiple radio microphones to create naturally overlapping conversations is well-documented. Less noted is the performance style of many of his actors.

Dialogue often lacks theatrical projection; lines are mumbled and uttered quietly, mixing with the dialogue of other characters nearby. There is an intended naturalism to these conversations that feels closer to Cinéma Vérité documentary-filmmaking than to classical Hollywood style.

For the modern viewer used to ultra-close and controlled dialogue (a result of practical production sound concerns in commercial filmmaking as much as stylistic convention), the sum effect of this overlapping, under-projected naturalism can be a sense of compromised intelligibility - we’re not always sure exactly what’s being said. *

The Altman approach to dialogue naturalism is further extended by his casting of non-professional actors, who are less accustomed to theatrical voice projection. The naturalism is reinforced technically, both in-camera and in post-production: wider ensemble shots are favoured over single-character close-ups, while the dialogue mix is less concerned with the total intelligibility of every uttered word.

* Director Christopher Nolan's so-called ‘punk style’ might be considered a recent exception to the Hollywood drive for ultra-intelligibility, though whether his approach is entirely effective for the kind of films he makes is another question.

11. Listening: The Ventriloquist Illusion

Our senses are connected. They function at the nexus of lived experience: the synergy of mind, body and world. Multiple sensory stimuli are integrated by the nervous system to produce meaningful perceptual experiences.

In his 2011 book Incognito: The Secret Lives of the Brain, David Eagleman describes how the visual field informs our auditory experience:

“The different senses influence one another, changing the story of what is thought to be out there. What comes in through the eyes is not just the business of the visual system—the rest of the brain is invested as well. In the ventriloquist illusion, sound comes from one location (the ventriloquist’s mouth), but your eyes see a moving mouth in a different location (the ventriloquist’s dummy). Your brain concludes that the sound comes directly from the dummy’s mouth. Ventriloquists don’t “throw” their voice. Your brain does all the work for them.”

Multisensory illusions like the Ventriloquist Illusion and the McGurk Effect - where the auditory component of one speech sound is paired with the visual component of another, leading to the perception of a third sound - suggest that vision greatly influences auditory perception.

10. Listening: The Ear

Sound is vibration. Sound waves propagate through a medium, such as air or water, via oscillating molecules.
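A minimal formulation of the relationship between a sound's speed, frequency and wavelength (the figures below are standard reference values for air, added here purely for illustration):

```latex
% Speed of sound v, frequency f and wavelength lambda:
\[ v = f\lambda \]
% In dry air at roughly 20 C, v is approximately 343 m/s, so a 440 Hz
% tone (concert A) has a wavelength of about:
\[ \lambda = \frac{v}{f} \approx \frac{343\ \mathrm{m/s}}{440\ \mathrm{Hz}} \approx 0.78\ \mathrm{m} \]
```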

Light reaches the human retina within a roughly 180-degree, forward-facing field of view. Sound reaches the ears from all directions at all times. Artist Christian Marclay once commented:

“I think it is in sound’s nature to be free and uncontrollable and to go through the cracks and to go places where it’s not supposed to go.” 

The eye can shut out external stimuli with the aid of an eyelid, while the ear remains in a state of permanent receptivity.

9. Listening: Evolution

Humans are accustomed to hearing sounds from everywhere at any time. The sources of these sounds are not always discernible to the eye. 

A developed auditory sense has allowed humans to gather information from their surroundings. Hearing acts as a kind of early-warning system, enabling humans to identify the general direction of a sound and react to it before visual confirmation is needed.
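One well-studied cue for this direction-finding is the interaural time difference: a sound arrives at the nearer ear fractionally before the farther one. A minimal sketch, assuming a simplified straight-path model and a head width of about 0.2 m:

```latex
% Interaural time difference (ITD) for a source at azimuth theta,
% with head width d and speed of sound c:
\[ \mathrm{ITD} \approx \frac{d \sin\theta}{c} \]
% For d = 0.2 m and c = 343 m/s, a source directly to one side
% (theta = 90 degrees) yields a maximum ITD of roughly:
\[ \frac{0.2\ \mathrm{m}}{343\ \mathrm{m/s}} \approx 0.6\ \mathrm{ms} \]
```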

Our causal mode of listening has helped humans evade predators and navigate hostile environments. Language introduces semantic modes of listening that have supported mankind’s growing need to understand and cooperate with one another in an increasingly complex and socialised world.

In the course of everyday life these habitual listening modes - the causal and the semantic - are often activated and combined simultaneously. Chion writes: “We hear at once what someone says and how they say it.”

8. Listening: Quiet Silence

The quiet can be unsettling, disorientating. The absence of sound can suggest social isolation, remoteness.

Quiet spaces can reveal the sonic activity already present within them: a naked cough is exposed in the hush of the library; a shuffling of feet suddenly violates the quiet.

Carnival of Souls [1962] dir. Herk Harvey

Sudden, unexpected changes to a sensory input arouse attention, and can manifest as feelings of discomfort and fear. In the 1962 horror film Carnival of Souls, organist Mary Henry emerges from a department store changing room only to discover that the world has suddenly fallen silent. She can no longer hear anything from the environment around her. All is mute. The only sounds she can hear are her own voice and footsteps. Later she reports to the doctor:

"It was more than just not being able to hear anything. Or make contact with anyone. It was though...as though for a time I didn't exist. As though I had no place in the world. No part of the life around me.”

Derealisation is an alteration in one's perception of the world. Depersonalisation is an alteration in one’s perception of self, often experienced as observing one's own body and mind from outside, at a distance. Such dissociative disorders are ways for the mind to cope with stress and trauma. The manipulation of dissociative states through targeted sound design (e.g. changing sound levels, the use of silence) is commonly used to heighten subjective states in many genres of narrative filmmaking.

Ikiru [1952] dir. Akira Kurosawa

In Ikiru [1952], director Akira Kurosawa allows the dreaded news of the protagonist’s health to hang in the air in silence. Lost in thought, Watanabe leaves the hospital, slowly exiting onto a city street devoid of all sound. A large truck suddenly passes by, awakening him from his introspection. The cacophony of the city violently returns.

For Alfred Hitchcock, the artificial silencing of a victim at a particular moment operates at the most provocative level - the spectator is affected not by what is seen or heard, but by what is imagined. In a famous scene from Frenzy [1972], Hitchcock abruptly cuts the sound of the outside world after Barbara "Babs" Milligan enters the murderer’s flat. As the door closes, the camera smoothly and silently tracks back down the stairs before slowly returning to the bustling city life outside. Here the use of silence over the continuous tracking shot heightens the sense of the grim, inevitable fate that awaits Babs. She is alone and helpless. No one outside is aware of what is about to happen. No one, that is, except the spectator, who plays out the scene in their own mind.

Discussing the unique role sound performs in his own equally violent film Benny’s Video [1992], Austrian director Michael Haneke elaborates on this Hitchcockian approach:

“With an image, you cut the imagination short. With an image, you see what you see and its 'reality'. With sound, just like words, you incite the imagination. And that’s why for me it's always more efficient, if I want to touch someone emotionally, to use sound rather than image.”

Frenzy [1972] dir. Alfred Hitchcock

7. Listening: Stochastic Resonance

The modern, mobile knowledge worker seeks an ideal workspace away from the isolation and monotony of the home. For many people such a space is one with an optimal level of background noise.

In our city spaces, cafes, galleries and bars, an animated conversation nearby can prove distracting. Attention drifts towards audible phrases and fragments that spill over into one’s own private acoustic zone. Noise-cancelling headphones soundproof us from such interference.

And yet, small amounts of noise can be beneficial to our senses. First observed in studies of animal behaviour, Stochastic Resonance is the phenomenon whereby a sensory signal can be enhanced by a certain optimal level of noise in a system. Studies show that a low-to-moderate level of ambient noise in public spaces like a coffee shop can actually boost abstract thinking and creativity. The variety of visual and auditory stimuli in a public setting can also stimulate creative thinking, while the physical presence of other people can act as a motivating factor to work more effectively.
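A minimal sketch of the effect in code (the parameter values are arbitrary choices for illustration, not drawn from any study): a signal too weak to cross a detector's threshold on its own becomes detectable with a moderate amount of added noise, while heavy noise drowns it out again.

```python
# Stochastic resonance sketch: a sub-threshold 5 Hz signal is run through
# a simple threshold detector at three noise levels. The detector output
# correlates best with the signal at the moderate noise level.
import numpy as np

rng = np.random.default_rng(0)

fs = 1000                                   # sample rate (Hz)
t = np.arange(0, 10.0, 1 / fs)              # 10 seconds of samples
signal = 0.8 * np.sin(2 * np.pi * 5 * t)    # peaks at 0.8: below threshold
threshold = 1.0                             # detector fires above this level

for sigma in (0.0, 0.4, 3.0):               # none, moderate, heavy noise
    noisy = signal + rng.normal(0.0, sigma, t.size)
    fired = (noisy > threshold).astype(float)
    # Correlation between detector output and the clean signal: a crude
    # measure of how much of the signal "gets through" the threshold.
    corr = 0.0 if fired.std() == 0 else np.corrcoef(fired, signal)[0, 1]
    print(f"noise sigma = {sigma:.1f}  ->  correlation = {corr:.2f}")

# Expected pattern: zero correlation with no noise (the detector never
# fires), a clear peak at moderate noise, and a fall-off as noise dominates.
```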

6. Listening: Head Space

Erik Satie dreamed of his music being played everywhere. Today we listen as we go about our business. 

Portable, pre-recorded audio allows us to “tune” the environment we inhabit. This creates a first-person, user-defined sonic ambience, or ‘furniture music’, that can accompany or facilitate whatever other activities one might be involved in.

Simon Killer [2012] dir. Antonio Campos

Headphones untether us from fixed speaker systems and computers. They allow us to augment our everyday embodied experience with a private, internal soundtrack.

These private and portable modes of listening can intensify one’s sense of subjectivity, leading to a unified, self-centric ‘soundtrack-to-my-life’ experience. In the 2012 film Simon Killer, the central character wanders the streets of Paris in his own private musical world, his headphone playlist functioning as the film’s non-diegetic score.

Sound vibrating in the cavity between the headphone ear-speaker and the ear masks the acoustic activity of the space one physically inhabits. Noise-cancellation technology seemingly eradicates it. Consequently, headphone technology soundproofs us from the surrounding world. Michael Bull writes in Sound Moves:

“iPod culture concerns the seamless joining together of experiences in a flow, unifying the complex, contradictory and contingent nature of the world beyond the user […] Users report that iPod experience is at its most satisfying when no external sound seeps into their world to distract them from their dominant and dominating vision.”

Sound in the headphones vibrates in the private space of the individual listener. Conversely, sound in the cinema vibrates in the shared space of the congregated audience.

5. Listening: A Brief History of 20th Century Listening

The Sound Object

In the early 1950s, inspired by Phenomenology, French composer and engineer Pierre Schaeffer coined the term Reduced Listening as part of the new field of acousmatic research he was investigating.

“[He] gave the name reduced listening to the listening mode that focuses on the traits of the sound itself, independent of its cause and of its meaning” (Chion).

Schaeffer developed the idea of the objet sonore (sound object) as the smallest self-contained acoustic element for analysis, categorisation, organisation and manipulation. A new magnetic tape music was born. Michel Chion writes, “concrete music, in its conscious refusal of the visual, carries with it visions that are more beautiful than images could ever be.”

In developing his ideas on Reduced Listening, Schaeffer was influenced by ancient accounts of the Pythagorean order. In his book Sound Unseen: Acousmatic Sound in Theory and Practice, Brian Kane writes how:

“followers of Pythagoras underwent a three-year probationary period, directly followed by a five-year period of ‘silence’, before being admitted to Pythagoras' inner circle as mathêmatikoi (learned). The use of silence related to the protocols of rituals connected with the mystery-like instruction and religious ceremonies of the Pythagorean order. These ceremonies took place behind a veil or curtain with only those who had passed the five-year test being allowed to see their teacher face to face; the remaining students partaking acousmatically.”

Schaeffer coined the term Acousmatique to define the listening experience of this new tape-based music that reached the listener via loudspeaker technology. The word comes from the Greek akousmata (“oral sayings”), considered to be the collection of all the sayings of Pythagoras, received as divine dogma.

A Purposeless Play

In America, composer John Cage was interested in Eastern thought. In the late 1940s an important influence on him was the art historian and philosopher of Indian art Ananda K. Coomaraswamy. Borrowing ideas from Coomaraswamy, as well as from the Indian musician Gita Sarabhai, whom Cage met in 1946, the composer claimed that the purpose of art was to imitate nature in her manner of operation. He went on to propose that the purpose of music “was to quiet and sober the mind, thus making it susceptible to divine influences”. In his 1961 book Silence: Lectures and Writings Cage famously wrote:

“Let sounds be themselves rather than vehicles for man-made theories or expressions of human sentiments.”

The Soundscape

In the late 1960s, Canadian composer R. Murray Schafer began to examine the relationship - mediated through sound - between human beings and their environment. This led to the development of Acoustic Ecology and Soundscape studies at Simon Fraser University in Vancouver. Schafer defined the Soundscape as an acoustic environment consisting of events heard rather than objects seen. In his 1977 book The Tuning of the World, Schafer describes Hi-Fi soundscapes as those spaces that preserve sonic clarity and perspective as a result of low background noise. Lo-Fi soundscapes, on the other hand, are found in loud, busy urban centres where there is “no distance, only presence”.

Writing in the early 1990s, electroacoustic composer Barry Truax, who along with Schafer was one of the original members of the Vancouver soundscape project, describes how acoustic sounds and their electroacoustic reproductions differ in their relation to context:

“In the acoustic world, sound is constrained by being tied to its context, in relation to which it derives at least part of its meaning. In the electroacoustic world, sound can be taken out of its original context and put via a loudspeaker into any other, where its meaning may be contradictory to that environment.”

For Schafer, the splitting of sound from source is a pathological (“schizophonic”) product of modern technology and mass urbanisation.

Deep Listening

In 1989 composer Pauline Oliveros coined the term Deep Listening to describe a radical practice of auditory attentiveness. She writes:

“Deep Listening involves going below the surface of what is heard, expanding to the whole field of sound while finding focus. This is the way to connect with the acoustic environment, all that inhabits it, and all that there is.”

Oliveros shared Cage's and Schaeffer's interest in attending to sounds themselves. However, she was also actively engaged with the neuroscience of various meditation practices, as well as with the wider ecological field of all acoustic activity: the study of the relationships between living organisms, including humans, and their physical environment.

4. Listening: Three Modes of Listening

Michel Chion has described three listening modes: 

  1. Causal Listening - listening to a sound in order to gather information about its cause (or source).

  2. Semantic Listening - listening to interpret a code or language.

  3. Reduced Listening - listening that focuses on the traits of the sound itself, independent of its cause and meaning.

These three modes of listening involve directed attention: the active and conscious choice to process sounds in order to understand them.

Conversely, hearing is the act of perceiving sounds indiscriminately.

The cinema is a site for both listening and hearing; active auditory attention and ambient reception.

3. Listening: The Real and the Realist

According to Robert Bresson, the cacophony of reality captured in the filmmaking process must be tamed; the unwanted sounds detected by the microphone must somehow be controlled. This requires a team of sound editors and mixers tasked with organising the placement and level of individual sounds in order to bring definition and shape to the soundtrack.

During the 1960s Jean-Luc Godard explored some of the aesthetics of the Cinéma Vérité approach to documentary-filmmaking. In his 1962 film Vivre Sa Vie, Godard intentionally avoids any post-production voice replacement or studio sound effects. Instead, he records all voices and location sounds directly onto a single unedited track of tape.

Vivre Sa Vie [1962] dir. Jean-Luc Godard

Writing about the film at the time, author and film theorist Jean Collet said:

“Jean-Luc Godard’s idea was simple: apply to the sound the same demands as to the pictures. Capture life—in what it offers to be seen and heard—directly [...] The interest offered by this method is obvious: the director opts for the real rather than the realistic. Being “realistic” always implies having a point of view on what is real, an interpretation of the facts. Here, an attempt has been made, thanks to the special machines used, to establish a material point of view rather than a human judgment. The microphone is capturing what it picks up, just as the camera is, and the artist avoids intervening at this level of the creation.”

2. Listening: Signal and Noise

In Information Theory the signal represents any meaningful information one is trying to detect. The noise component is the random, unwanted variation or fluctuation that interferes with the signal. 
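The standard quantitative expression of this relationship is the signal-to-noise ratio (a textbook formula, included here for reference):

```latex
% Signal-to-noise ratio: signal power relative to noise power,
% usually expressed on a logarithmic (decibel) scale.
\[ \mathrm{SNR} = \frac{P_{\text{signal}}}{P_{\text{noise}}}, \qquad
   \mathrm{SNR}_{\mathrm{dB}} = 10 \log_{10}\!\left(\frac{P_{\text{signal}}}{P_{\text{noise}}}\right) \]
% Doubling signal power relative to the noise adds about 3 dB.
```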

If the intentional sound of a film is the signal, then the noise is all sounds that the filmmaker wishes to reject.

Noise inevitably emerges through the filmmaking process. Noise is also present during public screenings.

1. Listening: Attention

Conventional approaches to film sound work are concerned with attention.

Auditory neuroscientist Seth S. Horowitz writes that “the difference between the sense of hearing and the skill of listening is attention.” Attention is the faculty that joins us to the world.

The filmmaker orchestrates the various component parts of the soundtrack in order to direct the spectator/auditor’s attention to what is intended to be heard at any particular moment in time.

In film we both look and see, listen and hear.