How Does Hearing Work?

The ear is arguably one of the most complex organs in the human body, inextricably linked to how we experience and understand the world. When a sound wave travels through the air around us, the vibrations enter the ear canal and set the eardrum in motion. That motion is passed on to a series of tiny bones within the middle ear, known as the ossicles.

These three bones, known respectively as the malleus, incus, and stapes, connect to an organ known as the cochlea. This fluid-filled organ contains thousands of microscopic hair cells, each group of which is tuned to a particular frequency. Stimulation of these cells sends nerve impulses along the auditory nerve to the brain.

Here's where we bridge the gap between hearing and understanding. 

The Importance of the Brain's Auditory Center

When the brain receives electrical impulses from the cochlea, it must immediately unravel, analyze, and interpret them. 

First, the brain's auditory center breaks down the complex waveforms of sound into their primary components: pitch (frequency) and volume (amplitude). It then compares these component parts with stored patterns (memory). From there, it can identify and categorize the sound and its source and determine whether that sound requires our attention.
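This first step — splitting a complex waveform into its component frequencies and their amplitudes — is essentially what a Fourier transform does. As a rough analogy (not a model of the auditory cortex itself), here's a minimal Python sketch that mixes two tones and then recovers their frequencies and volumes; the 440 Hz and 880 Hz tones and their amplitudes are purely illustrative:

```python
import numpy as np

sample_rate = 8000                       # samples per second
t = np.arange(0, 1.0, 1.0 / sample_rate)  # one second of time points

# Mix two pure tones: a loud 440 Hz and a quieter 880 Hz.
signal = 1.0 * np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)

# Decompose the waveform into its frequency components.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
amplitudes = np.abs(spectrum) * 2 / len(signal)

# Keep only the components with meaningful amplitude.
peaks = [(f, a) for f, a in zip(freqs, amplitudes) if a > 0.1]
for f, a in peaks:
    print(f"{f:.0f} Hz, amplitude {a:.2f}")
```

Running this recovers the two tones we mixed in — roughly what the auditory system achieves mechanically, with each region of the cochlea responding to its own band of frequencies.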
If, for example, you're having a conversation in a crowded restaurant, your brain might tune out background noise so you can focus on the person you're talking to. Without this automatic filtering, every single sound around us would receive equal attention. It would be impossible for us to focus on anything.
The ability to distinguish between relevant and irrelevant noise is localized in the brain's left hemisphere. Although the process is usually automatic, it's actually possible to consciously influence it by focusing on a specific sound. This could be a conversation elsewhere in a restaurant or a single instrument in a band's performance.

A Supercomputer That Never Sleeps

Even when we're asleep, the auditory center of our brain, also known as the auditory cortex, remains active. To allow the rest of our brain and body to recover, however, it fades out the majority of surrounding sounds. At the same time, it is trained to wake us up immediately if it determines something is amiss.

We might wake up to the crying of our child. The buzzing of our alarm clock. Or an unusual sound such as a scream or crash. 

An Internal Firewall: How The Brain Differentiates Between Relevant and Irrelevant

Day and night, the brain's auditory cortex acts as something of a natural filter, a safeguard against sensory overload. This internal firewall is essential to our health and well-being, especially in a world where our senses are constantly bombarded by stimuli. It's able to seamlessly differentiate between background noise and important sounds, identifying what's normal for a particular environment and picking out what's unusual.

Yet even though the brain plays such a critical role, it cannot function effectively without well-functioning ears.

It's only when the brain is supplied with complete, intact acoustic information that it's able to recognize which sounds are important and which should be suppressed. That's why conditions like hyperacusis often originate in the brain rather than the ear - the auditory cortex simply isn't functioning as it should. This is also why it's essential to look after your hearing and protect it at all times.
This means donning appropriate hearing protection in the workplace. It also means visiting a hearing specialist the moment you become aware of any hearing loss. Even a delay of one or two years may mean it's already too late for a hearing aid to be effective.

This is because the brain unlearns its ability to hear when it is not supplied with sufficient sound over a long period of time.

Protecting Your Brain: The Impact of Loud Noise

Loud environments with a mixture of different sounds always present a challenge in terms of hearing. If they're loud enough, they can even cause hearing loss, as the brain begins to 'unlearn' its capacity to effectively filter out sounds due to a combination of sensory overload and damage to the ears. This can make such environments increasingly frustrating as an individual becomes unable to follow conversations or identify sounds effectively.

This so-called cocktail party problem is an early warning sign of hearing loss and may be accompanied by tinnitus in some cases. If certain frequencies become increasingly rare in the auditory center because someone is experiencing a loss of hearing, the brain attempts to “amplify” these frequencies, even when they are not actually present. As a result, a nonexistent sound is generated in the brain.
Tinnitus is not caused by a fault in the ear, but by alterations in the brain. Some experts refer to tinnitus as the “nerve cells talking to themselves.”

Language: The Most Complex Auditory Process of All

At its core, speech is nothing more than a complex mixture of sound waves. Unraveling these sounds and deciphering what each one means is a major task for the brain, one which often takes many years to learn. It's actually rather impressive the brain can manage this at all, as it's a process marked by several massive obstacles.

How, for instance, does the auditory center subdivide a spoken sentence in such a way that its components can be reconciled with memorized patterns?
You might initially assume that each unit of speech is an individual word. However, this is not the case. In almost every language, each spoken word flows smoothly into the next. Letters aren't appropriate units either, as they're pronounced differently depending on the context.

There are also variations in pitch, mood, voice, body language, accents, dialects, and speed.

Each and every one of these factors has a significant impact on not just the sound of the spoken word, but also its meaning. Two people might utter an identical sentence, yet it could carry vastly different meanings. Yet somehow, the brain copes with all this complexity effortlessly and at tremendous speed.
On average, we can perceive and process up to 14 speech signals per second, and speech can remain intelligible at rates as high as 60 signals per second. Bear in mind, as well, that auditory processing is not the only thing the brain is doing in a conversation. It's also parsing every last bit of information about our environment, including sights, smells, feelings, and tastes.

Even modern supercomputers have difficulty competing with this raw processing power, and we have yet to develop voice recognition software that can understand spoken language as efficiently as the human brain.
Although we generally understand the mechanics by which the brain processes speech, we have yet to decipher every intricacy of how it works. Much like the origins of language itself, much of the brain's functionality remains a mystery to us. Equally fascinating is the fact that the brain's auditory center is able to distinguish between our own voice and the voices of people around us.

Our own voice sounds the loudest to us - the brain makes sure it always stands out, even against noisy backgrounds.

The Auditory Center: Small, Powerful, and Essential

Perhaps the most fascinating thing about the brain's auditory center is its size. It's barely larger than your thumbnail and is in fact 'hidden' in a coil of the cerebral cortex. There's one auditory center in each hemisphere, and each one covers eleven different auditory fields.

Together, they are responsible for interpreting the entire range of perceptible frequencies.
Recent experiments suggest a division of labor between the left and right auditory centers. The left auditory cortex, for example, plays the main role in interpretation — that is, the recognition of acoustic signals. Scientists have also been able to demonstrate that the two sides of the auditory center are constantly engaged in a lively exchange.
The auditory centers don't just communicate with each other, though. Each and every one of our senses is interconnected. We know this from experience - it's easier to understand what someone is saying when we're looking at them, able to see and interpret their facial expressions and read their lips.