The perception of sound is one of the most basic parts of the human experience. We don’t realize how much information we get from our auditory system, and how vital it can be not only to ‘listening’ but also to perceiving the world around us on a deeper level.
In simple terms, there is audio we deliberately pay attention to, and audio that bypasses our higher functions and speaks to the animal part of our brain. When you’re chatting with someone on the phone, you’re focusing on what they are saying – but only one part of your brain is occupied with that. The same is true of listening to a podcast, or exercising to a virtual coaching app, or sitting and enjoying music.
So what else is going on while you’re listening, and how do your ears and brain process it? What’s so important that you have to devote brainpower to it when all you’re trying to do is have a conversation or learn something?
The human animal is still an animal
Our current civilization, with all its electronic distractions, is a mere blip in the hundreds of thousands of years that we’ve been recognizably human. Long before we domesticated animals, tilled the soil, or built the first permanent settlements, we were living in a world that demanded our attention for a lot of reasons, many of them with life-or-death consequences.
Those consequences shaped how we perceived the world, over millions of years of evolution shared with many other animals, and our brains are still hard-wired for those basic needs. No matter how people like to joke or complain that we’ve become helpless without our devices, we’re all still cavemen inside. Our ears and brains evolved to help us stay alive in the world before civilization.
Where’d that sound come from?
The first and perhaps most important of our basic survival needs is spatial perception. When you’re a caveman going about your business and suddenly hear the growl of a predator or a cry for help from a friend, you need to know where it came from and what’s all around it – and you.
Did your buddy fall in a pit, or are they stuck on a rock ledge high above you? Is that growl coming from up in a tree behind you, or from the bushes right next to you? Even before you look around – incorporating data from your eyes into what your ears and brain have already processed – you have a fairly detailed idea of what’s going on.
If you think about it, it’s extraordinary that our ears can decode sound in three dimensions. After all, we have only two ears, and the distance between them is just a line in one dimension. Where does all that 3D stuff come from?
Your auditory system begins with your head. Before sound waves reach your eardrums, they interact with the shape of your head in various ways. In the most basic terms, if a sound comes from your left, it will hit your left eardrum sooner and sound slightly different from what your right eardrum picks up – your right ear can tell that your head has gotten in the way of the sound source. This filtering is described by a head-related transfer function, or HRTF – a term that comes up constantly in acoustics and spatial audio. Your brain puts together the changes in frequency content and the tiny time delay and tells you that the sound came from over that way.
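If you want to put rough numbers on that time delay, a classic back-of-the-envelope model is Woodworth’s spherical-head approximation, which treats your head as a rigid sphere. Here’s a minimal sketch in Python – the head radius is just a typical adult figure, not anything measured:

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second in air at room temperature
HEAD_RADIUS = 0.0875    # metres; a typical adult head, not a measured value

def interaural_time_difference(azimuth_deg: float) -> float:
    """Woodworth's spherical-head estimate of the arrival-time gap
    between the two ears. 0 deg = straight ahead, 90 deg = off one ear."""
    theta = math.radians(azimuth_deg)
    # Extra path length around a rigid sphere: r * (theta + sin(theta))
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for azimuth in (0, 30, 60, 90):
    microseconds = interaural_time_difference(azimuth) * 1e6
    print(f"{azimuth:3d} deg -> {microseconds:4.0f} microseconds")
```

Even with a sound directly off one ear, the gap is only about 650 microseconds – and your brain reliably resolves differences of just a few tens of microseconds.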
This capability can be fine-tuned thanks to the pinnae, the fleshy outer part of the ear outside your ear canals. They help with finer localization of sound, for example helping you determine if a sound is above or below you, or behind or in front of you.
How good are these HRTFs in helping us spatialize sound? As it turns out, they can be excellent – even when they’re not produced by our own heads. An entire subset of boutique audio and music recording is devoted to binaural sound, where a dummy head is set up with microphones inside the ear canals, or a person walks around wearing a pair of microphones in their ears like earbuds. The HRTFs added to the recording by the head (dummy or otherwise) translate so well to what our brains expect that if we listen to the recording on headphones, the sense of “being there” can be incredibly convincing!
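The same trick works in reverse: given a pair of head-related impulse responses (HRIRs – the time-domain cousins of HRTFs), you can make any mono recording seem to come from a particular direction by convolving it with the left- and right-ear responses. Here’s a minimal Python sketch; the toy “HRIRs” below fake only a delay and a level difference, where real ones would come from measurements of an actual (or dummy) head:

```python
import numpy as np
from scipy.signal import fftconvolve

def binauralize(mono: np.ndarray, hrir_left: np.ndarray,
                hrir_right: np.ndarray) -> np.ndarray:
    """Render a mono signal as a binaural stereo pair by convolving it
    with left- and right-ear impulse responses of equal length."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

# Toy stand-ins for measured HRIRs: the right ear hears the sound about
# 0.6 ms later and at half the level, mimicking a source off to the left.
sample_rate = 48_000
delay = int(0.0006 * sample_rate)        # ~0.6 ms = 28 samples at 48 kHz
hrir_l = np.zeros(delay + 1); hrir_l[0] = 1.0
hrir_r = np.zeros(delay + 1); hrir_r[delay] = 0.5
click = np.zeros(1024); click[0] = 1.0   # a single impulse, like a clap
stereo = binauralize(click, hrir_l, hrir_r)   # shape: (n_samples, 2)
```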
Where am I?
Now, this localization doesn’t happen in a void. No matter where you are, there are cues that your ear can pick up concerning your environment – the space you share with the sound you’re hearing.
If you close your eyes, have someone lead you into a place, and then clap your hands, you can tell a great deal from that handclap. The first return of the sound after you make it will give you a sense of how far the walls are from you. The sound of the reflections can give you a hint of the materials in the room, from a stone wall to a carpeted floor. Your localization tools can help you determine where the walls are and how high the ceiling is. When all of this is put together, you hear reverberation (reverb) – the sound of the space.
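That first-reflection cue is simple enough to write down: the sound travels out to the wall and back, so the distance is the speed of sound times the delay, divided by two. A quick sketch, with an invented 20 ms delay:

```python
SPEED_OF_SOUND = 343.0  # metres per second in air at room temperature

def wall_distance(first_reflection_delay_s: float) -> float:
    """Distance to a reflecting surface: the sound travels out and back,
    so distance = speed * delay / 2."""
    return SPEED_OF_SOUND * first_reflection_delay_s / 2.0

# A reflection arriving 20 ms after the clap puts the wall ~3.4 m away.
print(f"{wall_distance(0.020):.1f} m")
```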
Even a single microphone set up in a room captures information about the way the room sounds. A long reverb tail, for example, tells you the recording was made in a large space. When you go to stereo, the level of detail expands immensely, and the presence of your head takes it further still.
That’s why even the best microphones set up next to each other won’t pick up as realistic a sound space as a dummy head. (Of course, ‘realistic’ and ‘musical’ aren’t the same thing, so those stereo mic arrays aren’t going out of fashion any time soon.)
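To make the reverb-tail idea concrete, here’s a rough sketch of how an engineer might estimate a room’s reverberation time – RT60, the time it takes sound to decay by 60 decibels – from a recorded impulse response, using Schroeder’s backward-integration method. The impulse response below is synthetic, built so we know the right answer:

```python
import numpy as np

def rt60_estimate(ir: np.ndarray, sample_rate: int) -> float:
    """Rough RT60 via Schroeder backward integration: fit the slope of
    the -5 dB to -25 dB portion of the decay, extrapolate to -60 dB."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]        # Schroeder decay curve
    decay_db = 10 * np.log10(energy / energy[0])
    t = np.arange(len(ir)) / sample_rate
    usable = (decay_db <= -5) & (decay_db >= -25)
    slope, _ = np.polyfit(t[usable], decay_db[usable], 1)  # dB per second
    return -60.0 / slope

# Synthetic impulse response whose envelope falls 60 dB per second,
# so the true RT60 is 1.0 s; the estimate should land very close.
sr = 48_000
t = np.arange(2 * sr) / sr
ir = np.random.default_rng(0).standard_normal(t.size) * 10.0 ** (-3 * t)
print(f"RT60 ~ {rt60_estimate(ir, sr):.2f} s")
```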
What’s going on?
Once you know where you are and where other sound sources are, then you have to interpret what those sounds mean. Now your higher reasoning starts to kick in, but before it gets that far, your animal brain still has some work to do.
Let’s say your friend calling for help is near a waterfall. That constant rush of sound doesn’t just tell you about your environment; it can also obscure what your friend is trying to tell you. If your auditory system were based entirely on relative sound levels, you wouldn’t be able to understand what your friend was saying over the noise.
Fortunately, your brain has more tools to deal with this. It can filter out the noise and help you pick out the signals you want to hear, filling in frequency gaps and helping you process what is being communicated. The same would be true today, in picking out voices from the sounds of traffic, crowds, or heavy machinery.
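Your brain’s version of this is far more sophisticated than any fixed filter, but a crude engineering analogue of “focusing on the voice” is to emphasize the frequency band where speech lives. A minimal sketch – the cutoffs follow the old telephone-band rule of thumb, and the test signal is a stand-in:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def emphasize_speech(audio: np.ndarray, sample_rate: int) -> np.ndarray:
    """Band-pass to roughly 300 Hz - 3.4 kHz, the classic telephone band
    where most speech intelligibility lives."""
    sos = butter(4, [300, 3400], btype="bandpass",
                 fs=sample_rate, output="sos")
    return sosfiltfilt(sos, audio)

# Toy example: a 1 kHz "voice" tone buried in broadband noise.
sr = 16_000
t = np.arange(sr) / sr
noisy = np.sin(2 * np.pi * 1000 * t) + \
        2.0 * np.random.default_rng(1).standard_normal(t.size)
cleaner = emphasize_speech(noisy, sr)
```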
A whole lotta computin’ goin’ on
When you put it all together, you can see that your brain is doing a whole lot of work behind the scenes. That workload is significant even if you’re never aware of it, and the effort it demands can take a toll on what matters most – not just hearing, but understanding what you hear without fatigue. Minimizing or removing the need for that processing can revolutionize how we hear, and that is at the center of what Revive does for us.