AP Psychology

Module 20 - Hearing

LEARNING OBJECTIVES:

FOCUS QUESTION: What are the characteristics of air pressure waves that we hear as sound, and how does the ear transform sound energy into neural messages?

Like our other senses, our audition, or hearing, is highly adaptive. We hear a wide range of sounds, but the ones we hear best are those sounds with frequencies in a range corresponding to that of the human voice. Those with normal hearing are acutely sensitive to faint sounds, an obvious boon for our ancestors' survival when hunting or being hunted, or for detecting a child's whimper. (If our ears were much more sensitive, we would hear a constant hiss from the movement of air molecules.)

We are also remarkably attuned to variations in sounds. We easily detect differences among thousands of possible human voices: Walking between classes, we immediately recognize the voice of a friend behind us. A fraction of a second after a spoken word stimulates the ear's receptors, millions of neurons have simultaneously coordinated in extracting the essential features, comparing them with past experience, and identifying the stimulus (Freeman, 1991).

But not everyone has this ability. Some years ago, on a visit to my childhood home, I communicated with my then 80-year-old mother by writing on her erasable "magic pad." Four years earlier her hearing loss had slipped into complete deafness, and she had given up her now useless hearing aids.

"Do you hear anything?" I wrote.

"No," she answered, her voice still strong although she could not hear it. "Last night your Dad came in and found the TV blasting. Someone had left the volume way up; I didn't hear a thing." (Indeed, my father later explained, he recently tested her by sneaking up while she was reading and giving a loud clap just behind her ear. Her eye never wavered from the page.)

What is it like, I wondered. "A silent world?"

"Yes," she replied. "It's a silent world."

And for her, with human connections made difficult, it became a socially isolated world. "Not having understood what was said in a group," she reminisced, "I would chime in and say the same thing someone else had just said - and everyone would laugh. I would be so embarrassed, I wanted to fall through the floor." Increasingly, her way of coping was to avoid getting out onto the floor in the first place. She shied away from public events and found excuses to avoid people who didn't understand.

Our exchange left me wondering: Will I - having inherited her progressive hearing loss - also become socially isolated? Or, aided by today's better technology, can I keep my private vow not to repeat her past? Hearing allows mind-to-mind communication and enables connection. Yet many of us can and do connect despite hearing loss - with help from technology, lip-reading, and signing. For me, it's worth the effort. Communicating with others affirms our humanity as social creatures.

So, how does hearing normally work? How do we harvest meaning from the air pressure waves sent from another's mouth?

The Stimulus Input: Sound Waves

Draw a bow across a violin, and you will unleash the energy of sound waves. Jostling molecules of air, each bumping into the next, create waves of compressed and expanded air, like the ripples on a pond circling out from a tossed stone. As we swim in our ocean of moving air molecules, our ears detect these brief air pressure changes. (Exposed to a loud, low bass sound - perhaps from a bass guitar or a cello - we can also feel the vibration. We hear by both air and bone conduction.)

Like light waves, sound waves vary in shape. The amplitude of sound waves determines their loudness. Their length, or frequency, determines the pitch we experience. Long waves have low frequency - and low pitch. Short waves have high frequency - and high pitch. Sound waves produced by a violin are much shorter and faster than those produced by a cello or a bass guitar.

We measure sounds in decibels, with zero decibels representing the absolute threshold for hearing. Every 10 decibels correspond to a tenfold increase in sound intensity. Thus, normal conversation (60 decibels) is 10,000 times more intense than a 20-decibel whisper. And a temporarily tolerable 100-decibel passing subway train is 10 billion times more intense than the faintest detectable sound.
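The decibel arithmetic above can be checked directly. As a minimal sketch (the function name is my own, not from the text): each 10-decibel step multiplies sound intensity by 10, so the intensity ratio between two sound levels is 10 raised to one-tenth of their difference.

```python
def intensity_ratio(db_a, db_b):
    """How many times more intense is sound A than sound B?
    Each 10-dB step is a tenfold increase in intensity."""
    return 10 ** ((db_a - db_b) / 10)

# Normal conversation (60 dB) vs. a 20-dB whisper:
print(intensity_ratio(60, 20))   # prints 10000.0 (a 10,000-fold difference)

# A 100-dB subway train vs. the 0-dB absolute threshold:
print(intensity_ratio(100, 0))   # prints 1e+10 (10 billion times more intense)
```

This is why the decibel scale is useful: it compresses an enormous range of intensities into small, manageable numbers.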

The Ear

The intricate process that transforms vibrating air into nerve impulses, which our brain decodes as sounds, begins when sound waves enter the outer ear. A mechanical chain reaction begins as the visible outer ear channels the waves through the auditory canal to the eardrum, a tight membrane, causing it to vibrate (FIGURE 20.1). In the middle ear, three tiny bones (the hammer, anvil, and stirrup) pick up the vibrations and transmit them to the cochlea, a snail-shaped tube in the inner ear. The incoming vibrations cause the cochlea's membrane (the oval window) to vibrate, jostling the fluid that fills the tube. This motion causes ripples in the basilar membrane, bending the hair cells lining its surface, not unlike the wind bending a wheat field. Hair cell movement triggers impulses in the adjacent nerve cells. Axons of those cells converge to form the auditory nerve, which sends neural messages (via the thalamus) to the auditory cortex in the brain's temporal lobe. From vibrating air to fluid waves to electrical impulses to the brain: Voilà! We hear.

My vote for the most intriguing part of the hearing process is the hair cells - "quivering bundles that let us hear" thanks to their "extreme sensitivity and extreme speed" (Goldberg, 2007). A cochlea has 16,000 of them, which sounds like a lot until we compare that with an eye's 130 million or so photoreceptors. But consider their responsiveness. Deflect the tiny bundles of cilia on the tip of a hair cell by the width of an atom - the equivalent of displacing the top of the Eiffel Tower by half an inch - and the alert hair cell, thanks to a special protein at its tip, triggers a neural response (Corey et al., 2004).

Damage to the cochlea's hair cell receptors or their associated nerves can cause sensorineural hearing loss (or nerve deafness). (A less common form of hearing loss is conduction hearing loss, caused by damage to the mechanical system that conducts sound waves to the cochlea.) Occasionally, disease causes sensorineural hearing loss, but more often the culprits are biological changes linked with heredity, aging, and prolonged exposure to ear - splitting noise or music.

Hair cells have been likened to carpet fibers. Walk around on them and they will spring back with a quick vacuuming. But leave a heavy piece of furniture on them for a long time and they may never rebound. As a general rule, if we cannot talk over a noise, it is potentially harmful, especially if prolonged and repeated (Roesser, 1998). Such experiences are common when sound exceeds 100 decibels, as happens in venues from frenzied sports arenas to bagpipe bands to personal music coming through our earphones near maximum volume (FIGURE 20.2). Ringing of the ears after exposure to loud machinery or music indicates that we have been bad to our unhappy hair cells. As pain alerts us to possible bodily harm, ringing of the ears alerts us to possible hearing damage. It is hearing's equivalent of bleeding.

The rate of teen hearing loss, now 1 in 5, has risen by one-third since the early 1990s (Shargorodsky et al., 2010). Teen boys, more than teen girls or adults, blast themselves with loud volumes for long periods (Zogby, 2006). Males' greater noise exposure may help explain why men's hearing tends to be less acute than women's. But male or female, those who spend many hours in a loud nightclub, behind a power mower, or above a jackhammer should wear earplugs. "Condoms or, safer yet, abstinence," say sex educators. "Earplugs or walk away," say hearing educators.

For now, the only way to restore hearing for people with nerve deafness is a sort of bionic ear - a cochlear implant, which, by 2009, had been given to 188,000 people worldwide (NIDCD, 2011). This electronic device translates sounds into electrical signals that, wired into the cochlea's nerves, convey information about sound to the brain. Cochlear implants given to deaf kittens and human infants seem to trigger an "awakening" of the pertinent brain area (Klinke et al., 1999; Sireteanu, 1999). They can help children become proficient in oral communication (especially if they receive them as preschoolers or even before age 1) (Dettman et al., 2007; Schorr et al., 2005).

The latest cochlear implants also can help restore hearing for most adults. However, the implants will not enable normal hearing in adults if their brain never learned to process sound during childhood. Similarly, cochlear implants did not enable hearing in deaf-from-birth cats that received them when fully grown rather than as 8-week-old kittens (Ryugo et al., 2010).

Perceiving Loudness

How do we detect loudness? It is not, as I would have guessed, from the intensity of a hair cell's response. Rather, a soft, pure tone activates only the few hair cells attuned to its frequency. Given louder sounds, neighboring hair cells also respond. Thus, the brain can interpret loudness from the number of activated hair cells.

If a hair cell loses sensitivity to soft sounds, it may still respond to loud sounds. This helps explain another surprise: Really loud sounds may seem loud to people with or without normal hearing. As a person with hearing loss, I used to wonder what really loud music must sound like to people with normal hearing. Now I realize it sounds much the same; where we differ is in our sensation of soft sounds. This is why we hard-of-hearing people do not want all sounds (loud and soft) amplified. We like sound compressed - which means harder-to-hear sounds are amplified more than loud sounds (a feature of today's digital hearing aids).

Perceiving Pitch

FOCUS QUESTION: What theories help us understand pitch perception?

How do we know whether a sound is the high-frequency, high-pitched chirp of a bird or the low-frequency, low-pitched roar of a truck? Current thinking on how we discriminate pitch, like current thinking on how we discriminate color, combines two theories.

Locating Sounds

FOCUS QUESTION: How do we locate sounds?

Why don't we have one big ear - perhaps above our one nose? "All the better to hear you with," as the wolf said to Red Riding Hood. As the placement of our eyes allows us to sense visual depth, so the placement of our two ears allows us to enjoy stereophonic ("three-dimensional") hearing.

Two ears are better than one for at least two reasons. If a car to the right honks, your right ear receives a more intense sound, and it receives sound slightly sooner than your left ear (FIGURE 20.3). Because sound travels 750 miles per hour and our ears are but 6 inches apart, the intensity difference and the time lag are extremely small. A just noticeable difference in the direction of two sound sources corresponds to a time difference of just 0.000027 second! Lucky for us, our supersensitive auditory system can detect such minute differences (Brown & Deffenbacher, 1979; Middlebrooks & Green, 1991).
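These localization numbers can be estimated with back-of-envelope arithmetic. The sketch below uses the chapter's round figures (sound at 750 miles per hour, ears 6 inches apart); the variable names are mine, not from the text.

```python
# Convert the speed of sound from miles per hour to inches per second.
speed_in_per_s = 750 * 5280 * 12 / 3600   # 750 mph -> 13,200 inches per second

# Worst case: a sound directly to one side must travel the full
# 6-inch ear separation farther to reach the far ear.
max_lag_s = 6 / speed_in_per_s

# The just-noticeable time difference reported in the text.
jnd_s = 0.000027

print(round(max_lag_s, 6))   # prints 0.000455 (about half a millisecond)
```

Even this worst-case lag is under half a millisecond, yet it is still roughly 17 times the just-noticeable difference - which is why the auditory system can resolve many intermediate directions between "straight ahead" and "directly to the side."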

Before You Move On

ASK YOURSELF: If you are a hearing person, imagine that you had been born deaf. Do you think your life would be different?

TEST YOURSELF: What are the basic steps in transforming sound waves into perceived sound?