What is the difference between place theory and frequency theory in regard to perception of pitch?

There are several theories that attempt to explain the perceptual processing of sound. The two most frequently cited are the Place Theory and the Frequency Theory, opposing accounts of hearing that continued to be developed until the mid-20th century.

Place Theory

Also known as the Resonance Theory, this theory was proposed by Helmholtz in 1857, although crude forms of the Place Theory had appeared as early as 1605. Helmholtz's theory of hearing states that the inner ear extracts a spectral representation of incoming sounds from the environment. The inner ear serves as a tuned resonator that passes this spectral representation along the auditory nerve to the brainstem and then to the auditory cortex. Each location along the basilar membrane resonates at a corresponding characteristic frequency (CF). For instance, a sound stimulus with a 300 Hz tone stimulates the part of the basilar membrane whose CF is 300 Hz. This process is also called frequency-to-place mapping.
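Frequency-to-place mapping can be made concrete with Greenwood's function, which relates a characteristic frequency to a position along the basilar membrane. The sketch below uses the commonly quoted human constants A ≈ 165.4, a ≈ 2.1, and k ≈ 0.88, with position expressed as a fraction of membrane length; the formula is standard, but treat the exact constants as assumptions rather than values taken from this article.

```python
import math

# Greenwood function constants for the human cochlea (assumed values):
# F = A * (10**(a * x) - k), where x is the fractional distance from the
# apex (x = 0) to the base (x = 1) of the basilar membrane.
A, a, k = 165.4, 2.1, 0.88

def place_to_frequency(x: float) -> float:
    """Characteristic frequency (Hz) at fractional position x along the membrane."""
    return A * (10 ** (a * x) - k)

def frequency_to_place(freq_hz: float) -> float:
    """Fractional position (0 = apex, 1 = base) that resonates at freq_hz."""
    return math.log10(freq_hz / A + k) / a

if __name__ == "__main__":
    # A 300 Hz tone (the example in the text) maps near the apical end,
    # while high frequencies map toward the base.
    for f in (300, 1000, 4000, 15000):
        print(f"{f:>6} Hz -> position {frequency_to_place(f):.2f} of membrane length")
```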

Critics of the Place Theory of hearing argued that, more often than not, characteristic frequencies are difficult to determine below 120 Hz. Perception of low-frequency sounds is instead associated with the Frequency Theory.

Frequency Theory

Rinne (1865) and Rutherford (1880) proposed the early forms of the Frequency Theory of hearing. Their proposals were known as telephone theories because of the similarity between the waveform of speech carried on a telephone line and the sound signal carried to the human brain. The theory assumes that the auditory nerve can fire across the wide range of 20 to 20,000 times per second. This assumption matters because the theory suggests that the incoming sound waveform is represented in the time domain by the rate at which the auditory nerve fires. Both this time-domain representation and the frequency analysis are theorized to be processed in the brain rather than in the inner ear.

Studies done in the late 20th century proved the Frequency Theory incorrect in its assumption about the firing rate of the auditory nerve. Today it is widely accepted that individual nerve fibers, including those of the auditory nerve, can fire only about 300 to 500 times per second, and that groups of neurons firing in staggered volleys can follow frequencies only up to about 5000 Hz.
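A quick back-of-the-envelope sketch makes these limits concrete: if a single fiber can fire at most about 500 times per second, several fibers firing in rotation are needed to mark every cycle of a higher-frequency tone, and above roughly 5000 Hz even that fails. The numeric limits below come from the paragraph above; the helper function is a hypothetical illustration, not a physiological model.

```python
import math
from typing import Optional

MAX_SINGLE_FIBER_RATE = 500   # spikes per second: upper limit for one nerve fiber (from the text)
MAX_VOLLEY_RATE = 5000        # Hz: upper limit for a group of fibers firing in staggered volleys

def fibers_needed(tone_hz: float) -> Optional[int]:
    """Rough count of fibers that, firing in rotation, could mark every cycle of a tone.

    Returns None when the tone exceeds the volley limit, i.e. when temporal
    coding alone cannot represent it and place coding must take over.
    """
    if tone_hz > MAX_VOLLEY_RATE:
        return None
    return max(1, math.ceil(tone_hz / MAX_SINGLE_FIBER_RATE))

if __name__ == "__main__":
    for f in (300, 1200, 4000, 8000):
        n = fibers_needed(f)
        if n is None:
            print(f"{f} Hz: beyond the volley limit; carried by place coding, not firing rate")
        else:
            print(f"{f} Hz: about {n} fiber(s) firing in rotation could follow every cycle")
```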

Most psychologists agree that the perception of low-frequency sounds is best explained by the frequency theory, whereas the perception of high-frequency sounds is attributed to the place principle. Sounds at mid frequencies are believed to be accounted for by both theories.

Learning Objectives

  • Explain how we encode and perceive pitch and localize sound
  • Describe types of hearing loss

We know that different frequencies of sound waves are associated with differences in our perception of the pitch of those sounds. Low-frequency sounds are lower pitched, and high-frequency sounds are higher pitched. But how does the auditory system differentiate among various pitches? Several theories have been proposed to account for pitch perception. We’ll discuss two of them here: temporal theory and place theory.

The temporal theory of pitch perception asserts that frequency is coded by the activity level of a sensory neuron. This would mean that a given hair cell would fire action potentials related to the frequency of the sound wave. While this is a very intuitive explanation, we detect such a broad range of frequencies (20–20,000 Hz) that the frequency of action potentials fired by hair cells cannot account for the entire range. Because of properties related to sodium channels on the neuronal membrane that are involved in action potentials, there is a point at which a cell cannot fire any faster (Shamma, 2001).

The place theory of pitch perception suggests that different portions of the basilar membrane are sensitive to sounds of different frequencies. More specifically, the base of the basilar membrane responds best to high frequencies and the tip of the basilar membrane responds best to low frequencies. Therefore, hair cells that are in the base portion would be labeled as high-pitch receptors, while those in the tip of the basilar membrane would be labeled as low-pitch receptors (Shamma, 2001).

In reality, both theories explain different aspects of pitch perception. At frequencies up to about 4000 Hz, it is clear that both the rate of action potentials and place contribute to our perception of pitch. However, much higher frequency sounds can only be encoded using place cues (Shamma, 2001).
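To summarize this crossover, here is a minimal sketch that maps a frequency to the cues said to encode it; the 4000 Hz boundary and the 20–20,000 Hz range come from the passage above, while the function itself is only an illustrative summary, not a model of the auditory system.

```python
# Illustrative summary of the crossover described above (Shamma, 2001):
# up to roughly 4000 Hz both temporal (rate) and place cues contribute,
# while higher audible frequencies rely on place cues alone.
TEMPORAL_PLACE_LIMIT_HZ = 4000
AUDIBLE_RANGE_HZ = (20, 20000)

def pitch_coding_cues(freq_hz: float) -> str:
    if not AUDIBLE_RANGE_HZ[0] <= freq_hz <= AUDIBLE_RANGE_HZ[1]:
        return "outside the audible range"
    if freq_hz <= TEMPORAL_PLACE_LIMIT_HZ:
        return "temporal (rate) and place cues"
    return "place cues only"

for f in (100, 1000, 4000, 10000, 25000):
    print(f"{f} Hz: {pitch_coding_cues(f)}")
```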

The ability to locate sound in our environments is an important part of hearing. Localizing sound could be considered similar to the way that we perceive depth in our visual fields. Like the monocular and binocular cues that provide information about depth, the auditory system uses both monaural (one-eared) and binaural (two-eared) cues to localize sound.

Each pinna interacts with incoming sound waves differently, depending on the sound’s source relative to our bodies. This interaction provides a monaural cue that is helpful in locating sounds that occur above or below and in front of or behind us. The sound waves received by your two ears from sounds that come from directly above, below, in front, or behind you would be identical; therefore, monaural cues are essential (Grothe, Pecka, & McAlpine, 2010).

Binaural cues, on the other hand, provide information on the location of a sound along a horizontal axis by relying on differences in patterns of vibration of the eardrum between our two ears. If a sound comes from an off-center location, it creates two types of binaural cues: interaural level differences and interaural timing differences. Interaural level difference refers to the fact that a sound coming from the right side of your body is more intense at your right ear than at your left ear because of the attenuation of the sound wave as it passes through your head. Interaural timing difference refers to the small difference in the time at which a given sound wave arrives at each ear (Figure 1). Certain brain areas monitor these differences to construct where along a horizontal axis a sound originates (Grothe et al., 2010).
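The interaural timing difference can be estimated with Woodworth's classic spherical-head approximation, ITD ≈ (r/c)(θ + sin θ). The sketch below assumes a typical head radius of about 8.75 cm and a speed of sound of 343 m/s; the formula and these values are standard textbook approximations rather than figures taken from this passage.

```python
import math

HEAD_RADIUS_M = 0.0875   # typical adult head radius (assumed)
SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C (assumed)

def interaural_time_difference(azimuth_deg: float) -> float:
    """Woodworth's spherical-head estimate of the ITD (in seconds).

    azimuth_deg: 0 = straight ahead, 90 = directly to one side.
    """
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

if __name__ == "__main__":
    # A sound directly ahead produces no timing difference; a sound at the
    # side reaches the far ear roughly 0.6-0.7 ms later than the near ear.
    for az in (0, 30, 60, 90):
        print(f"azimuth {az:>2} deg -> ITD ~ {interaural_time_difference(az) * 1e6:.0f} microseconds")
```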

Figure 1. Localizing sound involves the use of both monaural and binaural cues. (credit “plane”: modification of work by Max Pfandl)

Deafness is the partial or complete inability to hear. Some people are born deaf, which is known as congenital deafness. Many others begin to suffer from conductive hearing loss because of age, genetic predisposition, or environmental effects, including exposure to extreme noise (noise-induced hearing loss, as shown in Figure 2), certain illnesses (such as measles or mumps), or damage due to toxins (such as those found in certain solvents and metals). Conductive hearing loss involves structural damage to the ear, such as a failure in the vibration of the eardrum and/or the movement of the ossicles.

Figure 2. Environmental factors that can lead to conductive hearing loss include regular exposure to loud music or construction equipment. (a) Rock musicians and (b) construction workers are at risk for this type of hearing loss. (credit a: modification of work by Kenny Sun; credit b: modification of work by Nick Allen)

Given the mechanical nature by which the sound wave stimulus is transmitted from the eardrum through the ossicles to the oval window of the cochlea, some degree of hearing loss is inevitable. With conductive hearing loss, hearing problems are associated with a failure in the vibration of the eardrum and/or movement of the ossicles. These problems are often dealt with through devices like hearing aids that amplify incoming sound waves to make vibration of the eardrum and movement of the ossicles more likely to occur.

When the hearing problem is associated with a failure to transmit neural signals from the cochlea to the brain, it is called sensorineural hearing loss. This type of loss accelerates with age and can be caused by prolonged exposure to loud noises, which causes damage to the hair cells within the cochlea. One disease that results in sensorineural hearing loss is Ménière’s disease. Although not well understood, Ménière’s disease results in a degeneration of inner ear structures that can lead to hearing loss, tinnitus (constant ringing or buzzing), vertigo (a sense of spinning), and an increase in pressure within the inner ear (Semaan & Megerian, 2011). This kind of loss cannot be treated with hearing aids, but some individuals might be candidates for a cochlear implant as a treatment option. Cochlear implants are electronic devices that consist of a microphone, a speech processor, and an electrode array. The device receives incoming sound information and directly stimulates the auditory nerve to transmit information to the brain.
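As a rough illustration of the microphone → speech processor → electrode array pipeline, the sketch below splits a short audio frame into frequency bands and treats each band's energy as the drive for one electrode, loosely in the spirit of real stimulation strategies. The sample rate, the eight channels, and the band edges are all assumptions chosen for brevity, not the design of any actual implant.

```python
import numpy as np

SAMPLE_RATE = 16000          # Hz, assumed microphone sampling rate
N_ELECTRODES = 8             # real implants use ~12-22 channels; 8 keeps the sketch short
BAND_EDGES = np.logspace(np.log10(200), np.log10(7000), N_ELECTRODES + 1)

def electrode_levels(frame: np.ndarray) -> np.ndarray:
    """Map one short audio frame to a stimulation level per electrode.

    Splits the frame's spectrum into logarithmically spaced bands (low bands ->
    apical electrodes, high bands -> basal electrodes, mirroring the place code)
    and returns the energy in each band as that electrode's level.
    """
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    levels = np.empty(N_ELECTRODES)
    for i in range(N_ELECTRODES):
        in_band = (freqs >= BAND_EDGES[i]) & (freqs < BAND_EDGES[i + 1])
        levels[i] = spectrum[in_band].sum()
    return levels

if __name__ == "__main__":
    # A 1 kHz test tone should excite mainly one mid-frequency channel.
    t = np.arange(0, 0.02, 1.0 / SAMPLE_RATE)   # one 20 ms frame
    tone = np.sin(2 * np.pi * 1000 * t)
    print(np.round(electrode_levels(tone), 1))
```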

In the United States and other places around the world, deaf people have their own language, schools, and customs. This is called deaf culture. In the United States, deaf individuals often communicate using American Sign Language (ASL); ASL has no verbal component and is based entirely on visual signs and gestures. The primary mode of communication is signing. One of the values of deaf culture is to continue traditions like using sign language rather than teaching deaf children to try to speak, read lips, or have cochlear implant surgery.

When a child is diagnosed as deaf, parents have difficult decisions to make. Should the child be enrolled in mainstream schools and taught to verbalize and read lips? Or should the child be sent to a school for deaf children to learn ASL and have significant exposure to deaf culture? Do you think there might be differences in the way that parents approach these decisions depending on whether or not they are also deaf?

If you had to choose to lose either your vision or your hearing, which would you choose and why?

Licenses and Attributions

CC licensed content, Original

  • Modification, adaptation, and original content. Provided by: Lumen Learning. License: CC BY: Attribution


Glossary

temporal theory: sound’s frequency is coded by the activity level of a sensory neuron

place theory: different portions of the basilar membrane are sensitive to sounds of different frequencies

monaural cue: one-eared cue used to localize sound

binaural cue: two-eared cue used to localize sound

interaural level difference: sound coming from one side of the body is more intense at the closest ear because of the attenuation of the sound wave as it passes through the head

interaural timing difference: small difference in the time at which a given sound wave arrives at each ear

deafness: partial or complete inability to hear

conductive hearing loss: failure in the vibration of the eardrum and/or movement of the ossicles

sensorineural hearing loss: failure to transmit neural signals from the cochlea to the brain

Ménière’s disease: results in a degeneration of inner ear structures that can lead to hearing loss, tinnitus, vertigo, and an increase in pressure within the inner ear

cochlear implant: electronic device that consists of a microphone, a speech processor, and an electrode array to directly stimulate the auditory nerve to transmit information to the brain