So that’s how you hear that beat…

Isha Sharma
8 min read · Jan 16, 2021


We’re perpetually surrounded by a variety of sounds, ranging from gentle swishes, like the rustling of leaves, to noisy rumbles, like the purring of a car’s engine. The human ear has a tremendous hearing range (the audible range) and sensitivity, which enables us to perceive this variety. Sounds carry a great deal of information, from pitch, loudness, and direction to musical quality and the nuances of voiced emotion. The ears are designed to collect sounds and pass them to the brain, which helps us recognize and interpret them. The ability to hear enhances the experiences that life has bestowed upon us.

Sound is produced when a vibrating object creates pressure variations in a transmitting medium, like air. The air molecules oscillate back and forth, each pushing on the molecule next to it, which then oscillates in turn, creating a longitudinal wave. As it travels through space, a longitudinal wave develops compressions (high-pressure regions), where molecules are pushed together, and rarefactions (low-pressure regions), where they are spread apart. These sound waves, propagating outward from the vibrating object, are collected by the auricle (pinna), the visible portion of our outer ear, and travel on to the tympanic membrane (eardrum), causing it to vibrate and initiating the process of hearing.
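For readers who like to tinker, the pressure variation of a pure tone can be sketched in a few lines of Python. The frequency (440 Hz), peak pressure, and sample rate below are arbitrary illustrative choices, not values from this article:

```python
import math

def pressure_wave(freq_hz, amplitude_pa, duration_s, sample_rate=44100):
    """Sample the pressure variation of a pure tone.

    Positive values are compressions (molecules pushed together),
    negative values are rarefactions (molecules spread apart).
    """
    n = int(duration_s * sample_rate)
    return [amplitude_pa * math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n)]

# 10 ms of a 440 Hz tone with a peak pressure of 0.1 Pa.
samples = pressure_wave(440, 0.1, 0.01)
print(len(samples))  # 441 samples at a 44.1 kHz sample rate
```

Plotting these samples would show the familiar sinusoid: the crests are the compressions that push on the eardrum, the troughs the rarefactions that pull it back.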

Our auditory system has a complex anatomy, divided into two parts: peripheral and central. The peripheral hearing system consists of three parts: the outer ear, the middle ear, and the inner ear. The outer ear consists of the auricle, the ear canal, and the tympanic membrane. The middle ear is a small, air-filled cavity containing three tiny bones collectively known as the ossicles (the malleus (hammer), incus (anvil), and stapes (stirrup)) and the Eustachian tube. The malleus connects to the tympanic membrane and the stapes connects to the inner ear. The inner ear consists of the cochlea, which has a distinctive coil shape. The cochlea contains thousands of sensory hair cells, each topped with hair-like stereocilia, and is connected to the central system by the auditory nerve. It is filled with fluids, namely perilymph and endolymph, which are important to the process of hearing. The central hearing system, on the other hand, consists of the auditory nerve and an amazingly intricate pathway through the brainstem and forward to the auditory cortex in the brain. The ear is a very efficient transducer (a device that converts energy from one form to another), as it converts sonic vibrations into electrical signals that are interpreted by the brain as music, noise, and so on.

The sounds caught by the auricle, funnelled through the ear canal, and vibrating the tympanic membrane still have a long journey before they reach the brain and are interpreted. After the tympanic membrane vibrates, the vibrations move the three ossicles in the middle ear. These bones amplify the vibrations, and the stapes pushes them into the cochlea in the inner ear. This is where sound waves become nerve impulses, the only kind of information the brain understands. Once the vibrations cause the fluids inside the cochlea to ripple, thousands of tiny hair cells sitting on top of the basilar membrane ride the wave. Hair cells near the base of the cochlea (the end nearest the middle ear, where the basilar membrane is narrow and stiff) detect higher-pitched sounds (higher frequencies), such as a high-pitched whistle.

On the other hand, those closer to the apex, the center of the spiral, detect lower-pitched sounds (lower frequencies), such as the sound of a bass drum. As the hair cells ride the wave, the stereocilia on their tips bend, opening tiny channels; the ions that rush in create an electrical signal, which triggers the release of neurotransmitters (chemicals that cross a synapse and carry the signal to neurons, the cells that transmit nerve impulses) at the base of each hair cell. The resulting electrical impulses travel along the auditory nerve to the auditory cortex, a part of the temporal lobe in the brain, where the signal is interpreted as sound.
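This place-to-pitch mapping along the basilar membrane has a well-known empirical model, the Greenwood function, which is not described in this article but makes the idea concrete. A minimal Python sketch, using Greenwood's published constants for the human cochlea:

```python
def greenwood_frequency(x):
    """Characteristic frequency (Hz) at relative position x along the
    basilar membrane, where x = 0 is the apex (the center of the spiral,
    tuned to low pitches) and x = 1 is the base (tuned to high pitches).

    A, a, and k are Greenwood's empirical fit for the human cochlea.
    """
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

print(round(greenwood_frequency(0.0)))  # ~20 Hz at the apex
print(round(greenwood_frequency(1.0)))  # ~20677 Hz at the base
```

Notice how the two ends of the spiral land almost exactly on the 20 Hz to 20 kHz audible range discussed below.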

Between the cochlea and the brain there are thousands of neurons, and each neuron represents a different frequency. For instance, when we hear a sound at 1000 Hz, the tympanic membrane vibrates 1000 times per second, along with the three middle-ear bones, but the nerves won’t send a thousand pulses per second to the brain. Instead, a few neurons somewhere in the middle of the cochlea will send a few pulses for as long as the sound is there. So the brain gets a compressed version of the sound, similar to an mp3 music file. If the sound is a square wave at 1 kHz, the cochlea would tell the brain that this sound has 1 kHz as the dominant frequency, mixed with its odd harmonics (3 kHz, 5 kHz, and so on). The brain first identifies the dominant frequency, then measures the relative strength of each harmonic. This ratio is the signature that allows us to distinguish each musical instrument in a song, or different animals in a forest soundscape.
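The idea that a 1 kHz square wave is a dominant frequency mixed with other frequencies can be checked numerically. The sketch below builds a square wave and runs a naive discrete Fourier transform over it; the setup (an 8 kHz sample rate and a toy O(n²) DFT) is my own illustrative choice, and because harmonics above half the sample rate fold back, only the first two components come out cleanly here:

```python
import cmath

def dft_magnitudes(samples):
    """Naive discrete Fourier transform returning the magnitude of each
    frequency bin up to half the sample rate (stdlib only, O(n^2))."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

sample_rate = 8000  # samples per second
freq = 1000         # the 1 kHz square wave from the text
n = 400             # 50 ms of signal

# +1 for the first half of each period, -1 for the second half.
square = [1.0 if (t * freq / sample_rate) % 1 < 0.5 else -1.0
          for t in range(n)]

mags = dft_magnitudes(square)
bin_hz = sample_rate / n  # 20 Hz per frequency bin

# The two strongest components: the fundamental, then the 3rd harmonic.
peaks = sorted(range(len(mags)), key=lambda k: mags[k], reverse=True)[:2]
print([round(k * bin_hz) for k in peaks])  # [1000, 3000]
```

The ratio between those two peak magnitudes is exactly the kind of "signature" the brain is described as measuring.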

The reason why we can perceive so many different sounds is our audible range, which spans from 20 Hz (lowest pitch) to 20 kHz (highest pitch). Acoustic vibrations outside this specific band of frequencies are not perceived as “sounds” by us, even if they are very well perceived by other animals. All sounds below 20 Hz are termed infrasound, which can be heard by elephants and moles, and all sounds above 20 kHz are termed ultrasound, which can be heard by cats, dogs, dolphins, and bats. Similarly, our everyday loudness range is usually given as 0 dB to 100 dB Sound Pressure Level (SPL), where SPL is the pressure level of a sound measured in decibels (dB), a logarithmic measure of loudness. 0 dB is not the absence of sound but a sound very faintly audible to a few people; 100 dB, on the other hand, is uncomfortably loud.
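The decibel scale behind those numbers is logarithmic: 0 dB SPL corresponds to a reference pressure of 20 micropascals, roughly the quietest audible sound, not to silence. A small Python sketch (the example pressures are illustrative round numbers):

```python
import math

P_REF = 20e-6  # reference pressure in pascals: defined as 0 dB SPL,
               # roughly the threshold of human hearing

def spl_db(pressure_pa):
    """Sound pressure level in decibels relative to the hearing threshold."""
    return 20 * math.log10(pressure_pa / P_REF)

print(round(spl_db(20e-6)))  # 0 dB: the threshold itself, not silence
print(round(spl_db(2e-3)))   # 40 dB: 100x the reference pressure
print(round(spl_db(2.0)))    # 100 dB: uncomfortably loud
```

Because the scale is logarithmic, the 0 dB to 100 dB range covers a hundred-thousand-fold span of sound pressure.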

According to Stacy Lee Ware in her paper “Human Hearing Loss”, sounds above 120 dB mark the “threshold of pain”, but long-term exposure to sounds as low as 85 dB can result in permanent hearing loss or tinnitus, a condition in which a continuous ringing is heard in the ears, because cochlear hair cells cannot regenerate once they are damaged. Infectious diseases and aging also affect our hearing ability. However, these conditions can be partly mitigated by electronic hearing aids, cochlear implants, bone-anchored devices, and the like, which help deliver information to the brain without hindrance. And when we are exposed to loud music, it is highly advisable to use earmuffs or earplugs to keep the sound from damaging our precious ears.

The Fletcher–Munson curves

Our ears are not equally receptive to all frequencies in the audible range. A sound at one frequency may seem louder than an equally intense sound at a different frequency. This is shown by the Fletcher–Munson curves (equal-loudness contours), which are measured by presenting pure steady tones (sounds that fluctuate sinusoidally at a fixed frequency) and plotting sound pressure level against frequency. They describe the sensitivity of the human ear to different frequencies at different loudness levels. The unit of perceived loudness is the phon, and each contour is labelled with its phon value. According to Dustin Chaffin, in his article “Equal Loudness Contour (and How to Use It)”, the graph tells us that “at 40 phons, you’ll need about 63dB (SPL) at 100Hz to equal the perceived loudness of 3500Hz at 34 decibels”. This means that, at low volumes, our ears are more sensitive to mid-high frequencies than to low frequencies. Our ears are also most receptive to the frequencies of the human voice, in the 1 kHz to 5 kHz range (the lowest “dip” in the graph, indicating the lowest volume required to be heard). These are the frequencies that help us recognize consonants, for example in pan, man, and fan. Between 60 dB and 80 dB the curves flatten a bit, making that a good level for listening to music, and these are the volumes most often used by musicians to give their music a fuller range.
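Chaffin's numbers can be turned into an intuition about pressure: a difference of d decibels corresponds to a pressure ratio of 10^(d/20). A quick sketch:

```python
def pressure_ratio(db_a, db_b):
    """How many times more sound pressure db_a represents than db_b."""
    return 10 ** ((db_a - db_b) / 20)

# Chaffin's 40-phon example: 63 dB SPL at 100 Hz sounds only as loud as
# 34 dB SPL at 3500 Hz -- a 29 dB gap.
ratio = pressure_ratio(63, 34)
print(round(ratio, 1))  # 28.2
```

So at these low levels the 100 Hz tone needs roughly 28 times the sound pressure of the 3500 Hz tone to be perceived as equally loud.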

Every sound wave our brain has ever interpreted travelled a long way first, and this journey of sound waves is what lets us understand the world around us. From the vibration of the particles near the sound source to the interpretation of the sound, we don’t really see how much our body has worked or is working, but after learning this, we might be amazed by the functioning of our bodies. Now we know why certain frequencies are inaudible or uncomfortable for us. We certainly have to take care of our ears, as they are sensitive: once the hair cells in the inner ear are damaged, permanent hearing loss may occur, which will restrict our understanding of our surroundings. However, if we take proper care of our ears and avoid extremely loud sounds, we can keep hearing longer and interpret this world better, with our favorite song playing in the background.


“Transmission of sound within the inner ear”, Britannica, accessed December 17, 2020.

“How Do We Hear?”, The National Institute on Deafness and Other Communication Disorders (NIDCD) Information Clearinghouse, January 3, 2018, accessed December 17, 2020.

Long, Marshall, “Human Perception and Reaction to Sound”, Architectural Acoustics (Second Edition), March 3, 2014, accessed December 17, 2020, https://www-

Chaffin, Dustin M., “Equal Loudness Contour (and How to Use It)”, Medium, July 26, 2016, accessed December 17, 2020, how-to-use-it-55cf77661156

Ware, Stacy Lee, “Human Hearing Loss”, PeerJ PrePrints, May 1, 2014.



Isha Sharma

A sucker for creative coding, poetry, hauntingly beautiful songs of despair, serendipity || CS