By Nick Kovarik
Acoustic or natural sound is the propagation of vibrational energy through a medium, typically air. When an object vibrates, the air molecules near it are alternately squeezed together and pulled apart, and that disturbance radiates outward. The two halves of this cycle are called compression and rarefaction.
Acoustic sound has five major properties. For simple digital recordings, only the first two are fundamental. Recall them with the mnemonic Find A Very Wide Parachute: Frequency, Amplitude, Velocity, Wavelength, Phase.
- Frequency:
- The number of compression and rarefaction cycles that a vibration completes in one second, measured in hertz (Hz).
- Most people can hear from roughly 20 Hz (very low frequencies) up to about 20 kHz (20,000 Hz, very high frequencies).
- Since each octave is a doubling of frequency, we could say that the maximum range of human hearing spans approximately 10 octaves (the sketch below walks through them). For reference, a grand piano with 88 keys covers a little over seven octaves, from about 27.5 Hz to 4.2 kHz.
- 20-40 Hz, 40-80 Hz, 80-160 Hz, 160-320 Hz, 320-640 Hz, 640 Hz-1.28 kHz, 1.28-2.56 kHz, 2.56-5.12 kHz, 5.12-10.24 kHz, and 10.24 kHz up to the top of the hearing range.
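To make the octave math concrete, here is a minimal Python sketch (my own illustration, not tied to any audio library) that enumerates the ten bands above by repeatedly doubling 20 Hz:

```python
# Each octave is a doubling of frequency, so the audible range of roughly
# 20 Hz - 20 kHz can be walked in about ten doublings.
low = 20.0  # approximate lower limit of human hearing, in Hz

for octave in range(1, 11):
    print(f"Octave {octave:2d}: {low:8.0f} Hz - {low * 2:8.0f} Hz")
    low *= 2

# The tenth band ends at 20,480 Hz, slightly past the nominal 20 kHz limit,
# which is why the range is described as "approximately" ten octaves.
```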
- Amplitude:
- Amplitude is the intensity of a sound wave, experienced as loudness. On a waveform plot, frequency plays out along the X-axis (representing time), while amplitude is the corresponding Y-axis (representing intensity).
- Loudness in audio is measured in decibels (dB), which is not a fixed physical unit. The decibel is a logarithmic ratio used to compare two levels of energy, typically a measured level against a reference, whether acoustic (Sound Pressure Level, or SPL) or an electrical signal.
- The decibel is logarithmic to make managing loudness simpler, because the human ear can hear sound pressures across a ratio of 1:10,000,000 and greater. If our hands could measure weight the way our ears measure loudness, we could accurately weigh both a feather and an elephant.
- In general, an increase of 6 dB doubles the sound pressure, an increase of about 10 dB is perceived as roughly twice as loud, and a change of 1 dB alters the signal by about 12%. Small numbers can make large differences here; the sketch below checks these figures.
- In terms of acoustic energy, 0 dB-SPL is the threshold of hearing, 60 dB-SPL is the loudness of an average conversation, 120 dB-SPL is known as the threshold of feeling, and 140 dB-SPL is the threshold of pain.
- Lastly, human beings are not equally sensitive to all frequencies. We are much more sensitive to midrange frequencies (roughly 500 Hz to 5 kHz) than to very high or very low frequencies. Therefore, the perceived frequency balance of a recording can change when we change its volume.
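The figures in the bullets above follow from the standard amplitude formula dB = 20 × log10(ratio). A minimal Python sketch (the function names are my own, for illustration only):

```python
import math

def db_from_amplitude_ratio(ratio):
    """Decibel change corresponding to an amplitude (pressure or voltage) ratio."""
    return 20 * math.log10(ratio)

def amplitude_ratio_from_db(db):
    """Inverse: the amplitude ratio corresponding to a decibel change."""
    return 10 ** (db / 20)

print(db_from_amplitude_ratio(2))            # ~6.02 dB: doubling the sound pressure
print(amplitude_ratio_from_db(1))            # ~1.12: a 1 dB change is roughly 12% of the signal
print(db_from_amplitude_ratio(10_000_000))   # 140 dB: the 1:10,000,000 ratio mentioned above
```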
- Velocity:
- The speed of sound, which is approximately 1,130 feet per second at sea level and 70 °F. This is crucial for live sound in open-air environments, but it varies with temperature and other atmospheric conditions (as the sketch below shows), and it is not usually a concern for small-scale recordings.
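As a rough illustration of how temperature shifts the speed of sound, here is a sketch using the common textbook approximation v ≈ 331.3 + 0.606 × T(°C) for dry air (the conversion to feet per second is my own addition):

```python
def speed_of_sound_fps(temp_f):
    """Approximate speed of sound in dry air, in feet per second."""
    temp_c = (temp_f - 32) * 5 / 9
    meters_per_second = 331.3 + 0.606 * temp_c   # linear approximation for dry air
    return meters_per_second * 3.28084           # convert m/s to ft/s

print(round(speed_of_sound_fps(70)))   # ~1129 ft/s, matching the 1,130 ft/s figure above
print(round(speed_of_sound_fps(32)))   # ~1087 ft/s at freezing: temperature alone shifts it noticeably
```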
- Wavelength:
- The physical length of one cycle of a sound wave, which has an inverse relationship to frequency (wavelength = velocity / frequency): high frequencies have much shorter wavelengths than low frequencies. This is useful when considering professional or large-scale acoustics.
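A quick sketch of that inverse relationship, using wavelength = velocity / frequency and the 1,130 ft/s figure from above:

```python
SPEED_OF_SOUND_FPS = 1130.0   # speed of sound in ft/s at roughly 70 °F, as above

def wavelength_ft(frequency_hz):
    """Wavelength in feet: wavelength = velocity / frequency."""
    return SPEED_OF_SOUND_FPS / frequency_hz

for f in (20, 100, 1_000, 10_000, 20_000):
    print(f"{f:>6} Hz -> {wavelength_ft(f):8.2f} ft")

# A 20 Hz wave is about 56 feet long, while a 20 kHz wave is less than an inch,
# which is the inverse relationship described above.
```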
- Phase:
- The time relationship between two or more sound waves at a given point in their compression and rarefaction cycles.
- Two identical sound waves combined 180° out of phase will cancel to a null and produce no sound. This is known as destructive interference.
- Two identical sound waves combined perfectly in phase will reinforce each other, doubling the amplitude. This is known as constructive interference (both cases are verified in the sketch below).
- If you are recording a single source, such as one voice, phase should not be an issue.
- If you are recording multiple sources simultaneously, follow the 3:1 rule: the distance between microphones should be at least 3 times the distance from each microphone to its own source.
- Example: Two microphones are both 10 centimeters from their vocalists. Therefore, the distance between the two microphones should be at least 30 centimeters to avoid phasing issues.
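Constructive and destructive interference are easy to verify numerically. A minimal NumPy sketch (the 440 Hz test tone is an arbitrary choice):

```python
import numpy as np

fs = 48_000                           # sample rate in Hz
t = np.arange(fs) / fs                # one second of sample times
wave = np.sin(2 * np.pi * 440 * t)    # a 440 Hz sine wave

in_phase = wave + wave                # 0° offset: constructive interference
out_of_phase = wave + (-wave)         # 180° offset (an inverted copy): destructive interference

print(np.max(np.abs(in_phase)))       # ~2.0: the amplitude doubles (about +6 dB)
print(np.max(np.abs(out_of_phase)))   # 0.0: complete cancellation, silence
```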
It is important to remember that in reality, every sound we hear is a mixture of different frequencies at different amplitudes in different environments. Therefore, any recording of natural sound will be composed of a wide range of frequencies and harmonics that create the timbre of that sound. This is how we can detect the difference between a violin and piano, even if they are playing the same note simultaneously.
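To illustrate timbre as a mixture of harmonics, here is a sketch that builds two tones with the same fundamental but different harmonic balances (the level values are made up purely for illustration):

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
fundamental = 220.0   # both "instruments" play the same 220 Hz note

def tone(harmonic_levels):
    """Sum of sine harmonics of the fundamental, weighted by the given levels."""
    return sum(level * np.sin(2 * np.pi * fundamental * n * t)
               for n, level in enumerate(harmonic_levels, start=1))

bright = tone([1.0, 0.8, 0.6, 0.4, 0.2])     # strong upper harmonics
mellow = tone([1.0, 0.3, 0.1, 0.05, 0.02])   # energy concentrated in the fundamental

# Same pitch and same length, but the different harmonic mix gives each
# waveform a different shape, and therefore a different timbre.
```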