What Does It Mean to Normalize Audio and Why Should I Do It?
To normalize audio is to change its volume to reach a desired decibel level, usually as loud as it can be, which for digital audio caps out at 0 dBFS. Audio normalization applies the same gain to the entire clip, so it doesn't affect the quality of the sound the way compression does; compression changes different parts of a clip by different amounts to make the audio volume more even relative to itself.
Why Normalize Audio
Streaming services set a standard normalization level across audio pieces hosted on their platforms so that listeners don’t have to constantly adjust the volume of different songs as they come up on their playlists. Every platform has a different target level, so you will probably have to normalize your audio by different amounts for each platform you want to upload your audio to.
In addition to uploading audio to streaming services, there are other reasons you would want to normalize audio. You may want to even out the volume levels of several different audio clips that you’ve edited together. For example, if you’ve recorded an album, you want the level of each song to be similar enough to the next one that listeners don’t have to adjust the volume each time and also don’t feel jarred when each new song comes on. The album should have a similar feel through each track.
How to Normalize Audio
There are different methods of audio normalization, and the method you use depends on how you measure the volume of the clips in question.
Peak Volume Normalization
You can use peak audio detection when normalizing audio, a process in which your program determines the volume of your file by measuring the volume of the highest peak. The program then raises the volume of that highest peak to 0 dBFS (decibels relative to full scale), which is the loudest you can go in digital audio. Maximizing to 0 dBFS tends to be the default setting for most programs, but you can usually choose the decibel level to which you want to maximize your audio. The program calculates the volume increase needed to bring the highest peak of your audio to either 0 dBFS or the level you have set, and then raises every part of your audio clip by that same amount.
You should know that this method of audio normalization raises the level of everything in your clip, including any unwanted background noise.
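The peak method described above comes down to one gain applied everywhere. Here is a minimal sketch in Python, assuming samples are floats with full scale at 1.0; the function name and defaults are illustrative, not from any particular audio tool.

```python
import math

def peak_normalize(samples, target_dbfs=0.0):
    """Scale every sample by one gain so the highest peak lands at target_dbfs."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # pure silence: no meaningful gain to compute
    peak_dbfs = 20 * math.log10(peak)                 # current peak level, full scale = 1.0
    gain = 10 ** ((target_dbfs - peak_dbfs) / 20)     # linear gain to reach the target
    return [s * gain for s in samples]
```

Because the same gain hits every sample, the relative balance inside the clip is untouched, which is exactly why any background noise comes up along with the signal.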
Loudness Volume Normalization
The loudness volume normalization process is more akin to the way the human ear works. The loudness we perceive is not a linear function of a sound's measured level; it also depends on frequency and duration. For example, the human ear perceives a sustained sound as louder than a brief sound, even if both sounds are played at the same measured level. Loudness normalization accounts for these eccentricities in human hearing.
Loudness normalization is measured in LUFS (loudness units relative to full scale), which is the standard measurement in film, TV, radio, and streaming services, as it more closely resembles the way humans perceive sound. As with dBFS, 0 LUFS corresponds to digital full scale, and platform targets sit below it; many streaming services normalize to around -14 LUFS.
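Measuring LUFS itself requires K-weighted filtering and gating as defined in ITU-R BS.1770, which is too involved for a short sketch, but once a meter has given you a reading, normalizing to a platform target is again just a single gain. A hedged sketch, assuming you already have a measured loudness from some BS.1770-compliant meter:

```python
def normalize_to_lufs(samples, measured_lufs, target_lufs):
    """Apply the single gain that moves a clip from its measured loudness
    to the target loudness. measured_lufs must come from a real loudness
    meter (ITU-R BS.1770); this sketch only applies the resulting gain."""
    gain = 10 ** ((target_lufs - measured_lufs) / 20)
    return [s * gain for s in samples]
```

For example, a clip measured at -20 LUFS needs +6 dB of gain to reach a -14 LUFS target.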
RMS Volume Normalization
You can also use RMS (root mean square) volume detection when normalizing audio. This method estimates the overall loudness of an audio clip by squaring the sample values, averaging them over the clip, and taking the square root of that average. The program then applies a single gain across the whole clip to bring that RMS level to your target.
But RMS normalization does not perceive volume the way the human ear does, so it usually works better to use LUFS instead. The key is to listen to your audio and assess it as a whole, rather than only worrying about meeting a certain number of dBs to please Apple or Amazon.
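The RMS approach can be sketched in a few lines of Python, again assuming float samples with full scale at 1.0; the -12 dBFS default target here is illustrative only.

```python
import math

def rms_normalize(samples, target_dbfs=-12.0):
    """Scale the clip so its RMS (root mean square) level hits target_dbfs."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))  # root of the mean square
    if rms == 0:
        return list(samples)  # silence: nothing to scale
    rms_dbfs = 20 * math.log10(rms)
    gain = 10 ** ((target_dbfs - rms_dbfs) / 20)
    return [s * gain for s in samples]
```

Note that targeting an RMS level can push individual peaks above full scale, so real tools typically pair this with a peak check.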
When and Why to Normalize Audio
In a world where video content is king, don't get left behind. Whether you're promoting a product, showcasing an athlete, or preserving a memory, Studio by MatchTune allows you to get professional-level audio synced to your video in a snap. We hope you've found this brief introduction to normalizing audio helpful as you learn more about creating great content to share with the world.