Part 5: Sound Quality
How to Prevent Distortion
Unwanted distortion is caused by a signal which is "too strong". If an audio signal level is too high for a particular component to cope with, then parts of the signal will be lost. This results in a rasping, distorted sound.
To illustrate this point, the pictures below represent a few seconds of music which has been recorded by a digital audio program. The maximum possible dynamic range (the range from quietest to loudest parts) of the signal is shown as 0 to +/-100 units.
In the first example, the amplitude (strength / height) of the signal falls comfortably within the +/-100 unit range. This is a well-recorded signal.
In the second example, the signal is amplified by 250%. In this case, the recording components can no longer accommodate the dynamic range, and the strongest portions of the signal are cut off. This is where distortion occurs.
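The clipping described above is easy to demonstrate numerically. The sketch below (plain NumPy, with made-up sample values) amplifies a sine wave by 250% and clips it to the ±100-unit range used in these examples:

```python
import numpy as np

# A "well-recorded" signal: a sine wave peaking at 80 units,
# comfortably inside the +/-100 unit range.
t = np.linspace(0, 1, 1000)
signal = 80 * np.sin(2 * np.pi * 5 * t)

# Amplify by 250%: the peaks now want to reach 200 units.
amplified = signal * 2.5

# The component can only pass +/-100 units, so the tops of the
# waveform are cut off flat -- this flattening is the distortion.
clipped = np.clip(amplified, -100, 100)

print(amplified.max())  # ~200 units: what the signal "wants" to be
print(clipped.max())    # 100 units: the flattened, distorted peak
```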
These examples can be used as an analogy for any audio signal. Imagine that the windows above represent a pathway through a component in a sound system, and the waves represent the signal travelling along the pathway. Once the component's maximum dynamic range is breached, you have distortion.
Distortion can occur at almost any point in the audio pathway, from the microphone to the speaker. The first priority is to find out exactly where the problem is.
Ideally, you would want to measure the signal level at as many points as possible, using a VU (Volume Unit) meter or similar device. Generally speaking, you should keep the level below about 0dBu at every point in the pathway.
If you can't measure the signal level, you'll have to do some deducing. Follow the entire audio pathway, beginning at the source (the source could be a microphone, tape deck, musical instrument, etc). Here are some things to look for:
- Is the distortion coming from a microphone? This could be caused by a very loud noise being too close to the mic. Try moving the mic further away from the noise source.
- Are you seeing any "peak" or "clip" lights on any of your equipment? These are warnings that a signal level is too high.
- Are any volume or gain controls in your system turned up suspiciously high? Are there any obvious points where you could drop the level?
- Are your speakers being driven too hard? If you have an amplifier which is pushing the speakers beyond their design limits, then be careful - you may well find that the distortion becomes permanent.
- If the distortion is coming from occasional peaking, consider adding a compressor.
- Could the distortion be caused by faulty equipment?
- Is the problem really distortion? There are some other unpleasant noises which could be confused with distortion; for example, the graunching sounds made by a dodgy cable connection or a dirty volume knob.
How to Eliminate Feedback
Audio feedback is the ringing noise (often described as squealing, screeching, etc) sometimes present in sound systems. It is caused by a "looped signal", that is, a signal which travels in a continuous loop.
In technical terms, feedback occurs when the gain in the signal loop reaches "unity" (0dB gain).
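The unity-gain condition can be illustrated with a toy calculation (the gain figures are hypothetical): each trip around the loop multiplies the signal by the loop gain, so anything at or above 1.0 (0dB) grows into runaway feedback, while anything below dies away:

```python
def level_after_loops(start_level, loop_gain, n_loops):
    """Signal level after travelling around the feedback loop n times."""
    return start_level * (loop_gain ** n_loops)

# Loop gain below unity (0.8, roughly -2dB): the sound decays to nothing.
print(level_after_loops(1.0, 0.8, 20))   # ~0.01 -- a tiny fraction of the original

# Loop gain above unity (1.2, roughly +1.6dB): the squeal builds rapidly.
print(level_after_loops(1.0, 1.2, 20))   # ~38 -- many times the original
```

This is why every fix in the list that follows works the same way: each one reduces the gain somewhere in the loop until it drops below unity.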
One of the most common feedback situations is shown in the diagram below - a microphone feeds a signal into a sound system, which then amplifies and outputs the signal from a speaker, which is picked up again by the microphone.
Of course, there are many situations which result in feedback. For example, the microphone could be replaced by the pickups of an electric guitar. (In fact many guitarists employ controlled feedback to artistic advantage. This is what's happening when you see a guitarist hold his/her guitar up close to a speaker.)
To eliminate feedback, you must interrupt the feedback loop.
Here are a few suggestions for controlling feedback:
- Change the position of the microphone and/or speaker so that the speaker output isn't feeding directly into the mic. Keep speakers further forward (i.e. closer to the audience) than microphones.
- Use a more directional microphone.
- Speak (or sing) close to the microphone.
- Turn the microphone off when not in use.
- Equalise the signal, lowering the frequencies which are causing the feedback.
- Use a noise gate (automatically shuts off a signal when it gets below a certain threshold) or filter.
- Lower the speaker output, so the mic doesn't pick it up.
- Avoid aiming speakers directly at reflective surfaces such as walls.
- Use direct injection feeds instead of microphones for musical instruments.
- Use headset or in-ear monitors instead of speaker monitors.
You could also try a digital feedback eliminator. There are various models available with varying levels of effectiveness. The better ones are reported to produce reasonable results.
Feedback can occur at any frequency. The frequencies which cause most trouble will depend on the situation but factors include the room's resonant frequencies, frequency response of microphones, characteristics of musical instruments (e.g. resonant frequencies of an acoustic guitar), etc.
Feedback can be "almost there", or intermittent. For example, you might turn down the level of a microphone to stop the continuous feedback, but when someone talks into it you might still notice a faint ringing or unpleasant tone to the voice. In this case, the feedback is still a problem and further action must be taken.
Equalization, or EQ for short, means boosting or reducing (attenuating) the levels of different frequencies in a signal.
The most basic type of equalization familiar to most people is the treble/bass control on home audio equipment. The treble control adjusts high frequencies, the bass control adjusts low frequencies. This is adequate for very rudimentary adjustments — it only provides two controls for the entire frequency spectrum, so each control adjusts a fairly wide range of frequencies.
Advanced equalization systems provide a fine level of frequency control. The key is to be able to adjust a narrower range of frequencies without affecting neighbouring frequencies.
Equalization is most commonly used to correct signals which sound unnatural. For example, if a sound was recorded in a room which accentuates high frequencies, an equalizer can reduce those frequencies to a more normal level. Equalization can also be used for applications such as making sounds more intelligible and reducing feedback.
There are several common types of equalization, described below.
In shelving equalization, all frequencies above or below a certain point are boosted or attenuated the same amount. This creates a "shelf" in the frequency spectrum.
Bell equalization boosts or attenuates a range of frequencies centred around a certain point. The specified point is affected the most, frequencies further from the point are affected less.
Graphic equalizers provide a very intuitive way to work — separate slider controls for different frequencies are laid out in a way which represents the frequency spectrum. Each slider adjusts one frequency band so the more sliders you have, the more control.
A graphic equalizer is, as the name implies, an equalizer which uses a graphical layout to represent the changes made. It uses a series of sliders (usually vertical) which correspond to a set of fixed frequency bands. You raise or lower each slider to boost or lower (attenuate) the level of that frequency band.
Graphic equalizers are commonly referred to by the number of bands (e.g. 15-band, 31-band) or by the frequency separation of each band expressed in octaves (e.g. 2/3 octave, 1/3 octave, 1/6 octave).
Parametric equalizers use bell equalization, usually with knobs for different frequencies, but have the significant advantage of being able to select which frequency is being adjusted. Parametrics are found on sound mixing consoles and some amplifier units (guitar amps, small PA amps, etc).
The word parametric means something which has one or more parameters on which the outcome depends. When applied to audio equalization, this means equalization which depends on parameters such as centre frequency, bandwidth and amplitude. The user is able to adjust these parameters to determine exactly how the equalization is applied.
The most important feature of a parametric equaliser is that it allows you to select which frequency to adjust. For example, instead of having a simple mid-range adjustment which boosts or reduces a pre-set range of frequencies, you can specify exactly which mid-range frequency to boost or reduce. This gives you great flexibility and accuracy.
The illustration on the right shows parametric controls for upper-mid-range frequencies. These controls work together — the brown knob determines which frequency is to be adjusted (0.6kHz to 10kHz) and the green knob makes the adjustment (-15dB to +15dB).
Note that although you select a specific frequency, the actual adjustment will apply to frequencies above and below this frequency as well. This is why it is called the centre frequency — it is the frequency at the centre of the adjustment.
Example of use: Let's say you have a feedback problem somewhere in the 5kHz range but you aren't sure of the exact frequency. Turn the green knob right down, then slowly rotate the brown knob through the frequency range. As you do so, you will hear the selected frequencies being reduced. When you reach the frequency which is causing the feedback, the feedback will be reduced.
Bandwidth Control (Q)
As noted above, adjustments are made to a range of frequencies around the centre frequency. The bandwidth control determines how far above and below the centre frequency the adjustment will affect, i.e. the width or spread of frequencies.
A narrow bandwidth adjustment is very specific, useful for accurately removing or accentuating a specific frequency. This would be helpful in the feedback situation described above — once you have identified the offending frequency, reduce the bandwidth so you are adjusting the smallest range possible while still eliminating the feedback.
A broader bandwidth affects more frequencies, useful for adjusting a wider range such as the upper frequencies in a voice. Broader adjustments tend to sound more natural.
Note: Bandwidth controls are not available on all parametric equalizers.
The amplitude control sets the level of the adjustment, measured in decibels (dB).
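The three parametric controls (centre frequency, bandwidth and amplitude) map directly onto a standard "peaking" filter. The sketch below uses the widely published RBJ audio-EQ cookbook coefficients to build one band and checks that the boost at the centre frequency matches the amplitude setting; the sample rate and settings are arbitrary examples, not values from any particular equalizer:

```python
import numpy as np

def peaking_eq(f0, gain_db, q, fs):
    """Biquad coefficients for one parametric (bell) EQ band.

    f0      : centre frequency in Hz (the 'brown knob')
    gain_db : boost/cut in dB (the 'green knob')
    q       : bandwidth control -- higher Q means a narrower bell
    fs      : sample rate in Hz
    """
    a = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    num = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return num / den[0], den / den[0]

def gain_at(freq, num, den, fs):
    """Filter magnitude response (in dB) at a single frequency."""
    z = np.exp(-1j * 2 * np.pi * freq / fs)
    h = (num[0] + num[1] * z + num[2] * z**2) / (den[0] + den[1] * z + den[2] * z**2)
    return 20 * np.log10(abs(h))

# A +6dB boost at 5kHz with a fairly narrow bandwidth (Q = 4).
num, den = peaking_eq(5000, 6.0, 4.0, fs=48000)
print(round(gain_at(5000, num, den, 48000), 2))  # ~6.0 dB at the centre frequency
print(round(gain_at(500, num, den, 48000), 2))   # ~0 dB well away from the centre
```

Raising Q narrows the bell, which is exactly the "smallest range possible" adjustment recommended for notching out feedback.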
- Audio monitoring & metering.
- Signal processing equipment (processing).
- Effects.
Audio Monitoring & Metering
Audio Metering means using a visual display to monitor audio levels. This helps maintain audio signals at their optimum level and minimise degradation. There are two common types of meter which are used to measure audio levels:
- VU Meter (Volume Unit)
- PPM Meter (Peak Program)
Both types of meter are available in various forms including stand-alone units, components in larger systems, and software applications. Whatever the type of meter, two characteristics are important:
- The scale which defines which units are being measured.
- The ballistics of the meter which determine how fast it responds to sound and returns to a lower level.
A VU (volume unit) meter is an audio metering device. It is designed to visually measure the "loudness" of an audio signal.
The VU meter was developed in the late 1930s to help standardise transmissions over telephone lines. It went on to become a standard metering tool throughout the audio industry.
VU meters measure average sound levels and are designed to represent the way human ears perceive volume.
The rise time of a VU meter (the time it takes to register the level of a sound) and the fall time (the time it takes to return to a lower reading) are both 300 milliseconds.
The optimum audio level for a VU meter is generally around 0VU, often referred to as "0dB". Technically speaking, 0VU is equal to +4 dBm, or 1.228 volts RMS across a 600 ohm load.
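The +4dBm / 1.228V figure can be verified from the definition of the dBu/dBm reference: the voltage that dissipates 1mW in a 600 ohm load (about 0.775V RMS). A quick check, assuming that standard reference:

```python
import math

# Reference voltage: 1 mW into 600 ohms -> V = sqrt(P * R) = sqrt(0.6)
DBU_REF_VOLTS = math.sqrt(0.6)   # ~0.7746 V RMS

def dbu_to_volts(dbu):
    """Convert a level in dBu (dBm across 600 ohms) to RMS volts."""
    return DBU_REF_VOLTS * 10 ** (dbu / 20)

print(round(dbu_to_volts(4), 3))   # 1.228 V RMS -- the 0VU operating level
print(round(dbu_to_volts(0), 3))   # 0.775 V RMS -- the reference itself
```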
VU meters work well with continuous sounds but poorly with fast transient sounds.
Peak Program Meter (PPM)
A Peak Program Monitor (PPM), sometimes referred to as a Peak Reading Meter (PRM), is an audio metering device. Its general function is similar to a VU meter but there are some important differences.
The rise time of a PPM (the time it takes to register the level of a sound) is much faster than a VU meter, typically 10 milliseconds compared to 300 milliseconds. This makes transient peaks easier to measure.
The fall time of a PPM (the time it takes the meter to return to a lower reading) is much slower.
PPM meters are very good for reading fast, transient sounds. This is especially useful in situations where pops and distortion are a problem.
Audio compression is a method of reducing the dynamic range of a signal. All signal levels above the specified threshold are reduced by the specified ratio.
The example below shows how a signal level is reduced by 2:1 (the output level above the threshold is halved) and 10:1 (severe compression).
How to Use a Compressor
Audio compression is a method of reducing the dynamic range of a signal.
You will need:
- A compressor with manual controls.
- An audio source to be compressed (eg. microphone, musical instrument, output of sound desk, etc).
- A destination device with which to feed the compressed output (eg. tape deck, sound desk, amplifier, etc).
- Connect the source to the compressor's input, and the compressor's output to the destination device.
- Adjust the compressor's input and output gains to appropriate levels.
- Set the threshold level to the point at which you wish compression to take effect. Signals below this level will not be affected. Signal levels above the threshold will be reduced according to the compression ratio.
- Set the compression ratio. Ratios of 5:1 or less will produce fairly smooth compression; ratios of 10:1 or more will produce more severe cutting off.
- Set the attack time. This is the delay between detection of a signal above the threshold, and the commencement of compression (ie. the time it takes to "attack" the signal).
- Set the decay time. This is the time taken to release the signal from compression.
- Adjust any other settings on the compressor. If you don't know what they are, try to put them on automatic, or disable them.
Example: Set the compressor to a threshold of 0dB and a compression ratio of 3:1. In this case, all signals below 0dB will be unaffected, and all signals above 0dB will be reduced at a ratio of 3:1 (i.e. for every 1dB of input over 0dB, only 1/3dB will be output over 0dB).
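The threshold/ratio arithmetic above can be written as a simple static gain curve (levels in dB). This is a sketch of the maths only, not a real-time compressor — it ignores the attack and decay behaviour described in the steps above:

```python
def compress(level_db, threshold_db, ratio):
    """Static compression curve: above the threshold, `ratio` dB of
    input produces only 1 dB of output."""
    if level_db <= threshold_db:
        return level_db                              # below threshold: untouched
    return threshold_db + (level_db - threshold_db) / ratio

# Threshold 0dB, ratio 3:1 (the example above):
print(compress(-10.0, 0.0, 3.0))  # -10.0 : below threshold, unchanged
print(compress(6.0, 0.0, 3.0))    # 2.0   : 6dB over becomes 2dB over

# A limiter is essentially the same curve with a very high ratio:
print(compress(6.0, 0.0, 20.0))   # 0.3   : barely rises above the threshold
```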
A limiter is a type of compressor designed for a specific purpose — to limit the level of a signal to a certain threshold. Whereas a compressor will begin smoothly reducing the gain above the threshold, a limiter will almost completely prevent any additional gain above the threshold. A limiter is like a compressor set to a very high compression ratio (at least 10:1, more commonly 20:1 or more). The graph below shows a limiting ratio of infinity to one, i.e. there is no gain at all above the threshold.
Input Level vs Output Level With Limiting Threshold
Limiters are used as a safeguard against signal peaking (clipping). They prevent occasional signal peaks which would be too loud or distorted. Limiters are often used in conjunction with a compressor — the compressor provides a smooth roll-off of higher levels and the limiter provides a final safety net against very strong peaks.
Audio expansion means to expand the dynamic range of a signal. It is basically the opposite of audio compression.
Like compressors and limiters, an audio expander has an adjustable threshold and ratio. Whereas compression and limiting take effect whenever the signal goes above the threshold, expansion affects signal levels below the threshold.
Any signal below the threshold is expanded downwards by the specified ratio. For example, if the ratio is 2:1 and the signal drops 3dB below the threshold, the signal level will be reduced to 6dB below the threshold. The following graph illustrates two different expansion ratios — 2:1 and the more severe 10:1.
Input Level vs Output Level With Expansion
An extreme form of expander is the noise gate, in which lower signal levels are reduced severely or eliminated altogether. A ratio of 10:1 or higher can be considered a noise gate.
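The downward-expansion example above (2:1 ratio, a signal 3dB below the threshold pushed to 6dB below) can be sketched as the mirror image of the compression curve:

```python
def expand(level_db, threshold_db, ratio):
    """Static downward-expansion curve: every dB below the threshold
    becomes `ratio` dB below it; levels above are untouched."""
    if level_db >= threshold_db:
        return level_db
    return threshold_db - (threshold_db - level_db) * ratio

# Ratio 2:1, threshold 0dB: 3dB below becomes 6dB below.
print(expand(-3.0, 0.0, 2.0))    # -6.0

# A ratio of 10:1 or more behaves like a noise gate --
# quiet signals (and background noise) are pushed down to inaudibility.
print(expand(-3.0, 0.0, 10.0))   # -30.0
print(expand(5.0, 0.0, 10.0))    # 5.0 : above threshold, unchanged
```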
Note: Some people also use the term audio expansion to refer to the process of decompressing previously-compressed audio data.
This page provides an overview of the most common audio effects used in sound production, with links to more detailed tutorials.
Equalization means boosting or reducing (attenuating) the levels of various frequencies in a signal. At its most basic, equalization can mean turning the bass/treble controls up or down. Advanced equalizers have fine controls for specific frequencies.
Common uses for equalization include correcting signals which sound unnatural and reducing feedback.
Compression & Limiting
Compression means reducing the dynamic range of a signal. All signal values above a certain adjustable threshold are reduced in gain relative to lower-level signals. This creates a more even signal level, reducing the level of the loudest parts.
Limiting is an extreme form of compression. Rather than smoothly reducing the gain of successively higher levels, all signal above the threshold is limited to the same gain. This creates a very hard cut-off point, over which there is no increase in level.
Expansion & Noise Gating
Expansion means increasing the dynamic range of a signal. High level signals maintain the same (or nearly the same) levels, low level signals are reduced (attenuated). This creates a greater range between quiet and loud. Expansion is the opposite of compression.
Noise gating is an extreme form of expansion — signals below a certain point are either heavily attenuated or eliminated completely. This leaves only higher level signals and removes background noise when the signal is not present.
Delay / Echo
Delay is a simple concept — the original audio signal is followed closely by a delayed repeat, just like an echo. The delay time can be as short as a few milliseconds or as long as several seconds. A delay effect can include a single echo or multiple echoes, usually reducing quickly in relative level.
Delay also forms the basis of other effects such as reverb, chorus, phasing and flanging.
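A basic single-tap echo really is this simple in code: copy the signal, shift it by the delay time, scale it down, and mix it back in (the delay and level here are arbitrary example values):

```python
import numpy as np

def add_echo(signal, delay_samples, echo_level):
    """Mix a delayed, attenuated copy of `signal` back into itself."""
    out = np.concatenate([signal, np.zeros(delay_samples)])
    out[delay_samples:] += echo_level * signal   # the delayed repeat
    return out

# A one-sample "click" followed 100 samples later by a half-level echo.
click = np.zeros(10)
click[0] = 1.0
echoed = add_echo(click, delay_samples=100, echo_level=0.5)

print(echoed[0])     # 1.0 : the original sound
print(echoed[100])   # 0.5 : its echo, quieter and later
```

Feeding the output back into the delay line instead of mixing in a single copy produces the multiple, steadily fading echoes mentioned above.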
Reverb is short for reverberation, the effect of many sound reflections occurring in a very short space of time. The familiar sound of clapping in an empty hall is a good example of reverb.
Reverb effects are used to restore the natural ambience to a sound, or to give it more fullness and body.
What is Reverb?
Reverberation, or reverb for short, refers to the way sound waves reflect off various surfaces before reaching the listener's ear.
The example on the right shows one person (the sound source) speaking to another person in a small room. Although the sound is projected most strongly toward the listener, sound waves also project in other directions and bounce off the walls before reaching the listener. Sound waves can bounce backwards and forwards many times before they die out.
When sound waves reflect off walls, two things happen:
- They take longer to reach the listener.
- They lose energy (get quieter) with every bounce.
The listener hears the initial sound directly from the source followed by the reflected waves. The reflections are essentially a series of very fast echoes, although to be accurate, the term "echo" usually means a distinct and separate delayed sound. The echoes in reverberation are merged together so that the listener interprets reverb as a single effect.
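Those "very fast echoes" are exactly how a crude reverb can be simulated: sum many delayed copies, each arriving a little later and a little quieter. The tap times and decay below are arbitrary illustration values, not a model of any real room:

```python
import numpy as np

def crude_reverb(signal, tap_delays, decay):
    """Sum progressively later, quieter reflections onto the signal."""
    out = np.concatenate([signal, np.zeros(max(tap_delays))])
    level = 1.0
    for d in tap_delays:
        level *= decay                             # each bounce loses energy...
        out[d:d + len(signal)] += level * signal   # ...and arrives later
    return out

impulse = np.zeros(5)
impulse[0] = 1.0   # a single sharp clap

# Reflections at 50, 130, 220, 340 samples, each 60% as loud as the last.
wet = crude_reverb(impulse, tap_delays=[50, 130, 220, 340], decay=0.6)
print(wet[50], wet[130])   # 0.6 then 0.36 -- a fading cluster of echoes
```

Real reverb units add thousands of densely spaced reflections so the echoes merge into the single smooth effect the listener perceives.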
In most rooms the reflected waves will scatter and be absorbed very quickly. People are seldom consciously aware of reverb, but subconsciously we all know the difference between "inside sound" and "outside sound". Outside locations, of course, have no walls and virtually no reverb unless you happen to be close to reflective surfaces.
Some rooms result in more reverb than others. The obvious example is a hall with large, smooth reflective walls. When the hall is empty, reverb is most pronounced. When the hall is full of people, they absorb a lot of sound waves so reverb is reduced.
Reverberation can be added to a sound artificially using a reverb effect. This effect can be generated by a stand-alone reverb unit, the reverb effect in another device (such as a mixer or multi-effects unit), or by audio processing software.
There are three possible reasons for adding reverb:
- To restore the natural sound as the listener would expect to hear it. For example, a recording done in a very low-reverb studio might sound unnatural unless reverb is added.
- To enhance the sound. For example, it is common to give vocal recordings more reverb than what would be considered natural. Reverb helps fill out the voice, giving it more "body" and is usually considered to be a flattering effect. Reverb can even help smooth minor vocal fluctuations so they aren't as obvious.
- To create special effects such as dream sequences, etc.
Reverb is the most common audio effect, partly because it is used in so many situations from music studios to television production. Every sound operator should have a good understanding of reverb and how/when to apply it.
It pays to be judicious with reverb. Because it is so effective, it can easily be over-used. The right amount of reverb can do wonders for a singer's voice but too much sounds silly.
The photo below is a rack-mountable Lexicon PCM 81 Digital Effects Processor. This unit has a number of effects including reverb.
The screenshot below is from Adobe Audition, a sound editing package. It gives you an idea of some of the common reverb settings. Notice how most of the presets are described by the real-world effect they are simulating, for example, "Concert Hall" and "Medium Empty Room". This is common in reverb units.
Examples of Reverb
The following examples show how the reverb effect works. The first example is dry, meaning that it has no effects or other processing applied. The next two examples have different levels of reverb applied.
- Drums - Dry
- Drums - Medium Reverb
- Drums - Hall
The chorus effect is designed to make a signal sound like it was produced by multiple similar sources. For example, if you add the chorus effect to a solo singer's voice, the result sounds like... a chorus.
Chorus works by adding multiple short delays to the signal, but rather than repeating the same delay, each delay is "variable length" (the speed and length of the delay changes). This adds the randomness required for the chorus sound. Varying the delay time also varies the pitch slightly, further adding to the "multiple sources" illusion.
The chorus effect was originally designed to make a single person's voice sound like multiple voices saying or singing the same thing, i.e. make a soloist into a chorus. It has since become a common effect used with musical instruments as well.
The effect is a type of delay — the original signal is duplicated and played at varying lengths and pitches. This creates the effect of multiple sources, as each source is very slightly out of time and tune (just as in real life). Technically, a chorus is similar to a flanger.
Common parameters include:
- Number of Voices: The number of times the source is multiplied.
- Delay: The minimum delay length, typically 20 to 30 milliseconds.
- Sweep Depth/Width: The maximum delay length.
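In code, the "variable length" delay is simply a delay time modulated by a slow LFO. The toy sketch below (all parameter values are arbitrary examples) mixes in one extra voice whose delay sweeps between the base delay and base + depth:

```python
import numpy as np

def chorus(signal, fs, base_delay_ms=25.0, depth_ms=5.0, rate_hz=0.8):
    """One-voice chorus: mix in a copy whose delay time is swept by an LFO."""
    n = np.arange(len(signal))
    # The LFO sweeps the delay between base and base + depth (in samples).
    delay = (base_delay_ms + depth_ms * 0.5 *
             (1 + np.sin(2 * np.pi * rate_hz * n / fs))) * fs / 1000.0
    read_pos = n - delay                     # where the delayed voice reads from
    idx = np.clip(read_pos, 0, len(signal) - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, len(signal) - 1)
    frac = idx - lo
    # Linear interpolation between samples; the changing delay also
    # bends the pitch slightly, adding to the "multiple sources" illusion.
    voice = (1 - frac) * signal[lo] + frac * signal[hi]
    voice[read_pos < 0] = 0.0                # nothing to read before t = 0
    return 0.5 * (signal + voice)            # original + chorused voice

fs = 8000
mono = np.sin(2 * np.pi * 220 * np.arange(fs) / fs)   # one second of 220Hz tone
wet = chorus(mono, fs)
print(wet.shape)   # same length as the input, now with a second "voice"
```

Adding more voices means repeating the swept-delay read with differently phased LFOs and summing the results.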
The following example is the chorus settings window in Adobe Audition.
Phasing & Flanging
Phasing, AKA phase shifting, is a sweeping, whooshing effect often used in music. The effect is created by mixing the original signal with another version of itself which has been phase-shifted. This results in various out-of-phase interactions over time which gives the sweeping effect.
Phasing is created by adding evenly-spaced notches in the frequency response and moving them up and down the frequency spectrum.
Flanging is a specific type of phasing which uses notches that are "harmonically related", i.e. related to musical notes.
Phase Shifting (Phasing)
Phase-shifting, AKA phasing, is an audio effect which takes advantage of the way sound waves interact with each other when they are out of phase. By splitting an audio signal into two signals and changing the relative phasing between them, a variety of interesting sweeping effects can be created.
The phasing effect was first made popular by musicians in the 1960s and has remained an important part of audio work ever since.
Phasing is similar to flanging, except that instead of a simple delay it uses notch and boost filters to phase-shift frequencies over time.
The following examples show some of the different types of phasing effects (MP3):
- Drums: Dry (original audio with no effect)
- Drums: Phased
- Drums: Crunchy Phase
- Drums: Trebly Phasing
- Drums: Bassy Phase
- Drums: Tremolo Phasing Left to Right
- Drums: Washy Phase Left to Right
- Drums: "Bubbles" Phase
The screenshot below is from Adobe Audition and shows some of the common settings available in phasing effects.
Flanging is a type of phase-shifting. It is an effect which mixes the original signal with a varying, slightly delayed version of the signal. The original and delayed signals are mixed more or less equally.
Flanging results in a sweeping sound — see the following example (MP3):
- Drums: Dry (original audio with no effect)
- Drums: Flanged
The term flanging comes from the days of reel-to-reel tape recording. The original signal was recorded on a second reel, and the delay was achieved by holding a finger or thumb on the edge (flange) of the reel to physically slow it down. Flanging was made popular during the psychedelic music era in the 1960s and 1970s.
The following example is the flanger settings window in Adobe Audition. It shows some of the settings commonly used in flanging:
(Continued in Part 7: Sound Colour, Noise, Colours & Types.)