
Basics of compression

Posted by Giacomo on February 24, 2012
Tips and Tricks

Compressors are dynamics processors that diminish an audio signal’s dynamic range by reducing the volume of loud sounds or amplifying quiet ones.

Usually, when audio engineers say compression they mean downward compression, which reduces the level of sounds above a certain threshold while quieter sounds remain unaffected.

The function of compression is to make performances more consistent in dynamic range so that they “sit” better in a mix with other instruments and hold the listener’s attention.
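To make the idea concrete, here is a minimal Python/NumPy sketch of downward compression. The parameter names and the per-sample approach are our own illustration; a real compressor also smooths its gain changes with attack and release times.

import numpy as np

def downward_compress(signal, threshold_db=-20.0, ratio=4.0):
    # Illustrative static compressor: samples above the threshold keep only
    # 1/ratio of the level that exceeds it; quieter samples pass unchanged.
    threshold = 10 ** (threshold_db / 20.0)   # dBFS threshold as linear amplitude
    magnitude = np.abs(signal)
    over = magnitude > threshold
    out = magnitude.copy()
    out[over] = threshold + (magnitude[over] - threshold) / ratio
    return np.sign(signal) * out

print(downward_compress(np.array([0.05, 0.5, -0.9])))   # [0.05, 0.2, -0.3]

The quiet sample passes through untouched, while the two loud ones are pulled down towards the threshold – exactly the “more consistent dynamic range” described above.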

The following video will show you how to use a compressor to achieve this consistency, so you can hear the difference yourself!


How to Mix Vocals in Pro Tools

Posted by Giacomo on February 13, 2012
Tips and Tricks

Straight from SAE Online’s library, here is a free mini-course on mixing vocals in Pro Tools!

Pro Tools is the industry-standard DAW for recording and mixing, which is why knowing how to use its more advanced capabilities is extremely useful for up-and-coming sound engineers. In this video, SAE Online’s learning advisor Phillip Zumbrunnen will give you some tips on creating better background vocals, adding the harmony and power that will take your mixes to the next level!

To find out more about SAE Online’s Mixdown techniques 101 course, just click here.


Mono Compatibility and Phase Cancellation

Posted by Giacomo on February 10, 2012
Tips and Tricks

When we listen back to music, whether it is through headphones or loudspeakers, what we are hearing are changes in air pressure caused by the movement of a diaphragm (i.e. the speaker cone). In a stereo system there are two such diaphragms, each driven by its own channel, and the pressure changes they produce can differ.

The phase relationship between these two signals can have many unpredictable outcomes for the audio engineer, especially if a mix requires mono compatibility. This AudioCourses article will outline some of the issues that can occur.

A simple way of imagining this phase relationship is to use a sine wave as an example. With our stereo sine wave you can see that the peaks and troughs of the waveform occur at the same time in both channels, as illustrated in the screenshot below (left channel is the top waveform, right channel is the bottom waveform).

To hear this sine wave click here.

In speaker terms, the 0 on the vertical axis is when the speaker cone is ‘at rest’. When the waveform goes above 0 the speaker ‘pushes out’, whilst when the wave drops below 0 the speaker ‘pulls inwards’. It is this movement of the cone which causes the changes in air pressure that we interpret as ‘sound’. Using the scale on the vertical axis, you can see that the waveform peaks at +/-100. As the signal is ‘in phase’, we will hear it at the same amplitude at the same time in both ears.
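If you want to recreate the in phase example yourself, a few lines of Python/NumPy will do it. The 440 Hz frequency and 44.1 kHz sample rate here are our own choices, not necessarily those of the clip above.

import numpy as np

sample_rate = 44100
t = np.arange(sample_rate) / sample_rate          # one second of sample times
sine = np.sin(2 * np.pi * 440 * t)                # a 440 Hz test tone

# An 'in phase' stereo signal: left and right channels are identical,
# so both speaker cones push out and pull in at exactly the same time
left = sine
right = sine.copy()
print(np.max(np.abs(left - right)))               # 0.0 -- no difference between channels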

Now let’s see what happens if we invert the polarity of one channel (in the case of a sine wave this is the same as changing the phase by 180° on one channel);

To hear this sine wave click here.

From the screenshot above it is clear what has happened to the waveform – one channel, in this case the right channel, has been turned ‘upside down’. The result is that the peaks and troughs of this waveform occur ‘out of phase’. Thinking in terms of speakers, when your left speaker cone is fully pushed out, the right cone will be fully pulled in, so our ears are hearing ‘opposites’. Looking at the vertical axis again, it is apparent that when we are hearing the signal at 100 in one ear, it will be at -100 in the other. If you compare the 2 MP3 clips of the sine wave you should hear that the in phase waveform sounds ‘solid’. The out of phase waveform will sound less ‘solid’ due to the conflicting levels arriving at each ear, although it may appear more ‘stereophonic’ than the essentially mono original waveform. This apparent ‘stereo’ effect is one of the artifacts of poor phase relationships – the stereo image does not appear accurately to the listener. With a sine wave this is not especially apparent, but with a full musical mix it is far more disconcerting. Low frequencies are particularly affected by phase problems such as this.
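In code, inverting the polarity of one channel is simply a sign flip on every sample, as this short sketch shows (the tone and the sample index printed are illustrative):

import numpy as np

t = np.arange(44100) / 44100
left = np.sin(2 * np.pi * 440 * t)
right = -left                     # polarity inversion; for a sine wave, a 180° phase shift

# The two ears now always receive equal values of opposite sign
print(left[110], right[110])      # roughly +0.57 and -0.57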

Whilst it might appear that this is a problem caused during the mixing and/or mastering process, think again. A surprising number of people in home studios have their speakers wired incorrectly, which ensures that one speaker is consistently 180° out of phase with the other – always make sure that your speakers are wired correctly to your amplifier. This goes for home stereos and entertainment systems too. Trying to mix on a monitoring setup that has inverted polarity on one channel is an absolute nightmare!

The most serious issue with inverted phase arises when our audio material is played in mono (this could be on someone’s transistor radio, a music-on-hold system, mono MP3s for web streaming, etc.). When a stereo file is converted to mono, the 2 channels are summed together. With an in phase signal, such as our original waveform, the channels reinforce each other due to their identical characteristics. A signal with a 180° phase shift, such as our second example, will actually result in the signal ‘disappearing’. Looking at the values on the vertical axis it is easy to see why – an inverted phase signal will always have equal values of opposite polarity (+/-). If we were to add 50 to -50 we would end up with 0, and therefore silence! To prove the point, here is the out of phase sine wave bounced down to mono within the audio editor;

As you can see there is nothing: zero, nada, zilch. The waveform sits completely flat at 0 on the vertical axis. Of course, if you play this back you will be met with silence, as the speaker will not move and therefore no air will be displaced.
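The arithmetic of that mono bounce is easy to verify in code. Halving the sum is one common convention for a mono fold-down; other systems simply add the channels, with the same cancellation either way.

import numpy as np

t = np.arange(44100) / 44100
left = np.sin(2 * np.pi * 440 * t)
right = -left                         # right channel polarity-inverted

mono = (left + right) / 2             # a mono bounce sums the two channels
print(np.max(np.abs(mono)))           # 0.0 -- every +x meets a -x, leaving silence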

There are still a lot of mono playback systems in use so this should reinforce the need to regularly check your mixes in mono – most software packages allow you to do this in the master section of the mixer. Many analogue desks have either a mono button in the master section or a mono sum output so you can connect a mono speaker.

In this article we have talked about sine waves and inverted phase. Sine waves are a great way to demonstrate the principle, but the sounds you are likely to record are much more complex waveforms, and their phase offsets are less likely to be exactly 180°, since phase relationships are frequency dependent. With more harmonically complicated waveforms, artifacts such as phasing/comb filtering will occur, which often manifest as periodic reinforcements and attenuations of the signal. It is not unknown for some software synth presets to output signals which all but disappear when mixed to mono.
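You can see the comb for yourself in a few lines of NumPy by summing a signal with a delayed copy of itself and inspecting the frequency response. The 128-sample delay here is an arbitrary choice for illustration.

import numpy as np

n = 2048
delay = 128                                    # delay in samples (~2.9 ms at 44.1 kHz)
impulse = np.zeros(n)
impulse[0] = 1.0

combed = impulse.copy()
combed[delay] += 1.0                           # signal plus a delayed copy of itself
response = np.abs(np.fft.rfft(combed))

# Notches fall at odd multiples of sample_rate / (2 * delay); with these
# numbers that works out to every 16th FFT bin, starting from bin 8
print(response[8], response[16])               # ~0.0 (notch) and 2.0 (reinforcement)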

To illustrate the effects of inverted polarity on one channel of a stereo mix, here are some demonstrations;

loop #1 – normal phase

loop #2 – inverted phase on right channel

loop #3 – mono mix of loop#2

The difference between the first 2 loops should be most obvious in terms of the ‘solidity’ of the stereo soundscape and the bass frequencies.

In the mono mix of the loop you can still hear some of the original audio, but the majority has ‘disappeared’. Audio signals which were panned centrally (i.e. equal in both speakers) have been cancelled out, whereas signals that were off centre (including FX returns from stereo reverbs and similar) or truly stereo have been retained.

To reiterate, the above are extreme examples of phase problems, but they should give you an indication of the trouble that poor phase alignment can cause.

There are some common recording situations when you should check the phase of the signals in your DAW;

The easiest one to visualize is when recording a source both acoustically and electronically at the same time. A great example is a bass guitar sent via a DI to the desk while a microphone is placed in front of the bass amplifier. The short delay before the acoustic signal reaches the microphone, relative to the directly-converted DI signal, will inevitably cause phase problems. In this instance it is sensible to move the signal recorded by the mic back in time so that the phase relationship is maintained.
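Here is a sketch of that time-alignment in Python/NumPy, using cross-correlation to estimate the lag of the mic signal relative to the DI. The function name, the max_lag limit and the test signals are our own illustration, not a prescribed workflow.

import numpy as np

def align_to_reference(reference, delayed, max_lag=2000):
    # Find the lag (in samples) at which `delayed` best matches `reference`,
    # then shift it back in time by that amount, zero-padding the tail
    corr = np.correlate(delayed, reference, mode="full")
    centre = len(reference) - 1                      # index of zero lag
    lag = int(np.argmax(corr[centre : centre + max_lag]))
    return np.concatenate([delayed[lag:], np.zeros(lag)]), lag

sr = 44100
t = np.arange(sr // 10) / sr                         # 0.1 s of audio keeps the demo fast
di = np.sin(2 * np.pi * 110 * t)                     # DI'd bass note (illustrative)
mic = np.concatenate([np.zeros(44), di[:-44]])       # mic copy arriving ~1 ms late
aligned, lag = align_to_reference(di, mic)
print(lag)                                           # 44

Most DAWs let you do the same thing by eye: zoom in on the two waveforms and nudge the mic track earlier until the peaks line up.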

A similar issue occurs when using a mic both above and below a snare drum (or any drum for that matter). The mic on the top of the snare will react to the stick hitting the snare skin and the consequent movement of the skin away from the mic. The mic on the bottom of the drum will instead react to the skin moving towards it, due to its contrasting position. Again, opening up your DAW and lining up the waveforms so they reinforce each other may provide a more satisfactory result.

Recording drums with many microphones such as overhead and close mics can cause myriad phase problems between all the mics. Again, zooming in on the waveforms within your DAW and shifting some of the waveforms around may provide a more ‘solid’ soundstage.
