
Basics of compression

Posted by Giacomo on February 24, 2012
Tips and Tricks

Compressors are dynamics processors that reduce the volume of loud sounds or boost quiet sounds, thereby diminishing an audio signal’s dynamic range.

Usually, by compression audio engineers mean downward compression, which reduces loud sounds above a certain threshold while leaving quiet sounds unaffected.

The function of compression is to make performances more consistent in dynamic range, so that they “sit” better in the mix with other instruments and hold the listener’s attention.

The following video will show you how to use a compressor to achieve this consistency, so you can hear the difference yourself!
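If you prefer numbers to pictures, here is a minimal static sketch of downward compression in Python. The `compress` helper and its parameters are illustrative only (not any particular plug-in), and a real compressor also applies attack and release envelopes over time:

```python
import numpy as np

def compress(signal, threshold_db=-20.0, ratio=4.0):
    """Static downward compression: attenuate samples above the
    threshold by the ratio; leave quieter samples untouched."""
    eps = 1e-12                                    # avoid log10(0)
    level_db = 20 * np.log10(np.abs(signal) + eps)
    over = level_db - threshold_db                 # dB above threshold
    gain_db = np.where(over > 0, -over * (1 - 1 / ratio), 0.0)
    return signal * 10 ** (gain_db / 20)

# A 0 dBFS peak is 20 dB over the threshold, so 4:1 pulls it down to
# -15 dBFS (~0.178); a -40 dBFS sample is below the threshold and
# passes through unchanged.
loud = compress(np.array([1.0]))
quiet = compress(np.array([0.01]))
```

With a 4:1 ratio, every 4 dB of input above the threshold becomes 1 dB of output above it, which is exactly the "reduce loud sounds over a certain threshold" behaviour described above.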


New York trick: Adding punch to your drums

Posted by Giacomo on February 16, 2012
Tips and Tricks

Learn how to improve the rhythm section in your mixes by applying smart compression techniques to your tracks. In this videoblog, brought to you by SAE Online, you will learn about the infamous New York Trick.

The New York Trick is a form of upward compression, achieved by mixing an unprocessed ‘dry’ signal with a heavily compressed version of the same signal. It reduces the dynamic range by bringing up the softest sounds, adding sonic detail. It is most often used on the stereo drum bus during recording and mixdown, and on vocals in live concert mixes.

By carefully setting the compressor’s attack and release times you can make the signal “pump” or “breathe” with the song, adding its own character to the sound.
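As a rough numerical sketch of the idea (assuming NumPy; `new_york_trick` is a hypothetical helper, not a real plug-in), blending a heavily squashed copy back under the dry signal lifts quiet material far more than loud material:

```python
import numpy as np

def new_york_trick(dry, wet_gain=0.5, threshold_db=-30.0, ratio=10.0):
    """Parallel (upward) compression: mix the dry signal with a
    heavily compressed copy of itself."""
    eps = 1e-12
    level_db = 20 * np.log10(np.abs(dry) + eps)
    over = np.maximum(level_db - threshold_db, 0.0)   # dB above threshold
    squashed = dry * 10 ** (-over * (1 - 1 / ratio) / 20)
    return dry + wet_gain * squashed

dry = np.array([0.0316, 1.0])   # a quiet (-30 dBFS) and a loud (0 dBFS) hit
out = new_york_trick(dry)
gain = out / dry                # quiet material is lifted ~1.5x,
                                # loud material only ~1.02x
```

Because the compressed copy sits at a near-constant level, adding it underneath raises the floor of the performance without squashing the transients of the dry signal, which is why the trick adds "sonic detail" rather than obvious compression.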

To learn more about recording drums for rock, check out SAE Online’s VIP course by Dario Dendi, right here.


How to Mix Vocals in Pro Tools

Posted by Giacomo on February 13, 2012
Tips and Tricks

Straight from SAE Online’s library, here is a free mini-course on mixing vocals in Pro Tools!

Pro Tools is the industry-standard DAW for recording and mixing, which is why knowing how to use its more advanced capabilities is extremely useful for up-and-coming sound engineers. In this video, SAE Online’s learning advisor Phillip Zumbrunnen will give you some tips on creating better background vocals, building the harmony and power that will take your mixes to the next level!

To find out more about SAE Online’s Mixdown techniques 101 course, just click here.


Drum Trigger VST Approach

Posted by Giacomo on February 10, 2012
Tips and Tricks

Mixing with software on your PC can be a whole heap of fun, and too many hours can be lost to fine-detailed tweaks well into the early morning. One technique that can make the most of your valuable time is fixing the drums in the mix. If you’ve ever had to work with real recorded drums but just don’t like the original source recording, all is not lost!

 


The scenario: we are working in our favourite editor/sequencer, such as Cubase SX, and we have loaded up our individual audio tracks – the kick drum track, the snare drum track, the hi-hats and whatever else was originally recorded.

Often a recording engineer may use 12 channels or more for the drum kit alone in order to capture its unique sonic qualities from various positions. In this instance, however, we are assuming we are perhaps re-mixing the tune or simply do not like, say, the snare and kick sound.

Changing Drum Sounds

I hear you say: why not use some drum EQ, or some drum compression to improve the ‘punch’? Can’t we just process the drum sounds until we hear something we like? Naturally this approach can yield some interesting sounds, and maybe something workable for our re-mix. However, we are talking about obtaining a completely new sound for the kick drum and snare drum while still retaining the original drummer’s feel and performance. We are not concerned with programming a new performance in the drum editor; we like the way the drums were played but simply do not like the kick and snare sounds.

VST Drum Triggers To The Rescue

A VST drum trigger is a useful plug-in for this purpose, and I’m going to work with a FREE VST drum trigger for this article so that we can all benefit and have a go. The plug-in is KTDrumTrigger, programmed by SmartElectronix. A VST drum trigger is essentially a tool which, for a given audio input, outputs a MIDI note. ‘KTDrumTrigger is a VST plugin with custom editor that triggers MIDI notes based on the sound level of the incoming audio stream in different frequency bands. It allows you to ‘detect’ occurrences of percussive sounds in an audio stream and send out a MIDI event whenever that happens.’ Perfect for our needs.
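The core of what such a trigger does – watching the level and emitting an event on each threshold crossing – can be sketched in a few lines of Python. This is a simplified, broadband version (no per-band filtering); `detect_hits` and its parameters are illustrative, not part of KTDrumTrigger:

```python
import numpy as np

def detect_hits(audio, sr, threshold=0.3, hold_ms=50):
    """Return sample positions where the level first rises above the
    threshold, ignoring re-triggers inside the hold time."""
    hold = int(sr * hold_ms / 1000)
    env = np.abs(audio)
    hits, last = [], -hold
    for i in range(1, len(env)):
        if env[i] >= threshold and env[i - 1] < threshold and i - last >= hold:
            hits.append(i)
            last = i
    return hits

# Synthetic kick track: two decaying 60 Hz thumps at 0.1 s and 0.5 s.
sr = 44100
audio = np.zeros(sr)
for onset in (0.1, 0.5):
    n = int(onset * sr)
    k = np.arange(2000)
    audio[n:n + 2000] += np.exp(-k / 400) * np.sin(2 * np.pi * 60 * k / sr)

hits = detect_hits(audio, sr)
# Each detected hit would then be sent on as a MIDI note event.
```

The hold time plays the same role as a trigger's retrigger/release setting: without it, every oscillation of the drum hit above the threshold would fire a separate note.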


Screen shot of the KTDrumTrigger highlighting the importance of the MIDI note number setting

VST Configuration

Back in Cubase SX I inserted KTDrumTrigger into my kick drum channel using the effects option, so now KTDrumTrigger would listen to every hit of the kick drum and output a MIDI note, provided I tweaked the settings and set the threshold appropriately.

I then created a new blank MIDI track, set its input to the VST drum trigger so it was listening to the plug-in, and set its output to play one of my VST samplers – in this case Sample Tank, loaded with their FREE acoustic drum kit samples.


Cubase MIDI channel inputs and outputs

Tweaking the VST Drum Trigger

After some tweaking of the drum trigger you’ll find you can get just the right number of hits converted to MIDI note information. I had to play around for a while to find the correct sound, as every percussive sound in the sampler is mapped to a MIDI note number; I found the kick drum on MIDI note number 40.

Recording The Drum Trigger

Naturally you could stop here and simply play the VST drum trigger live each time, with your kick drum sample of choice playing along with the track. Or go one step further and put the MIDI channel into record, capturing the VST drum trigger’s output. Doing this opens up more creative possibilities, as you can then get into some really detailed editing: shifting a beat to the left or right to correct timing errors, or even duplicating the track a couple of times and assigning further sounds to the MIDI information, layering your mix with rich sounds.

Keeping the Drums Real

I was careful to maintain the feel of a real drummer playing, so with careful blending of the other drum tracks into the mix (lots of overhead) I was able to keep the ‘feel’ and sound of a drummer and yet have a great punchy-sounding kit.

There are other approaches to drum triggering and lots of plug-ins out there, but hopefully this has set your mind thinking.

Author: Hambly


Mono Compatibility and Phase Cancellation

Posted by Giacomo on February 10, 2012
Tips and Tricks

When we listen back to music, whether it is through headphones or loudspeakers, what we are hearing are changes in air pressure caused by the movement of a diaphragm (i.e. the speaker cone).

 

The phase relationship between the two channels of a stereo mix can have many unpredictable consequences for the audio engineer, especially if a mix requires mono compatibility. This AudioCourses article will outline some of the issues that can occur.

A simple way of imagining this phase relationship is to use a sine wave as an example. With our stereo sine wave you can see that the peaks and troughs of the waveform occur at the same time throughout the signal, as illustrated in the screenshot below (left channel is the top waveform, right channel is the bottom waveform).

To hear this sine wave click here.

In speaker terms, the 0 on the vertical axis is where the speaker cone is ‘at rest’. When the waveform goes above 0 the speaker ‘pushes out’, whilst when it drops below 0 the speaker ‘pulls inwards’. It is this movement of the cone that causes the changes in air pressure we interpret as ‘sound’. Using the scale on the vertical axis, you can see that the waveform peaks at +/-100. As the signal is ‘in phase’, we hear it at the same amplitude at the same time in both ears.

Now let’s see what happens if we invert the polarity of one channel (in the case of a sine wave this is the same as changing the phase by 180° on one channel);

To hear this sine wave click here.

From the screenshot above it is clear what has happened to the waveform – one channel, in this case the right channel, has been turned ‘upside down’. The result is that the peaks and troughs of this waveform occur ‘out of phase’. Thinking in terms of speakers, when your left speaker cone is fully pushed out, the right cone will be fully pulled in, so our ears are hearing ‘opposites’. Looking at the vertical axis again, when we hear the signal at 100 in one ear, it will be at -100 in the other.

If you compare the two MP3 clips of the sine wave you should hear that the in-phase waveform sounds ‘solid’. The out-of-phase waveform will sound less ‘solid’ due to the conflicting levels arriving in each ear, although it may appear more ‘stereophonic’ than the essentially mono original. This apparent ‘stereo’ effect is one of the artifacts of poor phase relationships – the stereo image does not reach the listener accurately. With a sine wave this is not so apparent, but with a full musical mix it is far more disconcerting. Low frequencies are particularly affected by phase problems such as this.

Whilst it might appear that this is a problem caused during the mixing and/or mastering process, think again. A surprising number of people in home studios have their speakers wired incorrectly, which ensures that one speaker is consistently 180° out of phase with the other – always make sure your speakers are wired correctly to your amplifier. This goes for home stereos and entertainment systems too. Trying to mix on a monitoring setup with inverted polarity on one channel is an absolute nightmare!

The most serious issue with inverted phase arises when our audio material is played in mono (on someone’s transistor radio, a music-on-hold system, mono MP3s for web streaming, etc.). When a stereo file is converted to mono, the two channels are summed together. With an in-phase signal, such as our original waveform, the channels reinforce each other due to their identical characteristics. A signal with a 180° phase shift, such as our second example, will actually result in the signal ‘disappearing’. Looking at the values on the vertical axis it is easy to see why – an inverted-phase signal will always have equal values of opposite polarity (+/-). If we add 50 to -50 we end up with 0, and therefore silence! To prove the point, here is the out-of-phase sine wave bounced down to mono within the audio editor:

As you can see there is nothing, zero, nada, zilch. The waveform sits completely on the 0 on the vertical axis. Of course if you play this back you will be met with silence as the speaker will not move and therefore no air will be displaced.
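You can reproduce this cancellation numerically. A minimal NumPy sketch: generate a sine, invert the polarity of one channel, and sum the channels to mono:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
left = np.sin(2 * np.pi * 440 * t)   # one second of a 440 Hz sine
right = -left                        # same signal, polarity inverted (180°)

mono = (left + right) / 2            # mono fold-down: sum the two channels
# Every +x meets a -x, so the mono mix is exact digital silence.
print(np.max(np.abs(mono)))          # prints 0.0
```

In floating point, `x + (-x)` is exactly zero for every sample, which is the "nothing, zero, nada, zilch" waveform in the screenshot.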

There are still a lot of mono playback systems in use so this should reinforce the need to regularly check your mixes in mono – most software packages allow you to do this in the master section of the mixer. Many analogue desks have either a mono button in the master section or a mono sum output so you can connect a mono speaker.

In this article we have talked about sine waves and inverted phase. Sine waves are a great way to demonstrate this principle but the sounds you are likely to record are much more complex waveforms, and the phase relationships are less likely to be 180° as phase relationships are frequency dependent. With more harmonically complicated waveforms artifacts such as phasing/comb filtering will occur which often manifest as periodic reinforcements and attenuations of the signal. It is not unknown for some software synth presets to output signals which all but disappear when mixed to mono.

To illustrate the effects of inverted polarity on one channel of a stereo mix, here are some demonstrations;

loop #1 – normal phase

loop #2 – inverted phase on right channel

loop #3 – mono mix of loop#2

The difference between the first 2 loops should be most obvious in terms of the ‘solidity’ of the stereo soundscape and the bass frequencies.

In the mono mix of the loop you can still hear some of the original audio but the majority has ‘disappeared’. Audio signals which were panned centrally (i.e. equal in both speakers) have been cancelled out whereas signals that were off centre (including FX returns from stereo reverbs and similar) or truly stereo have been retained.
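This selective cancellation is also easy to reproduce numerically. In this NumPy sketch (the signals are stand-ins for centre- and hard-panned mix elements), the centre-panned material vanishes from the fold-down while the hard-panned material survives at half level:

```python
import numpy as np

rng = np.random.default_rng(1)
centre = rng.standard_normal(1000)   # panned dead centre: identical in L and R
side = rng.standard_normal(1000)     # panned hard left: present in L only

left = centre + side
right = -centre                      # right channel polarity inverted

mono = (left + right) / 2            # fold-down: centre cancels, side remains
# mono is just side / 2: the centred material has vanished entirely.
```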

To reiterate, the above examples are extreme examples of phase problems but they should give you an indication of some of the problems that poor phase alignment can cause.

There are some common recording situations when you should check the phase of the signals in your DAW;

The easiest one to visualize is when recording a source both acoustically and electronically at the same time. A great example is a bass guitar sent via a DI to the desk while a microphone is placed in front of the bass amplifier. The short delay before the acoustic signal reaches the microphone, compared with the electrically direct DI signal, will inevitably cause phase problems. In this instance it is sensible to move the mic-recorded signal back in time so that the phase relationship is maintained.

A similar issue occurs when using a mic both above and below a snare drum (or any drum for that matter). The mic on the top of the snare will react to the stick hitting the snare skin and the consequent movement of the skin away from the mic. The mic on the bottom of the drum will instead react to the skin moving towards it due to the contrasting positioning. Again, opening up your DAW and lining up the waveforms so they reinforce each other may provide a more satisfactory result.

Recording drums with many microphones such as overhead and close mics can cause myriad phase problems between all the mics. Again, zooming in on the waveforms within your DAW and shifting some of the waveforms around may provide a more ‘solid’ soundstage.
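You can do this alignment by hand in the DAW (zoom in and drag), but the underlying idea – find the lag that best lines up two waveforms and shift one track by that amount – can be sketched with a cross-correlation. The `align` helper below is illustrative, not a real plug-in, and the DI/mic tracks are synthetic:

```python
import numpy as np

def align(reference, delayed):
    """Estimate the lag of `delayed` relative to `reference` by
    cross-correlation, then shift it back into alignment."""
    corr = np.correlate(delayed, reference, mode="full")
    lag = int(np.argmax(corr)) - (len(reference) - 1)
    return np.roll(delayed, -lag), lag

# Fake DI track (decaying noise burst) and a mic track 30 samples late
# (~0.7 ms at 44.1 kHz, roughly a 23 cm mic-to-source distance).
rng = np.random.default_rng(0)
di = rng.standard_normal(2048) * np.exp(-np.arange(2048) / 200)
mic = np.roll(di, 30)

aligned, lag = align(di, mic)        # lag == 30; the tracks now line up
```

Real recordings differ in tone as well as timing, so the correlation peak is an estimate rather than an exact answer – but it usually gets you within a sample or two, after which your ears (and a mono sum check) are the final judge.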
