
Basics of compression

Posted by Giacomo on February 24, 2012

Compressors are a type of dynamics processor that reduces an audio signal’s dynamic range by turning down loud sounds or bringing up quiet ones.

Usually, by compression audio engineers mean downward compression, which reduces loud sounds above a certain threshold while leaving quiet sounds unaffected.

The function of compression is to make performances more consistent in dynamic range so that they “sit” better in a mix with other instruments and hold the listener’s attention.
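To make that concrete, here is a rough Python sketch of the static gain law behind a downward compressor. The threshold, ratio and signal levels are purely illustrative assumptions, and a real compressor would add attack and release smoothing on top of this.

import numpy as np

def downward_compress(signal, threshold_db=-20.0, ratio=4.0):
    """Simple static downward compression for a signal in [-1, 1].

    Samples whose level exceeds the threshold are attenuated according to
    the ratio; quieter samples pass through unchanged. (No attack/release
    smoothing, so this only illustrates the gain law, not a usable plug-in.)
    """
    eps = 1e-12
    level_db = 20.0 * np.log10(np.abs(signal) + eps)    # instantaneous level
    over_db = np.maximum(level_db - threshold_db, 0.0)  # amount above threshold
    gain_db = -over_db * (1.0 - 1.0 / ratio)            # reduce the overshoot
    return signal * (10.0 ** (gain_db / 20.0))

# A loud sample at -6 dBFS with a -20 dB threshold and 4:1 ratio comes out
# at roughly -20 + (14 / 4) = -16.5 dBFS; a quiet sample at -30 dBFS is untouched.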

The following video will show you how to use a compressor to achieve this consistency, so you can hear the difference yourself!


How to Mix Vocals in Pro Tools

Posted by Giacomo on February 13, 2012

Straight from SAE Online’s library, here is a free mini-course on mixing vocals in Pro Tools!

Pro Tools is the industry-standard DAW for recording and mixing, which is why knowing how to use its more advanced capabilities is extremely useful for up-and-coming sound engineers. In this video, SAE Online’s learning advisor Phillip Zumbrunnen gives you some tips on creating better background vocals, adding the harmony and power that will take your mixes to the next level!

To find out more about SAE Online’s Mixdown techniques 101 course, just click here.


Drum Trigger VST Approach

Posted by Giacomo on February 10, 2012

Mixing with software on your PC can be a whole heap of fun, and too many hours can be lost to fine, detailed tweaks well into the early morning. One task that can really eat into your valuable time is fixing the drums in the mix. If you’ve ever had to work with real recorded drums but just don’t like the original source recording, all is not lost!

 


The scenario is that we are working in our favourite editor/sequencer, Cubase SX for example, and have loaded up our individual audio tracks – the kick drum track, the snare drum track, the hi-hats and whatever else was originally recorded.

Often a recording engineer may use 12 channels or more for the drum kit alone in order to capture the unique sonic qualities from various positions. In this instance, however, we are assuming we are re-mixing the tune or simply do not like, say, the snare and kick sounds.

Changing Drum Sounds

I hear you say: well, why not use some drum EQ, or some drum compression to improve the ‘punch’? Can’t we just process the drum sounds until we hear something we like? Naturally this approach can yield some interesting sounds, and maybe something workable for our re-mix, but here we are after a completely new sound for the kick drum and snare drum while still retaining the original drummer’s feel and performance. We are not concerned with programming a new performance in the drum editor; we like the way the drums were played, we simply do not like the kick and snare sounds.

VST Drum Triggers To The Rescue

A VST drum trigger is the ideal plug-in for this purpose, and I’m going to work with a FREE one for this article so that we can all benefit and have a go. The plug-in is KTDrumTrigger, programmed by SmartElectronix. A VST drum trigger is essentially a tool which, for a given audio input, outputs a MIDI note. ‘KTDrumTrigger is a VST plugin with custom editor that triggers MIDI notes based on the sound level of the incoming audio stream in different frequency bands. It allows you to ‘detect’ occurrences of percussive sounds in an audio stream and send out a MIDI event whenever that happens.’ Perfect for our needs.
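KTDrumTrigger itself is a ready-made VST plug-in, but the idea behind it is simple enough to sketch. The rough Python below is an illustration only, with the threshold, frame size and MIDI note number chosen as assumptions rather than taken from the plug-in: it detects level peaks in an audio track and turns each one into a timestamped MIDI note event.

import numpy as np

def trigger_midi_notes(audio, sample_rate, threshold=0.3,
                       note_number=40, hold_ms=80):
    """Very rough drum-trigger sketch: whenever the short-term level of the
    incoming audio crosses the threshold, emit a (time, MIDI note) event.
    A hold time stops one drum hit from firing several triggers."""
    frame = int(sample_rate * 0.005)          # 5 ms analysis frames
    hold_frames = int(hold_ms / 5)
    events, cooldown = [], 0
    for i in range(0, len(audio) - frame, frame):
        level = np.sqrt(np.mean(audio[i:i + frame] ** 2))  # RMS of the frame
        if cooldown > 0:
            cooldown -= 1
        elif level > threshold:
            events.append((i / sample_rate, note_number))   # time in seconds, note
            cooldown = hold_frames
    return events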


Screen shot of the KTDrumTrigger highlighting the importance of the MIDI note number setting

VST Configuration

Back in Cubase SX I inserted KTDrumTrigger into my kick drum channel using the effects option, so now KTDrumTrigger would listen to every hit of the kick drum and output a MIDI note, provided I tweaked the settings and set the threshold appropriately.

I then created a new blank MIDI track, set its input to the VST drum trigger so it was listening to the plug-in, and set its output to play one of my VST samplers – in this case Sample Tank loaded with their FREE acoustic drum kit samples.


Cubase MIDI channel inputs and outputs

Tweaking the VST Drum Trigger

After some tweaking of the drum trigger you’ll find you can get just the right number of hits converted to MIDI note information. I had to play around for a while to find the correct sound, as every percussive sound in the sampler is mapped to a MIDI note number; I found the kick drum on MIDI note number 40.

Recording The Drum Trigger

Naturally you could stop here and simply play the VST drum trigger live each time, with your kick drum sample of choice playing along with the track, or go one step further and actually put the MIDI channel into record and capture the VST drum trigger output. Doing this opens up some more creative possibilities, as you can then get into some really detailed editing, like shifting one of the beats to the left or right or correcting bad timing errors. Maybe even duplicate the track a couple of times and assign further sounds to the MIDI information, thus layering your mix with some rich sounds.

Keeping the Drums Real

I was careful to maintain the feel of a real drummer playing, so with careful blending of the other drum tracks into the mix (lots of overheads) I was able to keep the ‘feel’ and sound of a drummer and yet have a great punchy-sounding kit.

There are other approaches to drum triggering and plenty of other plug-ins, but hopefully this has set your mind thinking a little.

Author: Hambly


Mono Compatibility and Phase Cancellation

Posted by Giacomo on February 10, 2012

When we listen back to music, whether it is through headphones or loudspeakers, what we are hearing are changes in air pressure caused by the movement of a diaphragm (i.e. the speaker cone).

 

The phase relationship between the two channels of a stereo mix can have many unpredictable consequences for the audio engineer, especially if a mix requires mono compatibility. This AudioCourses article will outline some of the issues that can occur.

A simple way of imagining this phase relationship is to use a sine wave as an example. With our stereo sine wave you can see that the peaks and troughs of the waveform occur at the same time throughout the signal, as illustrated in the screenshot below (left channel is the top waveform, right channel is the bottom waveform).

To hear this sine wave click here.

In speaker terms, the 0 on the vertical axis is when the speaker cone is ‘at rest’. When the waveform goes above 0 the speaker ‘pushes out’, whilst when the wave drops below 0 the speaker ‘pulls inwards’. It is this movement of the cone which causes the changes in air pressure which we interpret as ‘sound’. Using the scale on the vertical axis, you can see that the waveform peaks at +/-100. As the signal is ‘in phase’, we will hear the signal at the same amplitude at the same time in both ears.

Now let’s see what happens if we invert the polarity of one channel (in the case of a sine wave this is the same as changing the phase by 180° on one channel):

To hear this sine wave click here.

From the screenshot above it is clear what has happened to the waveform – one channel, in this case the right channel, has been turned ‘upside down’. The result is that the peaks and troughs in this waveform occur ‘out of phase’. Thinking in terms of speakers, what will happen now is that when your left speaker cone is fully pushed out, the right cone will be fully pulled in, so our ears are hearing ‘opposites’. Looking at the vertical axis again, it is apparent that when we are hearing the signal at 100 in one ear, the signal will be at -100 in the other. If you compare the two MP3 clips of the sine wave you should hear that the in-phase waveform sounds ‘solid’. The out-of-phase waveform will sound less ‘solid’ due to the conflicting levels arriving in each ear, although it may appear more ‘stereophonic’ than the essentially mono original waveform. This apparent ‘stereo’ effect is one of the artifacts of poor phase relationships – the stereo image does not appear accurately to the listener. With a sine wave this is not so apparent, but with a full musical mix it is more disconcerting. Low frequencies are particularly affected by phase problems such as this.

Whilst it might appear that this is a problem caused during the mixing and/or mastering process, think again. A surprising number of people in home studios have their speakers wired incorrectly, which will ensure that one speaker is consistently 180° out of phase with the other – always ensure that your speakers are wired correctly to your amplifier. This goes for home stereos and entertainment systems too. Trying to mix with a monitoring setup that has inverted polarity on one channel is an absolute nightmare!

The most serious issue with inverted phase is when our audio material is played in mono (this could be on someone’s transistor radio, a music-on-hold system, mono MP3s for web streaming, etc.). When a stereo file is converted into mono, the two channels are summed together. With an in-phase signal, such as our original waveform, the channels reinforce each other due to their identical characteristics. A signal with a 180° phase shift, such as our second example, will actually result in the signal ‘disappearing’. Looking at the values on the vertical axis it is easy to see why – an inverted-phase signal will always have equal values of opposite polarity (+/-). If we were to add 50 to -50 we would end up with 0, and therefore silence! To prove the point, here is the out-of-phase sine wave bounced down to mono within the audio editor:

As you can see there is nothing, zero, nada, zilch. The waveform sits completely on the 0 on the vertical axis. Of course if you play this back you will be met with silence as the speaker will not move and therefore no air will be displaced.
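If you want to reproduce the demonstration numerically rather than in an audio editor, here is a short Python sketch (using an arbitrary 440 Hz sine) that sums an in-phase and a polarity-inverted stereo pair to mono:

import numpy as np

sample_rate = 44100
t = np.arange(sample_rate) / sample_rate        # one second of time
left = np.sin(2 * np.pi * 440 * t)              # 440 Hz sine, left channel
right_in_phase = left.copy()                    # identical right channel
right_inverted = -left                          # polarity-inverted right channel

mono_good = 0.5 * (left + right_in_phase)       # in phase: channels reinforce
mono_bad = 0.5 * (left + right_inverted)        # out of phase: channels cancel

print(np.max(np.abs(mono_good)))   # ~1.0 -> full level survives the mono sum
print(np.max(np.abs(mono_bad)))    # 0.0  -> the signal has disappeared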

There are still a lot of mono playback systems in use so this should reinforce the need to regularly check your mixes in mono – most software packages allow you to do this in the master section of the mixer. Many analogue desks have either a mono button in the master section or a mono sum output so you can connect a mono speaker.

In this article we have talked about sine waves and inverted phase. Sine waves are a great way to demonstrate this principle but the sounds you are likely to record are much more complex waveforms, and the phase relationships are less likely to be 180° as phase relationships are frequency dependent. With more harmonically complicated waveforms artifacts such as phasing/comb filtering will occur which often manifest as periodic reinforcements and attenuations of the signal. It is not unknown for some software synth presets to output signals which all but disappear when mixed to mono.

To illustrate the effects of inverted polarity on one channel of a stereo mix, here are some demonstrations:

loop #1 – normal phase

loop #2 – inverted phase on right channel

loop #3 – mono mix of loop#2

The difference between the first 2 loops should be most obvious in terms of the ‘solidity’ of the stereo soundscape and the bass frequencies.

In the mono mix of the loop you can still hear some of the original audio but the majority has ‘disappeared’. Audio signals which were panned centrally (i.e. equal in both speakers) have been cancelled out whereas signals that were off centre (including FX returns from stereo reverbs and similar) or truly stereo have been retained.

To reiterate, the above examples are extreme examples of phase problems but they should give you an indication of some of the problems that poor phase alignment can cause.

There are some common recording situations when you should check the phase of the signals in your DAW;

The easiest one to visualize is when recording a source both acoustically and electronically at the same time. A great example is a bass guitar sent via a DI to the desk while a microphone is placed in front of the bass amplifier. The short delay before the acoustic signal reaches the microphone (compared with the effectively instantaneous DI signal) will inevitably cause phase problems. In this instance it is sensible to move the mic recording back in time so that the phase relationship is maintained.
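One way to find that offset, instead of nudging the region by eye, is to cross-correlate the two recordings and shift the mic take by the lag that lines it up with the DI. The Python sketch below is a simplified illustration, assuming both takes are already loaded as arrays at the same sample rate and that the mic signal is the later of the two.

import numpy as np

def align_mic_to_di(di_track, mic_track, sample_rate, max_delay_ms=20):
    """Estimate how many samples the mic take lags the DI take, then
    shift the mic take earlier so the two signals reinforce each other."""
    max_lag = int(sample_rate * max_delay_ms / 1000)
    best_lag, best_score = 0, -np.inf
    for lag in range(0, max_lag):                       # mic assumed to be late
        n = min(len(di_track), len(mic_track) - lag)
        score = np.dot(di_track[:n], mic_track[lag:lag + n])
        if score > best_score:
            best_lag, best_score = lag, score
    aligned = np.concatenate([mic_track[best_lag:], np.zeros(best_lag)])
    return aligned, best_lag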

A similar issue occurs when using a mic both above and below a snare drum (or any drum for that matter). The mic on the top of the snare will react to the stick hitting the snare skin and the consequent movement of the skin away from the mic. The mic on the bottom of the drum will instead react to the skin moving towards it due to the contrasting positioning. Again, opening up your DAW and lining up the waveforms so they reinforce each other may provide a more satisfactory result.

Recording drums with many microphones such as overhead and close mics can cause myriad phase problems between all the mics. Again, zooming in on the waveforms within your DAW and shifting some of the waveforms around may provide a more ‘solid’ soundstage.


Live PA – Basic Sound Engineering Overview

Posted by Giacomo on February 10, 2012

Live public address (Live P.A.) systems come in many different shapes and sizes and can easily leave the newcomer confused about even the basics. This article gives a basic overview, independent of any specific equipment configuration, in an attempt to demystify some of the typical errors a newcomer can make with Live P.A. systems.

 

The function of a Live P.A.

Two types of Live P.A.

Reinforcement

  • This is for speech or music which would sound good in a small room without artificial assistance.
  • In the case of a classical guitar, which is a very quiet instrument, this natural sound is only good for an audience of around 200 – 300 depending on room size.
  • For an audience of 500 – 600 it is possible to reinforce the sound so that everyone can hear clearly and to most people it will still sound natural.

Amplification

This is where the original sound is insignificant in comparison with the amount of sound coming from the Live P.A.

The aims of public address:

1.) To provide adequate volume (not necessarily loud).

2.) To provide adequate clarity.

A Thought on Acoustics

A discussion on sound reinforcement is impossible without a mention of acoustics.

The free field

If the venue is an outdoor event then the engineer need not concern himself/herself a great deal with acoustics as this is the ideal situation.

Sound in the open air travels away from the source and keeps going until its energy is used up (inverse square law). There are no walls for the sound to bounce off and return to interfere with the next wavefront.
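As a quick worked example of the inverse square law: the level falls by roughly 6 dB every time the distance from the source doubles. In Python (the 100 dB starting level at 1 metre is just an assumed figure):

import numpy as np

def spl_at_distance(spl_at_1m, distance_m):
    """Free-field (inverse square law) level drop: about 6 dB per doubling of distance."""
    return spl_at_1m - 20 * np.log10(distance_m)

print(spl_at_distance(100, 2))    # ~94 dB (one doubling  -> -6 dB)
print(spl_at_distance(100, 16))   # ~76 dB (four doublings -> -24 dB)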

Indoor Live P.A.

Sound behaves in much the same way as any other wave: it bounces off walls (reflects), bends around them (diffracts), and does not pass easily through solid materials. Speaker placement therefore becomes important, as does speaker coverage.

Consider Figure 1.0.

The sound waves emitted from the cabs have a coverage of 120 degrees, therefore it can be seen that there will be obvious ‘blind spots’ in the coverage.

Figure 1.0 Shows a small venue and a typical coverage angle of a driver.

It should be pointed out that the directionality of sound waves is somewhat frequency dependent and that the above diagram shows a potential problem for high frequencies.

High frequencies have a shorter wavelength than low frequencies, hence they are very directional (λ = v/f, where λ = wavelength, v = velocity and f = frequency).

Objects placed in the path of high frequencies block them, whereas low frequencies tend to bend (diffract) around them. Therefore a listener positioned behind the wall in Figure 1.0 will hear an attenuation in HF, and hence a dull sound.
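Plugging a few numbers into λ = v/f makes the point: low frequencies have wavelengths of several metres and diffract around obstacles, while high frequencies are only a few centimetres long and are easily blocked.

speed_of_sound = 340.0   # metres per second (approximate, in air)

for frequency in (50, 250, 1000, 10000):          # Hz
    wavelength = speed_of_sound / frequency        # λ = v / f
    print(f"{frequency:>6} Hz -> {wavelength:.2f} m")
# 50 Hz is ~6.8 m (easily diffracts around obstacles);
# 10 kHz is ~3.4 cm (easily blocked, hence very directional).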

Golden rule number 1.

Always ensure nothing is in the line of sight of a HF driver otherwise you may encounter loss of HF.

Reflection and phase cancellation (comb filtering)

Consider Figure 1.1.


Figure 1.1 shows the path of a sound wave. The venue is assumed to have reflective surfaces, and after a period of time the wave can be seen to have travelled around the room, bouncing off the surfaces and crossing other sound waves.

This situation can create what is known as ‘comb filtering’. The sound takes time to travel around the room (roughly 340 metres per second), and if the reflected, time-delayed wave coincides with another wave of opposite (or partially opposite) polarity, cancellation will occur.

Conversely, if the merging waves have the same polarity, addition will take place.

The name comb filtering is adopted because, looking at the frequency response of the result, the shape of the teeth on a comb can be seen, showing alternating areas of addition and subtraction.
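You can see the comb shape appear with a few lines of Python: summing a direct signal with a single delayed copy of itself puts notches at regular frequency intervals. The 5 ms delay and 0.8 reflection level below are arbitrary assumptions, chosen just to make the notches visible.

import numpy as np

sample_rate = 48000
delay_ms = 5.0                                    # assumed reflection delay
delay = int(sample_rate * delay_ms / 1000)        # delay in samples

impulse = np.zeros(sample_rate)
impulse[0] = 1.0                                  # direct sound
impulse[delay] += 0.8                             # reflected copy, slightly quieter

spectrum = np.abs(np.fft.rfft(impulse))
freqs = np.fft.rfftfreq(len(impulse), 1 / sample_rate)
# Deep notches appear wherever the reflection arrives out of phase:
# at odd multiples of 1 / (2 * delay time) = 100 Hz, 300 Hz, 500 Hz, ...
print(freqs[np.argmin(spectrum[:2000])])          # first notch, near 100 Hz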


Figure 1.2 The peaks and troughs of comb filtering can be seen in this frequency response plot.

Golden rule number two.

Ensure the minimum amount of reflection by pointing the speakers in a suitable direction.

The main thing is to keep comb filtering to a minimum; this is sometimes easier said than done, as most venues have reflective surfaces. There will always be pockets of ‘bad sound’ and pockets of ‘good sound’.

If you wander around a venue and listen to the mix you will find these spots; it is your job as an engineer to keep the ‘bad spots’ to a minimum through speaker positioning, coverage and equalisation.

In professional venues architectural acousticians get paid lots of money to design environments which produce ‘good spots’ throughout the venue by such methods as absorption paneling.

Standing waves

Standing waves are a result of sound being reflected back and forth between two parallel surfaces.

As the first wave reflects it meets a newly arriving wave, and the result can be a stationary wave which resonates at a frequency dependent on the transmitted waves and the distance between the two parallel surfaces.

The wavelength of the transmitted waves in relation to the distance between the parallel surfaces is therefore an important consideration.

If this distance equals the wavelength, or a simple ratio of it, a standing wave can be made to oscillate.

Example

The wavelength of a 20 Hz wave is 17 metres; if this wave were transmitted between two parallel surfaces 17 metres apart, an oscillation could occur.
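To extend the example: the axial standing-wave (room mode) frequencies between two parallel surfaces a distance L apart fall at f = n * v / (2L). A quick Python check for the 17-metre spacing above:

speed_of_sound = 340.0    # m/s

def axial_modes(distance_m, count=5):
    """Axial standing-wave frequencies between two parallel surfaces:
    f_n = n * v / (2 * L) for n = 1, 2, 3, ..."""
    return [n * speed_of_sound / (2 * distance_m) for n in range(1, count + 1)]

print(axial_modes(17.0))   # [10.0, 20.0, 30.0, 40.0, 50.0] Hz
# The 20 Hz case from the text is the second of these modes.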

Standing waves can be a problem in venues whose dimensions coincide with particular wavelengths.

At LF, standing waves can ‘creep up’ on the engineer as they gather energy and appear to ‘feed back’, which could of course occur if the standing wave were picked up by the microphones on stage and amplified.

Careful use of room equalisation and speaker positioning can combat standing waves to a degree.

What’s wrong with a lot of Live P.A.’s

  • Low-efficiency speaker systems – cure: ensure you have efficient speakers.
  • Not enough amplifier power – cure: ensure you have plenty of amp power.
  • Poor frequency response – cure: ensure all components in the chain have a ‘flat’ response.
  • Missing half your audience – cure: ensure you have enough speakers, angled to cover everyone.
  • Room reverberation swamps the sound – cure: choose speakers with suitable directional and dispersion qualities, so as to avoid reflective surfaces.

Basic systems for two different sized rooms

Example 1

A small room with dimensions of around 30 by 30 by 10 feet.


Figure 1.3

The system in block diagram form.


Figure 1.4

This would be a suitable setup giving adequate coverage.

The power amp would be rated at around 150/200 watts per channel and the speakers would be full range.

Example 2

A medium-sized room with dimensions of around 50 by 40 by 15 feet.


Figure 1.5


Figure 1.6

This system is known as a two-way system, and for this room a total power of around 1 kW would be adequate. The audio spectrum is split in two at around 250 Hz, so two power amplifiers are necessary. Percentage-wise, the low end would take around 65% of the power, leaving the remaining 35% for the mid and top end.
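As a quick sanity check on those percentages, with the roughly 1 kW total mentioned above:

total_power_w = 1000                      # total amplifier power (the ~1 kW figure above)
low_share, high_share = 0.65, 0.35

low_amp_w = total_power_w * low_share     # ~650 W below the 250 Hz crossover
high_amp_w = total_power_w * high_share   # ~350 W for the mid and top end
print(low_amp_w, high_amp_w)              # 650.0 350.0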

Large Live P.A.

Large Live P.A. systems can be anything from two-way to five-way and can contain massive numbers of drive units for each separate bandwidth.

A large Live P.A. system would have two engineers: one for the front-of-house mix and one for the monitor mix.

Often delayed loudspeakers are needed.

At a large open-air concert, say, a person standing 340 metres from the stage will not hear the emitted sound wave until one second has elapsed; at 680 metres, two seconds will have elapsed before the sound can be heard.

This is due to the speed at which sound travels through air, i.e. around 340 m/s.

By the time the sound has travelled this distance it has suffered great losses. Therefore, further speakers will be needed for the audience to the rear of a concert.

The sound from these drivers needs to be delayed so that it stays in phase with the sound arriving from the main drivers.
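Working out the delay is just distance divided by the speed of sound. A minimal Python sketch (the 85-metre tower distance is an arbitrary example):

speed_of_sound = 340.0   # m/s

def delay_for_tower(distance_from_stage_m):
    """Delay (in milliseconds) to apply to a loudspeaker so its output
    lines up with sound arriving acoustically from the main stage system."""
    return distance_from_stage_m / speed_of_sound * 1000

print(delay_for_tower(85))    # 250.0 ms for a tower 85 m from the stage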


Figure 1.7 The basic configuration of delayed loudspeakers.

The delayed signal could come from groups, from the main mix, or from auxes, etc.

Further points to note:

A rock Live P.A. should be as intelligible as a West End musical.

It should cover the audience evenly, although this is not always possible.

The system should be visibly in tune with the type of work and surroundings.

 

References:

Electro-Voice, The PA Bible.
