Live PA – Basic Sound Engineering Overview

Posted by Giacomo on February 10, 2012
Tips and Tricks

Public Address (P.A.) systems come in many different shapes and sizes, and can easily leave the newcomer unsure of even the basics. This article gives a basic overview, independent of any specific equipment configuration, in an attempt to demystify live P.A. systems and some of the typical errors a newcomer can make.


The function of a Live P.A.

Two types of Live P.A.

Sound reinforcement

  • This is for speech or music which would sound good in a small room without artificial assistance.
  • In the case of a classical guitar, which is a very quiet instrument, this natural sound is only good for an audience of around 200 – 300, depending on room size.
  • For an audience of 500 – 600 it is possible to reinforce the sound so that everyone can hear clearly, and to most people it will still sound natural.

Public address

This is where the original sound is insignificant in comparison with the amount of sound coming from the Live P.A.

The aims of public address:

1.) To provide adequate volume (not necessarily loud).

2.) To provide adequate clarity.

A Thought on Acoustics

A discussion on sound reinforcement is impossible without a mention of acoustics.

The free field

If the venue is an outdoor event then the engineer need not concern himself/herself a great deal with acoustics, as this is the ideal situation.

Sound in the open air travels away from the source, its energy spreading out and diminishing with distance (inverse square law). There are no walls for the sound to bounce off and return to interfere with the next wavefront.

Indoor Live P.A.

Sound behaves in much the same way as any other wave: it bounces off walls (reflection), bends around them (diffraction), and is attenuated when passing through materials. Speaker placement therefore becomes important, as does speaker coverage.

Consider Figure 1.0.

The sound waves emitted from the cabs have a coverage of 120 degrees, so it can be seen that there will be obvious ‘blind spots’ in the coverage.

Figure 1.0 shows a small venue and a typical coverage angle of a driver.

It should be pointed out that the directionality of sound waves is somewhat frequency dependent and that the above diagram shows a potential problem for high frequencies.

High frequencies have a smaller wavelength than low frequencies, hence they are very directional ( λ = v/ƒ, where λ = wavelength, v = velocity and ƒ = frequency ).

Objects placed in the path of HF block them, whereas LF tend to bend (diffract) around them. Therefore a subject positioned behind the wall in Figure 1.0 will hear an attenuation in HF, hence a dull sound.
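To put some numbers on this, the λ = v/ƒ relationship above is easy to check. A minimal sketch (the 340 m/s figure matches the speed of sound used elsewhere in this article; the chosen frequencies are arbitrary examples):

```python
SPEED_OF_SOUND = 340.0  # metres per second, as used in this article

def wavelength(frequency_hz: float) -> float:
    """Wavelength in metres: lambda = v / f."""
    return SPEED_OF_SOUND / frequency_hz

# Low frequencies have long wavelengths (they diffract around obstacles),
# high frequencies have short ones (they are easily blocked).
for f in (50, 500, 5000):
    print(f"{f} Hz -> {wavelength(f):.3f} m")
```

At 5 kHz the wavelength is only a few centimetres, which is why even a person's head is enough to shadow the HF.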

Golden rule number 1.

Always ensure nothing is in the line of sight of a HF driver, otherwise you may encounter loss of HF.

Reflection and phase cancellation (comb filtering )

Consider Figure 1.1.


Figure 1.1 shows the path of a sound wave. The venue is assumed to have reflective surfaces, and after a period of time the wave can be seen to have travelled around the room, bouncing off the surfaces and crossing over other sound waves.

This situation can create what is known as ‘comb filtering’. Sound takes time to travel around the room (roughly 340 metres per second), so a reflected wave arrives time-delayed; if it coincides with another wave with which it is partly or fully out of phase, cancellation will occur.

Conversely, if the merging waves are in phase, addition will take place.

The name comb filtering is adopted because, looking at the frequency response of the result, the shape of the teeth on a comb can be seen, showing alternating areas of addition and subtraction.


Figure 1.2 The peaks and troughs of comb filtering can be seen in this frequency response plot.
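The maths behind those peaks and troughs is simple enough to sketch: summing a sound with an equal-level copy of itself delayed by τ seconds gives a magnitude response of |2 cos(π·f·τ)|, with nulls at odd multiples of 1/(2τ). A quick illustration (the 1 ms delay is an arbitrary example, not a value from the article):

```python
import math

def comb_magnitude(f_hz: float, tau_s: float) -> float:
    """Magnitude of a signal summed with an equal-amplitude copy of
    itself delayed by tau seconds: |1 + e^(-j*2*pi*f*tau)| = |2*cos(pi*f*tau)|."""
    return abs(2.0 * math.cos(math.pi * f_hz * tau_s))

tau = 0.001  # 1 ms delay: a reflection path about 0.34 m longer than the direct path
for f in (250, 500, 1000, 1500):
    print(f"{f:5d} Hz: gain x{comb_magnitude(f, tau):.2f}")
```

With a 1 ms delay the nulls fall at 500 Hz, 1500 Hz, 2500 Hz and so on, and the peaks midway between them, giving the comb shape of Figure 1.2.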

Golden rule number 2.

Ensure the minimum amount of reflection by pointing the speakers in a suitable direction.

The main thing is to keep comb filtering to a minimum. This is sometimes easier said than done, as most venues have reflective surfaces; there will always be pockets of ‘bad sound’ and pockets of ‘good sound’.

If you wander around a venue and listen to the mix you will find these spots. It is your job as an engineer to keep the ‘bad spots’ to a minimum through speaker positioning, coverage and equalisation.

In professional venues, architectural acousticians get paid lots of money to design environments which produce ‘good spots’ throughout the venue, by such methods as absorption panelling.

Standing waves

Standing waves are a result of sound being reflected back and forth between two parallel surfaces.

As the first wave reflects it meets a newly arriving wave, and the result can be that a stationary wave is produced which resonates at a frequency dependent on the transmitted waves and the distance between the parallel surfaces.

The wavelength of the transmitted waves in relation to the distance between the parallel surfaces is therefore an important consideration.

If this distance equals the wavelength, or a simple ratio of it, then a standing wave can be made to oscillate.


The wavelength of a 20 Hz wave is around 17 metres; if this wave was transmitted between two parallel surfaces 17 metres apart, an oscillation could occur.

Standing waves can be a problem in venues where the dimensions of the venue coincide with particular wavelengths.

At LF, standing waves can ‘creep up’ on the engineer as they gather energy and appear to ‘feedback’, which could of course occur if the standing wave was picked up by the microphones on stage and amplified.

Careful use of room equalisation and speaker positioning can combat standing waves to a degree.
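For a rough idea of where trouble might appear, the standing-wave (axial mode) frequencies between one pair of parallel walls can be estimated as f = n·v/(2L), where the n-th mode fits n half-wavelengths between the walls. A hypothetical sketch using the 17-metre example above (not a substitute for proper room measurement):

```python
SPEED_OF_SOUND = 340.0  # metres per second

def axial_modes(wall_spacing_m: float, count: int = 4) -> list:
    """First few axial-mode frequencies in Hz between two parallel
    surfaces a given distance apart: f_n = n * v / (2 * L)."""
    return [n * SPEED_OF_SOUND / (2.0 * wall_spacing_m)
            for n in range(1, count + 1)]

print(axial_modes(17.0))  # walls 17 m apart: [10.0, 20.0, 30.0, 40.0] Hz
```

Note that the 20 Hz case in the text (wavelength equal to the wall spacing) is the second of these modes; the deepest mode occurs when half a wavelength fits between the walls.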

What’s wrong with a lot of Live P.A. systems

  • Low-efficiency speaker systems. Cure: ensure you have efficient speakers.
  • Not enough amplifier power. Cure: ensure you have plenty of amp power.
  • Poor frequency response. Cure: ensure all components in the chain have a ‘flat’ response.
  • Missing half your audience. Cure: ensure you have enough speakers, angled to cover everyone.
  • Room reverberation swamps the sound. Cure: choose speakers with suitable directional and dispersion qualities, thus avoiding reflective surfaces.

Basic systems for two different sized rooms

Example 1

A small sized room having the dimensions of around 30 by 30 by 10 feet.


Figure 1.3

The system in block diagram form.


Figure 1.4

This would be a suitable setup giving adequate coverage.

The power amp would be rated at around 150–200 watts per channel and the speakers would be full range.

Example 2

A medium sized room having the dimensions of around 50 by 40 by 15 feet.


Figure 1.5


Figure 1.6

This system is known as a two-way system, and for this room a total power of around 1 kW would be adequate. The audio spectrum is split in two at around 250 Hz, so two power amplifiers are necessary. The low end would take around 65% of the power, leaving a further 35% for the mid and top end.
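The split works out as follows; a trivial sketch of the arithmetic for the 1 kW total suggested above:

```python
total_power_w = 1000.0  # the ~1 kW total suggested above
low_fraction, top_fraction = 0.65, 0.35  # typical two-way split from the text

low_w = total_power_w * low_fraction
top_w = total_power_w * top_fraction
print(f"Low end (below ~250 Hz): {low_w:.0f} W")
print(f"Mid/top (above ~250 Hz): {top_w:.0f} W")
```

The low end gets the lion's share because reproducing bass at a given loudness demands far more amplifier power than the mid and top.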

Large Live P.A.

Large Live P.A. systems can often be anything from two-way to five-way, and can contain massive numbers of drive units for each separate bandwidth.

A large Live P.A. system would have two engineers: one for the front of house mix and one for the monitor mix.

Often delayed loudspeakers are needed.

At a large open-air concert, say, a person standing 340 metres from the stage will not hear the emitted sound wave until one second has elapsed; at 680 metres, two seconds will have elapsed before the sound can be heard.

This is due to the speed at which sound travels through the air, i.e. 340 m/s.

By the time the sound has travelled this distance it has suffered great losses. Therefore, further speakers will be needed for the audience to the rear of a concert.

The sound from these drivers needs to be delayed in order for it to arrive in phase with the sound emitted from the main drivers.


Figure 1.7 The basic configuration of delayed loudspeakers.

The delayed signal could come from groups, from the main mix or aux’s etc.
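The delay time itself follows directly from the speed of sound: a tower placed d metres in front of the main stacks must be delayed by d/340 seconds so that its output lines up with the arriving wavefront. A minimal sketch (the distances are arbitrary examples, and in practice a small extra delay is often added so the mains are heard to arrive first):

```python
SPEED_OF_SOUND = 340.0  # metres per second

def tower_delay_ms(distance_from_mains_m: float) -> float:
    """Delay in milliseconds for a loudspeaker placed this far in
    front of the main stacks, aligning it with the acoustic arrival."""
    return 1000.0 * distance_from_mains_m / SPEED_OF_SOUND

for d in (85, 170, 340):
    print(f"{d:3d} m: {tower_delay_ms(d):.0f} ms")
```

So a delay tower 170 metres downfield needs half a second of delay before the signal reaches its amplifiers.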

Further points to note:

A rock Live P.A. should be as intelligible as a West End musical.

It should cover the audience evenly, although this is not always possible.

The system should be visibly in tune with the type of work and surroundings.



Reference: Electro-Voice, ‘The PA Bible’.


‘Time Stretch’ in Cubase SX

Posted by Giacomo on February 10, 2012
Tips and Tricks

The ability to change the tempo of an audio recording, without affecting the pitch, was one of the great advantages of the DAW revolution. Many of us, from bedrooms musicians through to Grammy winning producers, rely on having the ability to ‘stretch’ the tempo of an audio recording. This tutorial will focus on some of the ways in which we can manipulate the tempo of audio in Steinberg’s Cubase SX.


The ‘Old Fashioned’ Way

The Time Stretch dialog box in Cubase SX3

People who came to SX via VST will remember, with frustration in some cases, this manner of time-stretching an audio file.

You can access this method by left-clicking on the audio event/clip you wish to alter and then going to ‘Audio->Process->Time Stretch’.

Unlike the other methods of changing tempo in Cubase SX, this method is more ‘mathematical’ and requires you to give information about the source file (such as tempo or number of bars/beats/16th notes etc.). This method is not the quickest or easiest way of changing the tempo of an audio file, but it does have advantages over the more intuitive methods. You can achieve a greater degree of accuracy for starters, and you can also quickly time stretch to a tempo independent of your current project tempo. This is also a good method for batch processing a number of audio files from one tempo to another.

This method also gives you a number of algorithm options such as MPEX 2, Standard, Drum and Realtime. The MPEX 2 algorithm in particular can produce very good results on both monophonic and polyphonic sources.
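The arithmetic behind the dialog is straightforward: stretching audio from one tempo to another scales its duration by the ratio of the tempos. A hedged sketch of that calculation (function name is mine, not Steinberg's):

```python
def stretched_length_s(length_s: float, source_bpm: float, target_bpm: float) -> float:
    """New duration of an audio file after stretching it from
    source_bpm to target_bpm (pitch unchanged)."""
    return length_s * source_bpm / target_bpm

# A 4-second loop recorded at 120 BPM, stretched to fit a 100 BPM project:
print(stretched_length_s(4.0, 120.0, 100.0))  # 4.8 seconds
```

Slowing the tempo makes the file longer, which is why large stretches expose artefacts: the algorithm has to synthesise the extra material.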


The Quick Way

Probably the quickest and easiest way to change the tempo of an audio file in Cubase is to use the ‘Sizing Applies Time Stretch’ option of the Object Selection tool (i.e. the arrow). You can access this tool in two ways: either click on the arrow tool twice – once if it’s already selected – as shown in the image on the right, or press the number 1 key on your keyboard (not on the Num Pad) to scroll through until you see a little clock appear next to the arrow, again seen on the right.

Once you have selected this tool you can now change the length, and therefore the tempo, of an audio file by dragging the ends of it, much like you would change the length of a MIDI part by dragging the ends. If you have ‘Snap’ enabled then the length will automatically snap to whatever Grid Type and value you have defined – if you are working with ready made drum loops then setting the grid to ‘Bar’ will probably make most sense as then the event will snap to exact bar lengths as seen below.

Changing the tempo of an audio event.

The quality of this time stretch is dependent on the algorithm that you choose – to choose the default mode go to File->Preferences->Editing->Audio and choose the required algorithm under the ‘Time Stretch Tool’ heading.

One thing which catches many people out is forgetting to reset the Object Selection tool back to ‘Normal Sizing’ after finishing stretching an audio file.

The ‘Musical Mode’ Way (SX 3 only)

‘Musical Mode’ is a feature new to SX 3 and gives Cubase some of the real-time time-stretching that users of ACID and Ableton Live are used to, without resorting to changing the file format a la .REX files.

To use Musical Mode you must first use the Audio Tempo Definition Tool to specify the length of the clip in bars and beats, or specify the tempo of the source file. To do this you must open the Sample Editor by double-clicking the event that you wish to time stretch. Once in the Sample Editor you need to turn on the Audio Tempo Definition Tool by clicking on the relevant button on the toolbar, as seen on the right.

Once this tool is selected, you should be able to enter information into the ‘Musical Controls’ area of the toolbar, such as Signature, Audio Tempo, Bars and Beats. If these options aren’t available then check they are selected by right-clicking on the toolbar and ensuring there is a tick next to ‘Musical Controls’. In a similar way to the Time Stretch dialog box, you should enter the tempo, or the length of the loop in bars and beats. If you are working with a loop from a commercial library then often the tempo of the loop will be included in the file name. If the tempo of the loop is not known then you can simply count the number of bars and beats in the loop, enter them into the relevant boxes, and the tempo will automatically be calculated for you.
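The tempo calculation Cubase performs here is simple to sketch: total beats divided by duration, scaled to a minute (the function name and example values are mine, purely illustrative):

```python
def loop_tempo_bpm(bars: float, beats_per_bar: int, duration_s: float) -> float:
    """Tempo implied by a loop whose length in bars/beats and
    duration in seconds are known: BPM = 60 * total_beats / duration."""
    total_beats = bars * beats_per_bar
    return 60.0 * total_beats / duration_s

# A 2-bar loop in 4/4 lasting exactly 4 seconds:
print(loop_tempo_bpm(2, 4, 4.0))  # 120.0 BPM
```

This is why counting the bars in an unlabelled loop is enough: given the audio file's length, the tempo follows automatically.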

Once you have correctly inserted this information you can enable Musical Mode by clicking on the ‘Musical Mode’ button which is shown below.

If you are working with a low screen resolution then sometimes the ‘Musical Mode’ button is not visible on the screen. If this is the case then it is easiest to right-click on the toolbar and temporarily remove some of the toolbar options you are not using. Alternatively you can drag the corners of the Sample Editor to make it larger than your screen and then move the Sample Editor so you can see the options – this is quite a messy method however.

Once you have turned on musical mode you can choose an appropriate ‘Warp Setting’ from the drop down box – these are self-explanatory so just choose the one that is nearest to the audio file you are stretching.

Upon closing the Sample Editor you should see that your audio event has now changed in length to fit the tempo of your song. Even more impressive is that if you now change the tempo of your song your loop(s) will automatically change in tempo to match the project tempo – this even works when using the tempo map to define ‘on the fly’ tempo changes!

The eagle-eyed amongst you may have noticed that I’ve not included the Hitpoint/Slice method of loop manipulation in this tutorial. It is fair to say that for most users the methods mentioned above should be suitable for most applications – we will cover the Slicing method in a future tutorial however.

Anyhow, stop reading this and get stretching!


Choosing Headphones

Posted by Giacomo on February 10, 2012
Tips and Tricks

A pair of headphones are useful in many circumstances for a sound engineer, whether in the studio or on location. Headphones are available with many different specifications and to the novice it may seem difficult to make the right choice. This AudioCourses article will outline the basic specifications of headphones available today.


Before we go any further it must be noted that we will be talking about circumaural headphones with dynamic drivers/transducers. Circumaural headphones enclose the ear and are the choice in studios all over the world. Alternative forms such as In-Ear and Supra-Aural, are generally not suitable for serious audio work although they are fine connected to an iPod or Walkman for casual listening.

It should also be said that mixing on headphones is undesirable, and headphones should only be used as a reference rather than a replacement for studio monitors.

Closed vs Open

The choice between closed and open is probably the most important decision. Both have their pros and cons so the decision should focus on the environment in which they will be used.

Closed back headphones, such as the Beyer DT 100, excel at both preventing spill from the headphones into the microphone when recording, and also preventing outside noise from reaching the wearer. This means they are the logical choice for musicians when tracking, as the lack of spill ensures a cleaner signal and also makes feedback less likely to occur when near the microphone. In a live sound environment closed headphones are invaluable. As very little sound is let in from the outside world, the engineer can listen to a different source than the FOH mix – for instance solo channels, to try to identify any rogue signals. For location recording, DJs and broadcast usage, closed headphones are preferable for the aforementioned reasons.

This type of headphone will often appear to give a good representation of low frequencies but this is tempered by the coloration to the sound that placing a diaphragm in a closed environment incurs. The fit of the headphone onto the ear is also important as to fully benefit from the advantages of closed back headphones a good seal must be made between the cushioned earpiece and the ear. If this seal is not satisfactory acoustical isolation in both directions is lost.

Open back headphones, such as the Beyer DT 990 Pro, do not offer the same level of isolation as their closed back counterparts. Open headphones usually look similar to closed back headphones but with a ‘grille’ on the rear of the enclosure (as can be seen in the picture of the DT 990 Pro). The presence of a grille allows freer movement of sound both in and out of the earpiece. This makes open designs unsuitable in recording situations as spill into the microphone will be excessive and the likelihood of feedback (‘howlaround’) is increased. These characteristics also make open designs unsuitable for most broadcast and location recording.

The area in which the open design excels however is the sound quality. Although the low end may not appear as well defined as closed designs, the reproduction of other frequencies is likely to be more accurate as there is less coloration from the design of the earpiece. These characteristics make open back headphones suitable for critical listening in quiet environments, such as the control room. As mentioned earlier, mixing should not be attempted with headphones, but for analytical listening headphones are very useful during mixdown for finding rogue noise or checking reverb tails.

Frequency Response

This figure details the range of frequencies that the headphones can reproduce. The average human's hearing range is 20Hz-20,000Hz. You will often see headphones with frequency responses that stretch beyond either of these limits. What is critical in interpreting these figures is the variation in level over the frequency response. If the headphones can reproduce 10Hz-40,000Hz but dip or peak in level by 10dB above 10,000Hz then they are not particularly suitable. For serious audio work peaks and troughs should be less than 3dB – this is often represented by a figure such as +/-3dB or ±3dB at the end of the frequency response, e.g. 10Hz-30,000Hz +/-1dB.


Impedance

The impedance of headphones is measured in ohms (Ω) and is, crudely speaking, a measure of how hard the amplifier must work to drive the headphones. Most dedicated headphone amplifiers such as the Samson S-amp will be capable of driving most headphones satisfactorily. If you are connecting headphones to equipment such as a mixing console or DJ mixer then you should read the technical specifications to ensure you buy headphones with the correct impedance. Some headphone manufacturers offer a number of different impedances for a given model. The DT 100 mentioned earlier is an example of this.
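As a rough illustration of why impedance matters (this sketch is mine, not from the article): for a given output voltage, the power delivered falls as impedance rises (P = V²/Z), which is why high-impedance models need a more capable headphone amplifier to reach the same loudness:

```python
def power_mw(voltage_rms: float, impedance_ohms: float) -> float:
    """Power in milliwatts delivered into a load: P = V^2 / Z."""
    return 1000.0 * voltage_rms ** 2 / impedance_ohms

# The same 1 Vrms output into three common headphone impedances:
for z in (32, 250, 600):
    print(f"{z:3d} ohm: {power_mw(1.0, z):.1f} mW")
```

Actual loudness also depends on the headphones' sensitivity (dB SPL per mW), so this is only half the picture, but it shows why a 600 Ω model can sound quiet on a mixer's headphone output.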

Other Considerations

There are some other criteria not relating to sound quality that should also be considered when purchasing headphones.

Many high-end headphones offer a modular design which ensures that repairs are cheap and parts are plentiful. Non-modular headphone designs are often difficult or financially unviable to repair.

The comfort of the headphones is also important and if possible you should try as many designs as you can to find the best balance of sound quality and comfort.

Most headphones terminate in a stereo 1/4″ jack (sometimes a minijack underneath a 1/4″ adaptor), but some manufacturers produce models with an XLR connector. It is wise to determine which kind of connections are present in your studio and make your choice based on that. Most studio equipment and headphone amplifiers accept 1/4″ jacks, but when using stage boxes there are often XLRs provided for returns.


Bringing MIDI Drums to Life

Posted by Giacomo on February 10, 2012
Tips and Tricks

When used in conventional guitar based songs, MIDI drums can sometimes sound robotic or lifeless. In this short Audiocourses tutorial we’ll examine a few ways to breathe some life into your MIDI drum parts.


For this tutorial we are going to be using Cubase SX as our sequencer (although similar results can be obtained from most of the mainstream sequencers) and triggering drum samples from a software sampler such as Battery, Halion etc. If you don’t have a software sampler then you can find many free options on the KVR website.

Please note that you will find it hard to recreate a natural drum sound if you are using inappropriate drum samples (e.g. Roland 909 sounds). For the best results you need one-shot samples of real drums. There are sites online that offer free or cheap drum samples, and many music technology magazines also give samples away on cover-mounted DVDs.

Let’s start with a very straight beat programmed directly into Cubase’s drum editor with all note velocities left unchanged (i.e. 100). You can see this in the screenshot on the left.

I’m going to go for a British indie sound a la Snow Patrol’s ‘Chocolate’ and The Stone Roses’ ‘I Am the Resurrection’.

Example 1

As you can hear there is a definite robotic feel to the drums and they don’t sound particularly lifelike at all. Most obvious is the uniform velocity of all the drums – a real drummer never hits the drums with such uniform velocity, so let’s make some changes in the velocities to give more of a real feel.

As you can see from the screenshot to the right, I’ve made variations in all of the velocities to try to get a bit more realism into the sound.
Example 2 Velocity Edited Hats

That sounds a lot better than the original take, but for my ears it is still too robotic due to the overly-perfect timing – if we were to loop that bar over and over again it would sound more obvious that it was programmed.

A drummer, being human and therefore imperfect(!), will always have slight changes in his/her playing in terms of timing so we need to replicate that. We could go through the entire song and move the positions of every single drum hit by a small amount but that would be very time consuming and could also make it sound overly sloppy. What we need to do is have a random element to the positioning that has upper and lower boundaries to prevent the timing becoming ‘bad’ rather than ‘human’.

Luckily for us Cubase allows this to be done with the minimum of fuss! Select the MIDI drum track and choose Track Parameters from the Inspector. You should see a couple of drop down boxes underneath the heading Random, and these allow us to choose parameters to randomize. In this instance we want to choose Position from the first drop down box and then determine the range that our random positioning can work within. When working with Position we use a unit called ‘ticks’, which are very small units indeed – by default there are 120 ticks to a 1/16th note, although this can be changed in the Cubase settings. For this style of drum pattern I’d imagine that a range of about 6-10 ticks should give us a human feel. Bear in mind that a drummer can be slightly early or late with timing, so to give a range of 10 we should set the min limit at -5 and the max limit at 5, giving us a range of 10 with 0 (i.e. our original positioning) in the middle. For other styles of music where the drummer may ‘push’ the beat you could set the limits with a lower min value and a higher max value.
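The same idea can be expressed outside Cubase. A minimal sketch of bounded random humanisation applied to (position, velocity) pairs, using the 120-ticks-per-1/16th resolution mentioned above (the function and data are illustrative, not a Cubase API):

```python
import random

def humanise(hits, pos_range=5, vel_range=4, seed=None):
    """Nudge each (position_ticks, velocity) pair by a bounded random
    amount, keeping velocity within the valid MIDI range 1-127."""
    rng = random.Random(seed)
    result = []
    for pos, vel in hits:
        pos += rng.randint(-pos_range, pos_range)
        vel = max(1, min(127, vel + rng.randint(-vel_range, vel_range)))
        result.append((pos, vel))
    return result

# Four 1/16th-note hi-hats at 120 ticks per 1/16th, all at velocity 100:
beat = [(0, 100), (120, 100), (240, 100), (360, 100)]
print(humanise(beat, seed=42))
```

The bounds are the whole point: unbounded randomness sounds sloppy, while a few ticks either way merely sounds human.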

Whilst we’re adding some random values, we could also add some ‘randomness’ to our velocity to further increase how human the drums sound – choose Velocity from the drop down menu and then choose the min and max values – in this instance I chose a min of -4 and a max of 4.

We should now hear a subtle improvement in our drum loop:
Example 3
Listen for how the hats and the snare, which fall on the same divisions, are now not quite as tight as they were but are still ‘in time’.

At this point we could move on to the sound of the drums. Although they do not sound bad, they are slightly ‘weak’ sounding and could do with a bit more body and punch.

What I decided to do with this example was place a distortion plug-in as an insert on the drum output and then place a compressor in the next insert slot. You can then use a combination of the gain and output levels in the distortion plug-in to drive the compressor. As I don’t want a complete garage-band style drum sound I have gone easy on the distortion – I used the bundled Quadrafuzz for this as you can tweak each frequency band before distortion.

After being distorted the signal is fed into a compressor with an attack of around 15ms and a release of about 130ms – I tweaked the ratio and threshold until I got a sound I liked. This ended up being the result of 10dB(!) of compression on peaks, but it really emphasized the ring of the snare and gave that very fat and warm indie sound I was trying to achieve, albeit with some pumping. This may be too extreme for some tastes, so as always use your own ears;
Example 4

Finally I wanted to bring a bit of coherence to the kit, so I set up a plate-style reverb with about 1s decay, no pre-delay, and with the low end rolled off in the EQ section of the return channel to prevent the kick drum ‘muddying’ the reverb return. I sent some of the kit via an FX send, again using my ears rather than watching how much I was sending on the meter. As you can hear, the reverb helps to bring the kit together into a defined ‘space’:
Example 5

Hearing our final drum sound in context with some typically ‘indie’ instrumentation should reveal how successful our attempts have been;
In Context

Hopefully this has demonstrated some of the ways you can add realism to your MIDI drum parts – each project requires a different approach so the settings used here should only be considered a guide – as always your ears are the greatest judge!


Recorder Man Technique

Posted by Giacomo on February 24, 2008
Tips and Tricks

Drums can be such a tricky instrument to “nail” that often fewer microphones can in fact be more!

If you’ve ever tried to record drums with lots of microphones you will be aware that “phase issues” typically rear their ugly head and can bring on some serious comb filtering, which, to cut a long story short, sounds nasty!

The RecorderMan technique is one such method: it is quick, easy and phase-coherent, and almost always yields good results. Watch the video and give it a try!


