Recording LP Albums On Your Computer

Posted by Giacomo on February 10, 2012
Tips and Tricks

Most of us born before the 90s have an analog record collection that can best be described as somewhere between ‘Spartan’ and ‘Now What Am I Going To Do With All These Albums?’. With the minuscule cost of digital media these days, the natural desire is to transfer your analog record collection to digital format, whether on CD-R disks, as an MP3 library on your computer, or both. How is this accomplished?


I’m glad you asked that question! Here is a list of the items you need to accomplish the transfer of your LPs to your computer:

  • some albums
  • a turntable with a reasonable cartridge
  • a phono preamp
  • a pair of RCA cables
  • a computer with sufficient hard drive space
  • a software application that can record audio directly to the hard disk in the computer
  • a hardware interface (a.k.a. a sound card)
  • a software application that can record digital audio files to the CD burner in the computer
  • some blank digital media

Let’s briefly discuss each one of these items.

Before you start any recording, you might want to clean up your albums so that they will sound their best for the recording process. If the albums have been sitting in a box for several years, you can bet they have dust all over them, especially down deep in the grooves. Invest in a commercial record cleaner such as the Bib Groov-Kleen Audiophile ($20), the Discwasher 1006 ($15) or the Radio Shack 42-117 Pro Record Cleaner ($9). Heavily thrashed albums may require a professional cleaning if you are really serious about this process.

Any turntable that can hold a fairly constant speed and has a phono cartridge in reasonable shape can be used for this task. If the turntable speed is a problem, try replacing the belt if it is a belt-drive turntable. If the phono cartridge is shot, it is best to replace it with a new one to achieve the best results. Make sure the cartridge is properly aligned and that the tone arm counter-weight is set correctly for proper tracking with that particular cartridge.

The problem most people encounter with a turntable is that its output can’t be plugged directly into the line-level input of any recorder. Most turntable cartridges put out a signal on the order of just a couple of millivolts (mV). The turntable also can’t just be plugged directly into the mic inputs on a mixer, recorder or computer, even though those are set up to handle a signal with an amplitude of only a couple of mV. The reason is that when vinyl LPs are cut, a special equalization curve (the RIAA EQ curve, named for the Recording Industry Association of America) is applied to the signal. This curve attenuates low frequencies and accentuates high frequencies when the disk is made, to account for the limitations of the vinyl LP medium. When the LP is played back, the opposite EQ curve is applied to flatten the signal out again (i.e., boost the low frequencies and reduce the high frequencies). This special EQ resides in what is known as a phono preamplifier.
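If you are curious what that playback curve actually looks like, it is defined by three standard time constants (3180 µs, 318 µs and 75 µs). Here is a minimal Python sketch of the de-emphasis response; treat it as an illustration of the shape of the curve, not a calibration reference:

```python
import numpy as np

# Standard RIAA time constants, in seconds
T1, T2, T3 = 3180e-6, 318e-6, 75e-6

def riaa_playback_db(f, ref_hz=1000.0):
    """Gain of the RIAA playback (de-emphasis) curve in dB,
    normalised to 0 dB at ref_hz."""
    def mag(freq):
        w = 2 * np.pi * freq
        h = (1 + 1j * w * T2) / ((1 + 1j * w * T1) * (1 + 1j * w * T3))
        return np.abs(h)
    return 20 * np.log10(mag(f) / mag(ref_hz))

for f in (20, 100, 1000, 10000, 20000):
    print(f"{f:>6} Hz: {riaa_playback_db(f):+6.1f} dB")  # ~+19 dB at 20 Hz, ~-20 dB at 20 kHz
```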

You will need a phono preamplifier to record LPs from a stereo turntable, but many new receivers and amps do not have a phono preamp built in. The least expensive standalone phono preamp I have seen is the MCM Electronics P/N 40-630 for $13.95. This will fit the bill nicely in getting your audio from LP to computer. Call them at 800-543-4330 or go to mcmelectronics. Another low-cost phono preamp is the Rolls VP29 for about $55; I recommend this one for the best price/performance ratio. You can find it at several places, including bswusa.

Generally, most turntables come with their own set of stereo cables, and you will connect these to the phono preamp inputs. If there is a ground wire coming from the turntable, connect it to the ground screw on the phono preamp (if there is one) or connect it under a chassis screw on the phono preamp. The sound card you have in your computer dictates what type of cable you need to go from the phono preamp outputs to the sound card input jack(s). Take a look at the sound card audio input jack(s). It may be a stereo 3.5mm jack or it may be a pair of RCA jacks. Buy a set of cables that will allow you to connect from the phono preamp RCA output jacks to the particular input jack(s) on the sound card. Make sure you use the Line In jack on the sound card, not the Mic In jack. If possible, try to keep these audio cables 6 feet long or less.

The computer you use can be a PC or Mac. All that is important is that you understand how to work the computer and that it has enough horsepower (RAM and hard disk space) to run the software application you have chosen for recording your albums. For reference, every stereo minute of uncompressed CD-quality digital audio requires roughly 10MB, so 1 hour of digital audio will take up about 600MB on your hard disk drive.
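The arithmetic behind that rule of thumb is straightforward. A quick sketch, assuming CD-quality settings (44.1kHz, 16-bit, stereo):

```python
# CD-quality audio: 44,100 samples/sec x 2 bytes/sample x 2 channels
bytes_per_second = 44100 * 2 * 2            # 176,400 bytes every second

mib = 1024 ** 2                             # one 'megabyte' as most OSes report it
per_minute = bytes_per_second * 60 / mib    # ~10.1 MB per stereo minute
per_hour = per_minute * 60                  # ~606 MB per hour

print(f"{per_minute:.1f} MB/min, {per_hour:.0f} MB/hour")
```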

The software application you use is a matter of choice and must be compatible with the computer platform (PC or Mac) you have. If you bought a decent sound card for your computer, it usually comes bundled with some sort of sound recording program that will allow you to record external audio from the Line In jacks and digitize it to your hard drive. If you don’t have a sound recording application on your computer, you can get one from the Internet. Wave Repair has an audio recording application that you can download as freeware at waverepair. Total Recorder is available at highcriteria for $12. Audio MP3 Sound Recorder can record any audio streaming through your sound card to your hard drive for $15 at mp3-recorder. CD Wave is available at cdwave for $15. LP Recorder is available at cfbsoftware for $50. For the Mac, I really like Micromat’s SoundMaker for $50 because it also includes a powerful audio editor (micromat soundmaker).

As far as PC sound cards go, I would avoid the no-name el cheapo models. Stick with the name-brand models that come with powerful bundled software, such as the Creative Sound Blaster Live! 5.1 ($40). If your wallet is bigger, try the Creative Sound Blaster Audigy or Audigy 2 family of products. Other sound cards can be found at nextag. Macs come from the factory with their own built-in sound capabilities, but you can add higher quality audio interfaces via USB if you desire (all it takes is money). For the Mac, you can add USB interfaces from Griffin Technology ($35 at griffintechnology) or from MacAlly ($49 at macally); however, you will still need a RIAA phono preamplifier.

Hook up all the connections, fire up the computer, launch the sound recording software application, put on an album and monitor it with your computer speakers. If you are getting hum, make sure that all the grounds are connected and that all the audio cables are seated properly in their jacks (and that they are free from oxidation). Also try powering everything from the same AC outlet. If you still have the problem, try disconnecting cables from the computer system that are not being used in this recording session (such as cable modem connections, TV cable connections, and other digital bus connections).

Find the loudest part on the album (if possible) and set the recording level so that this section of audio is a few dB below the 0dB level on the recording level meters of the audio recording software application. If the incoming audio goes over the 0dB level on a digital recording, it will definitely result in ugly distortion and ruin your recording. (Digital audio recording is not like analog audio recording on a cassette deck in this regard.) Now you can begin recording the album.
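If you want to double-check a test capture after the fact, a short script can report the peak level. A rough sketch for a 16-bit WAV file (the filename here is hypothetical):

```python
import wave
import numpy as np

# 'test_recording.wav' is a hypothetical name - point this at your own test capture
with wave.open("test_recording.wav", "rb") as w:
    assert w.getsampwidth() == 2, "this sketch assumes 16-bit samples"
    samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

peak = np.max(np.abs(samples.astype(np.int32)))   # int32 avoids overflow on -32768
dbfs = 20 * np.log10(peak / 32768.0)
print(f"Peak level: {dbfs:.1f} dBFS")
if dbfs > -3.0:
    print("Very hot - consider backing the record level off a little.")
```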

If the recording software you are using also has an editor associated with it (or if you have a digital audio editor as part of another program), you can record the whole album side and then split the tracks up into separate audio files later with the editor. This is quite a bit less tedious than recording one album track at a time and creating a digital audio file of just that one track, although it can be done that way. Continue recording until you have converted the whole analog album into one or more digital audio files.

Once you have converted your album tracks into WAV (on a PC) or AIFF (on a Mac) uncompressed digital audio files, you can burn an audio CD-R and/or convert those uncompressed audio files to the MP3 compressed audio format. To burn an audio CD-R, you will need a CD-R or CD-RW drive in your computer and the software that came bundled with it to perform the actual creation of the audio CD-R. Note that this CD burning software is not the same as the sound recording software you initially used to record the analog audio into your computer. You could also buy one of the commercial programs available now, such as Roxio’s Easy CD & DVD Creator 6, to accomplish this task. To convert an uncompressed audio file to the MP3 format, you will need the proper conversion software, such as MP3 WAV Converter (americanshareware), M3’s Encoder (mthreedev), AudioConvert (AudioConvert), or any of the MP3 encoders for PC, Mac and Linux machines offered at jumbo.
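If you end up with a whole folder of WAV files to convert, a command-line encoder such as LAME can be scripted. A minimal sketch, assuming the lame binary is installed and on your PATH (the folder name is hypothetical):

```python
import subprocess
from pathlib import Path

# 'album_tracks' is a hypothetical folder of WAV files you have recorded
for wav in sorted(Path("album_tracks").glob("*.wav")):
    mp3 = wav.with_suffix(".mp3")
    # -b 192 selects a 192 kbps bitrate; see 'lame --help' for other options
    subprocess.run(["lame", "-b", "192", str(wav), str(mp3)], check=True)
    print(f"Encoded {mp3.name}")
```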

Copyright 2003 Pacific Beach Publishing (John J. Volanski is an electrical/audio engineer who has recently written the book Sound Recording Advice. It is available at soundrecordingadvice.com or at fine bookstores everywhere in the USA.)


Mono Compatibility and Phase Cancellation

Posted by Giacomo on February 10, 2012
Tips and Tricks

When we listen back to music, whether it is through headphones or loudspeakers, what we are hearing are changes in air pressure caused by the movement of a diaphragm (i.e. the speaker cone).


The phase relationship between the two channels of a stereo mix can have many unpredictable outcomes for the audio engineer, especially if a mix requires mono compatibility. This AudioCourses article will outline some of the issues that can occur.

A simple way of imagining this phase relationship is to use a sine wave as an example. With our stereo sine wave, the peaks and troughs of the waveform occur at the same time in both channels, as illustrated in the screenshot below (left channel is the top waveform, right channel is the bottom waveform).

To hear this sine wave click here.

In speaker terms, the 0 on the vertical axis is when the speaker cone is ‘at rest’. When the waveform goes above 0 the speaker ‘pushes out’, whilst when the wave drops below 0 the speaker ‘pulls inwards’. It is this movement of the cone which causes the changes in air pressure which we interpret as ‘sound’. Using the scale on the vertical axis, you can see that the waveform peaks at +/-100. As the signal is ‘in phase’, we will hear the signal at the same amplitude at the same time in both ears.

Now let’s see what happens if we invert the polarity of one channel (in the case of a sine wave this is the same as changing the phase by 180° on one channel);

To hear this sine wave click here.

From the screenshot above it is clear what has happened to the waveform – one channel, in this case the right channel, has been turned ‘upside down’. The result is that the peaks and troughs now occur ‘out of phase’. Thinking in terms of speakers, when your left speaker cone is fully pushed out, the right cone will be fully pulled in, so our ears are hearing ‘opposites’. Looking at the vertical axis again, when we are hearing the signal at 100 in one ear, the signal will be at -100 in the other. If you compare the 2 MP3 clips of the sine wave you should hear that the in-phase waveform sounds ‘solid’. The out-of-phase waveform will sound less ‘solid’ due to the conflicting levels arriving at each ear, although it may appear more ‘stereophonic’ than the essentially mono original waveform. This apparent ‘stereo’ effect is one of the artifacts of poor phase relationships – the stereo image does not appear accurately to the listener. With a sine wave this is not so apparent, but with a full musical mix it is far more disconcerting. Low frequencies are particularly affected by phase problems such as this.

Whilst it might appear that this is a problem caused during the mixing and/or mastering process, think again. A surprising number of people in home studios have their speakers wired incorrectly, which ensures that one speaker is consistently 180° out of phase with the other – always ensure that your speakers are wired correctly to your amplifier. This goes for home stereos and entertainment systems too. Trying to mix with a monitoring setup that has inverted polarity on one channel is an absolute nightmare!

The most serious issue with inverted phase is when our audio material is played in mono (this could be on someone’s transistor radio, a music-on-hold system, mono MP3s for web streaming, etc.). When a stereo file is converted into mono, the 2 channels are summed together. With an in-phase signal, such as our original waveform, the channels reinforce each other due to their identical characteristics. A signal with a 180° phase shift, such as our second example, will actually result in the signal ‘disappearing’. Looking at the values on the vertical axis it is easy to see why – an inverted-phase signal will always have equal values of opposite polarity (+/-). If we were to add 50 to -50 we would end up with 0, and therefore silence! To prove the point, here is the out of phase sine wave bounced down to mono within the audio editor;

As you can see there is nothing, zero, nada, zilch. The waveform sits completely on the 0 on the vertical axis. Of course if you play this back you will be met with silence as the speaker will not move and therefore no air will be displaced.
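You can reproduce this experiment numerically rather than in an audio editor. A minimal sketch (the sample rate and frequency are arbitrary choices):

```python
import numpy as np

sr = 44100                                 # sample rate
t = np.arange(sr) / sr                     # one second of time values
left = np.sin(2 * np.pi * 440 * t)         # a 440 Hz sine in the left channel
right = -left                              # polarity-inverted copy in the right

mono = (left + right) / 2                  # summing to mono
print(np.max(np.abs(mono)))                # 0.0 - complete cancellation, i.e. silence
```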

There are still a lot of mono playback systems in use so this should reinforce the need to regularly check your mixes in mono – most software packages allow you to do this in the master section of the mixer. Many analogue desks have either a mono button in the master section or a mono sum output so you can connect a mono speaker.

In this article we have talked about sine waves and inverted phase. Sine waves are a great way to demonstrate this principle but the sounds you are likely to record are much more complex waveforms, and the phase relationships are less likely to be 180° as phase relationships are frequency dependent. With more harmonically complicated waveforms artifacts such as phasing/comb filtering will occur which often manifest as periodic reinforcements and attenuations of the signal. It is not unknown for some software synth presets to output signals which all but disappear when mixed to mono.

To illustrate the effects of inverted polarity on one channel of a stereo mix, here are some demonstrations;

loop #1 – normal phase

loop #2 – inverted phase on right channel

loop #3 – mono mix of loop#2

The difference between the first 2 loops should be most obvious in terms of the ‘solidity’ of the stereo soundscape and the bass frequencies.

In the mono mix of the loop you can still hear some of the original audio but the majority has ‘disappeared’. Audio signals which were panned centrally (i.e. equal in both speakers) have been cancelled out whereas signals that were off centre (including FX returns from stereo reverbs and similar) or truly stereo have been retained.

To reiterate, the above examples are extreme examples of phase problems but they should give you an indication of some of the problems that poor phase alignment can cause.

There are some common recording situations when you should check the phase of the signals in your DAW;

The easiest one to visualize is when recording a source both acoustically and electronically at the same time. A great example is when a bass guitar is sent via a DI to the desk while a microphone is placed in front of the bass amplifier. The short time it takes the acoustic signal to reach the microphone, relative to the electrically direct DI signal, will inevitably cause phase problems. In this instance it is sensible to move the signal recorded by the mic back in time so that the phase relationship is maintained.

A similar issue occurs when using a mic both above and below a snare drum (or any drum for that matter). The mic on the top of the snare will react to the stick hitting the snare skin and the consequent movement of the skin away from the mic. The mic on the bottom of the drum will instead react to the skin moving towards it due to the contrasting positioning. Again, opening up your DAW and lining up the waveforms so they reinforce each other may provide a more satisfactory result.

Recording drums with many microphones such as overhead and close mics can cause myriad phase problems between all the mics. Again, zooming in on the waveforms within your DAW and shifting some of the waveforms around may provide a more ‘solid’ soundstage.
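If you would rather not line the waveforms up entirely by eye, the lag between two such recordings can be estimated with a cross-correlation. A toy sketch, with impulse ‘clicks’ standing in for real DI and mic tracks:

```python
import numpy as np

def find_lag(reference, delayed):
    """Estimate how many samples 'delayed' lags behind 'reference'."""
    corr = np.correlate(delayed, reference, mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)

# Toy stand-ins: a click, and the same click arriving 40 samples later
di = np.zeros(1000);  di[100] = 1.0
mic = np.zeros(1000); mic[140] = 1.0

print(find_lag(di, mic))   # 40 - shift the mic track 40 samples earlier to align
```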


Live PA – Basic Sound Engineering Overview

Posted by Giacomo on February 10, 2012
Tips and Tricks

Live Public Address (P.A.) systems come in many different shapes and sizes, and can often leave the newbie confused about even the basics. This article is aimed at giving a basic, non-equipment-specific overview in an attempt to de-mystify some of the typical errors a newbie can make with live P.A. systems.


The function of a Live P.A.

Two types of Live P.A.

Reinforcement

  • This is for speech or music which would sound good in a small room without artificial assistance.
  • In the case of a classical guitar, which is a very quiet instrument, the natural sound is only good for an audience of around 200 – 300, depending on room size.
  • For an audience of 500 – 600 it is possible to reinforce the sound so that everyone can hear clearly, and to most people it will still sound natural.

Amplification

This is where the original sound is insignificant in comparison with the amount of sound coming from the Live P.A.

The aims of public address:

1.) To provide adequate volume (not necessarily loud).

2.) To provide adequate clarity.

A Thought on Acoustics

A discussion on sound reinforcement is impossible without a mention of acoustics.

The free field

If the venue is an outdoor event then the engineer need not concern himself/herself a great deal with acoustics as this is the ideal situation.

Sound in the open air travels away from the source and keeps going until its energy is used up (inverse square law). There are no walls for the sound to bounce off and return to interfere with the next wavefront.

Indoor Live P.A.

Sound behaves in much the same way as any other wave: it bounces off walls (reflects), bends around them (diffracts), and cannot pass directly through most materials. Therefore speaker placement becomes important, as does speaker coverage.

Consider Figure 1.0.

The sound waves emitted from the cabs have a coverage of 120 degrees, so it can be seen that there will be obvious ‘blind spots’ in the coverage.

Figure 1.0 shows a small venue and a typical coverage angle of a driver.

It should be pointed out that the directionality of sound waves is somewhat frequency dependent and that the above diagram shows a potential problem for high frequencies.

High frequencies have a smaller wavelength than low frequencies, hence they are very directional (λ = v/ƒ, where λ = wavelength, v = velocity and ƒ = frequency).

Objects placed in the path of HF block them, whereas LF tend to bend (diffract) around them. Therefore a subject positioned behind the wall in Figure 1.0 will hear an attenuation in HF, hence a dull sound.
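To put some numbers on that frequency dependence, here is a quick sketch of wavelengths across the audio band, using the 340 m/s figure for the speed of sound used in this article:

```python
v = 340.0    # speed of sound in air, metres per second

for f in (50, 250, 1000, 5000, 15000):
    print(f"{f:>6} Hz -> wavelength {v / f:6.3f} m")
# 50 Hz is 6.8 m (bends round obstacles); 15 kHz is ~2 cm (easily blocked)
```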

Golden rule number 1.

Always ensure nothing is in the line of sight of an HF driver, otherwise you may encounter loss of HF.

Reflection and phase cancellation (comb filtering )

Consider Figure 1.1.


Figure 1.1 shows the path of a sound wave. The venue is assumed to have reflective surfaces, and after a period of time the wave can be seen to have travelled around the room, bouncing off the surfaces and crossing over other sound waves.

This situation can create what is known as ‘comb filtering’. As you can imagine, the sound takes time to travel around the room (at 340 metres/sec), and if the reflected wave (having been time delayed) coincides with another wave whose polarity is the inverse, or a fraction of it, then cancellation will occur.

Conversely, if the merging waves have the same polarity then addition will take place.

The name comb filtering is adopted because, looking at the frequency response of the result, the shape of the teeth on a comb can be seen, showing alternating areas of addition and subtraction.


Figure 1.2 The peaks and troughs of comb filtering can be seen in this frequency response plot.
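The notch positions fall wherever the reflection arrives exactly out of phase with the direct sound. A minimal sketch, assuming an equal-level reflection delayed by an arbitrary 1 ms:

```python
import numpy as np

tau = 0.001   # the reflection arrives 1 ms after the direct sound (arbitrary example)

# Direct sound plus an equal-level delayed copy: |H(f)| = |1 + e^(-j*2*pi*f*tau)|
for f in (250, 500, 750, 1000, 1250, 1500):
    mag = abs(1 + np.exp(-2j * np.pi * f * tau))
    print(f"{f:5d} Hz: gain {mag:.2f}")   # 0.00 at 500 Hz and 1500 Hz - the notches
```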

Golden rule number two.

Ensure the minimum amount of reflection by pointing the speakers in a suitable direction.

The main thing is to keep comb filtering to a minimum. This is sometimes easier said than done, as most venues have reflective surfaces; there will always be pockets of ‘bad sound’ and pockets of ‘good sound’.

If you wander round a venue and listen to the mix you will find these spots. It is your job as an engineer to keep the ‘bad spots’ to a minimum through speaker positioning, coverage and equalisation.

In professional venues architectural acousticians get paid lots of money to design environments which produce ‘good spots’ throughout the venue by such methods as absorption paneling.

Standing waves

Standing waves are a result of sound being reflected back and forth between two parallel surfaces.

As the first wave reflects it meets a newly arriving wave, and the result can be that a stationary wave is produced which resonates at a frequency dependent on the transmitted waves and the distance between the parallel surfaces.

The wavelength of the transmitted waves in relation to the distance between the parallel surfaces is therefore an important consideration.

If this distance equals the wavelength, or a ratio of it, then a standing wave could be made to oscillate.

Example

The wavelength of a 20 Hz wave is 17 metres; if this wave was transmitted between two parallel surfaces 17 metres apart, an oscillation could occur.

Standing waves can be a problem in venues whose dimensions coincide with particular wavelengths.

At LF, standing waves can ‘creep up’ on the engineer as they gather energy and appear to ‘feedback’, which could of course occur if the standing wave was picked up by the microphones on stage and amplified.

Careful use of room equalisation and speaker positioning can combat standing waves to a degree.
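You can estimate in advance which low frequencies a pair of parallel walls will support. Here is a sketch of the axial mode series for the 17-metre example above:

```python
v = 340.0       # speed of sound, m/s
L = 17.0        # distance between the two parallel surfaces, metres

# Axial standing-wave (room mode) frequencies: f_n = n * v / (2 * L)
for n in range(1, 5):
    print(f"mode {n}: {n * v / (2 * L):5.1f} Hz")
# mode 2 is 20 Hz - the article's example, where one whole wavelength fits the room
```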

What’s wrong with a lot of live P.A.s

  • Low-efficiency speaker systems – cure: ensure you have efficient speakers.
  • Not enough amplifier power – cure: ensure you have plenty of amp power.
  • Poor frequency response – cure: ensure all components in the chain have a ‘flat’ response.
  • Missing half your audience – cure: ensure you have enough speakers, angled to cover everyone.
  • Room reverberation swamps the sound – cure: choose speakers with suitable directional and dispersion qualities, thus avoiding reflective surfaces.

Basic systems for two different sized rooms

Example 1

A small sized room having the dimensions of around 30 by 30 by 10 feet.


Figure 1.3

The system in block diagram form.


Figure 1.4

This would be a suitable setup giving adequate coverage.

The power amp would be rated at around 150/200 watts per channel and the speakers would be full range.

Example 2

A medium sized room having the dimensions of around 50 by 40 by 15 feet.


Figure 1.5


Figure 1.6

This system is known as a two-way system, and for this room a total power of around 1kW would be adequate. The audio spectrum is split in two at around 250 Hz, so two power amplifiers are necessary. Percentage-wise, the low end would take around 65% of the power (about 650 watts), leaving 35% (about 350 watts) for the mid and top end.

Large Live P.A.

Large live P.A. systems can often be anything from two-way to five-way, and can contain massive numbers of drive units for each separate bandwidth.

A large live P.A. system would have two engineers: one for the front-of-house mix and one for the monitor mix.

Often delayed loudspeakers are needed.

At a large open-air concert, say, a person standing 340 metres from the stage will not hear the emitted sound wave until one second has elapsed; if he/she is standing 680 metres away, two seconds will have elapsed before the sound can be heard.

This is due to the speed with which sound travels through the air, i.e. 340 m/s.

By the time the sound has travelled this distance it has suffered great losses. Therefore, further speakers will be needed for the audience to the rear of a concert.

The sound from these drivers needs to be delayed in order for it to be in phase with the sound emitted from the main drivers.


Figure 1.7 The basic configuration of delayed loudspeakers.

The delayed signal could be fed from groups, from the main mix, or from auxes, etc.
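Working out the delay itself is simple distance-over-speed arithmetic. A sketch, using the 340 m/s figure from this article:

```python
v = 340.0    # speed of sound, m/s (the figure used in this article)

def tower_delay_ms(distance_m):
    """Delay to apply to a loudspeaker placed distance_m downfield of the main stacks."""
    return distance_m / v * 1000.0

for d in (50, 100, 340):
    print(f"{d:>4} m from the stage: delay the feed by {tower_delay_ms(d):4.0f} ms")
```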

Further points to note:

A rock live P.A. should be as intelligible as a West End musical.

It should cover the audience evenly, although this is not always possible.

The system should be visibly in tune with the type of work and surroundings.


References:

Electro-Voice, The P.A. Bible.


‘Time Stretch’ in Cubase SX

Posted by Giacomo on February 10, 2012
Tips and Tricks

The ability to change the tempo of an audio recording without affecting the pitch was one of the great advantages of the DAW revolution. Many of us, from bedroom musicians through to Grammy-winning producers, rely on having the ability to ‘stretch’ the tempo of an audio recording. This tutorial will focus on some of the ways in which we can manipulate the tempo of audio in Steinberg’s Cubase SX.


The ‘Old Fashioned’ Way

People who came to SX via VST will remember, with frustration in some cases, this manner of time-stretching an audio file: the Time Stretch dialog box.

You can access this method by left clicking on the audio event/clip you wish to alter and then going to ‘Audio->Process->Time Stretch’

Unlike the other methods of changing tempo in Cubase SX, this method is more ‘mathematical’ and requires you to give information about the source file (such as the tempo, or the number of bars/beats/16th notes, etc.). This method is not the quickest or easiest way of changing the tempo of an audio file, but it does have advantages over the more intuitive methods. You can achieve a greater degree of accuracy for starters, and you can also quickly time-stretch to a tempo independent of your current project tempo. This is also a good method for batch processing a number of audio files from one tempo to another.
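The maths the dialog box performs is a simple ratio of tempi. A sketch (the tempo values are arbitrary examples):

```python
def stretched_length_percent(source_bpm, target_bpm):
    """New clip length as a percentage of the original when
    stretching from source_bpm to target_bpm."""
    return source_bpm / target_bpm * 100.0

print(f"{stretched_length_percent(100, 120):.1f}%")  # 83.3% - faster tempo, shorter clip
print(f"{stretched_length_percent(120, 100):.1f}%")  # 120.0% - slower tempo, longer clip
```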

This method also gives you a number of algorithm options such as MPEX 2, Standard, Drum and Realtime. The MPEX 2 algorithm in particular can produce very good results on both monophonic and polyphonic sources.


The Quick Way

Probably the quickest and easiest way to change the tempo of an audio file in Cubase is to use the ‘Sizing Applies Time Stretch’ option of the Object Selection tool (i.e. the arrow). You can access this tool in 2 ways: either click on the arrow tool twice (once if it’s already selected), or press the number 1 key on your keyboard (not on the Num Pad) to scroll through the tool modes until you see a little clock appear next to the arrow.

Once you have selected this tool you can change the length, and therefore the tempo, of an audio file by dragging the ends of it, much like you would change the length of a MIDI part. If you have ‘Snap’ enabled then the length will automatically snap to whatever Grid Type and value you have defined – if you are working with ready-made drum loops then setting the grid to ‘Bar’ will probably make most sense, as the event will then snap to exact bar lengths as seen below.

Changing the tempo of an audio event.

The quality of this time stretch depends on the algorithm that you choose – to choose the default mode go to File->Preferences->Editing->Audio and choose the required algorithm under the ‘Time Stretch Tool’ heading.

One thing which catches many people out is forgetting to reset the Object Selection tool back to ‘Normal Sizing’ after you have finished stretching your audio file.

The ‘Musical Mode’ Way (SX 3 only)

‘Musical Mode’ is a feature new to SX 3 and gives Cubase some of the real-time time-stretching that users of ACID and Ableton Live are used to, without resorting to changing the file format a la .REX files.

To use Musical Mode you must first use the Audio Tempo Definition Tool to specify the length of the clip in bars and beats, or to specify the tempo of the source file. To do this you must open the Sample Editor by double-clicking the event that you wish to time stretch. Once in the Sample Editor you need to turn on the Audio Tempo Definition Tool by clicking on the relevant button on the toolbar.

Once this tool is selected, you should be able to enter information into the ‘Musical Controls’ area of the toolbar, such as Signature, Audio Tempo, Bars and Beats. If these options aren’t available then check they are selected by right-clicking on the toolbar and ensuring there is a tick next to ‘Musical Controls’. In a similar way to the Time Stretch dialog box, you should enter the tempo, or the length of the loop in bars and beats. If you are working with a loop from a commercial library then often the tempo of the loop will be included in the file name. If the tempo of the loop is not known then you can simply count the number of bars and beats in the loop and enter them into the relevant box, and the tempo will automatically be calculated for you.
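That automatic tempo calculation is easy to reproduce yourself. A sketch (the loop duration is a made-up example):

```python
def loop_tempo(bars, duration_seconds, beats_per_bar=4):
    """Tempo in BPM of a loop, given its length in bars and its duration."""
    return bars * beats_per_bar * 60.0 / duration_seconds

# e.g. a 2-bar loop in 4/4 that lasts 3.53 seconds:
print(f"{loop_tempo(2, 3.53):.1f} BPM")   # ~136 BPM
```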

Once you have correctly entered this information you can enable Musical Mode by clicking on the ‘Musical Mode’ button.

If you are working at a low screen resolution then sometimes the ‘Musical Mode’ button is not visible on the screen. If this is the case then it is easiest to right-click on the toolbar and temporarily remove some of the toolbar options you are not using. Alternatively you can drag the corners of the Sample Editor to make it larger than your screen and then move the Sample Editor around so you can see the options – this is quite a messy method, however.

Once you have turned on Musical Mode you can choose an appropriate ‘Warp Setting’ from the drop-down box – these are self-explanatory, so just choose the one that is nearest to the audio material you are stretching.

Upon closing the Sample Editor you should see that your audio event has now changed in length to fit the tempo of your song. Even more impressive is that if you now change the tempo of your song your loop(s) will automatically change in tempo to match the project tempo – this even works when using the tempo map to define ‘on the fly’ tempo changes!

The eagle-eyed amongst you may have noticed that I’ve not included the Hitpoint/Slice method of loop manipulation in this tutorial. It is fair to say that for most users the methods mentioned above should be suitable for most applications – we will cover the Slicing method in a future tutorial however.

Anyhow, stop reading this and get stretching!


Low Frequency Oscillator (LFO)

Posted by Giacomo on February 10, 2012
Tips and Tricks


A Low Frequency Oscillator, or LFO for short, is a component used in sound synthesis and sampling to create many different types of effect.


As the name suggests, an LFO operates at low frequencies, anywhere from 0.01Hz up to around 200Hz, and unlike conventional oscillators it is not intended to be a sound source per se, but rather a way of modulating other parameters.

For instance, if a tremolo (rhythmic changing of volume) effect was required, an LFO could modulate the amplifier section of a synth/sampler, and the volume would change at a rate determined by the frequency of the LFO – an LFO rate of 10Hz, for instance, would see the volume change 10 times per second.
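In code terms, tremolo is just a multiplication of the signal by a slowly varying gain. A minimal sketch (the rate and depth values are arbitrary):

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr                      # one second of audio
carrier = np.sin(2 * np.pi * 440 * t)       # the sound being modulated

rate, depth = 10.0, 0.5                     # 10 Hz LFO, 50% depth
lfo = np.sin(2 * np.pi * rate * t)
gain = 1.0 - depth * (lfo + 1.0) / 2.0      # gain swings between 0.5 and 1.0
tremolo = carrier * gain                    # volume dips 10 times per second
```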

An LFO can be routed to many different parts of a synth or sampler to create many different effects. A rhythmic wah effect could be created by sending the LFO output to a filter’s cutoff control. On modern digital and software-based synths and samplers the routing possibilities are almost endless, and virtual instruments such as z3ta+ by RGC:Audio push the boundaries of synthesis with their modulation matrices.

Another common use of an LFO is to modulate the pitch of an oscillator (creating vibrato), but with the LFO rate dictated by a controller such as the modulation wheel – this allows keyboard players to inject some of the feel of an instrument such as a guitar into their playing.
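Vibrato works the same way, except the LFO is applied to the oscillator’s frequency rather than its amplitude. A minimal sketch (again with arbitrary rate and depth values):

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
rate, depth_hz = 6.0, 8.0          # 6 Hz vibrato, +/- 8 Hz pitch deviation

# The LFO modulates the oscillator's instantaneous frequency;
# integrating (cumsum) that frequency gives the oscillator's phase.
lfo = np.sin(2 * np.pi * rate * t)
freq = 440.0 + depth_hz * lfo
vibrato = np.sin(2 * np.pi * np.cumsum(freq) / sr)
```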

The waveform of an LFO is often selectable from standard waveforms such as a sine, saw, triangle and square.

Below are some audio files showing the effect of an LFO on various parameters.
On all examples the LFO is set to approx. 12Hz with a triangle waveform:

No LFO Modulation
LFO modulating Pitch (vibrato)
LFO modulating Amplitude (tremolo)
LFO modulating filter cutoff (wah style effect)
