EQ Automation on Incidental Effects

using EQ automation to fade incidentals in and out goes a long way to achieving a natural sound

Creating more realistic incidental effects is one of the most important things I’ve learnt over this period of writing music full time… and when I say ‘incidental effects’ I’m referring to background effects which enhance depth, or create additional emotion or tension in music.  In electronic dance music these are also often referred to as swells, drops, sweeps etc… and at a fundamental level can be implemented using faded-in white noise (for a swell) and a crash cymbal (for a drop).

These kinds of sounds are interesting because the listener doesn’t usually explicitly notice them, but will notice the absence of them, or notice them in a bad way if they’re inappropriately used or sound unnatural.  Having appropriate, smooth, and natural sounding incidentals is a key factor in making your music sound like it’s professionally produced, rather than sounding like it was produced in a bedroom.

At a basic level you’ll usually want your incidental effects to sound relatively natural, and using EQ automation to fade incidentals in and out goes a long way to achieving this.  In the physical world, sounds that are further away from us are perceived as having a rolloff of low and particularly high frequencies, as compared to the same sound emanating from a closer position.  If we take the aforementioned white noise swell and crash drop as an example… a basic sequencing of this would fade in the white noise (using volume automation), and let the crash hit and fade out naturally at the peak point.  The following clip gives an example of this over a rough loop idea…

This sounds OK, but also somewhat obvious… i.e. the listener will be subconsciously aware of the white noise right from the point where it starts.  Because these types of effects have been used so much, and for so long, in electronic dance styles, using them as above runs the risk of sounding predictable and uninteresting to the listener (something along the lines of ‘ah, there’s the white noise, so I guess a peak point is coming!’).  By using an automated low pass or high shelving filter along with volume to fade the white noise in, it sounds more like the natural physical world, and also kind of ‘sneaks up’ on the listener… i.e. it comes in much less obviously, and hence the listener doesn’t overtly notice the sound so much, but is still drawn into the effect.  Using an upward swept high shelf filter on the white noise (plus a little downward sweep on the crash) sounds like this…

To me it’s a subtle, but also significant difference, and a first step towards getting more realistic incidentals, and an overall more professional sound.  Increased realism could then be achieved with panning, and additionally automating your reverb sends, to have more reverb when the sound is ‘further away’.
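
For anyone who wants to experiment with the idea outside a DAW, below is a minimal sketch in Python (numpy and scipy assumed)… note I’ve swept a simple one-pole low pass upwards rather than using a high shelf as in the clip above, since it’s easier to code and gives a similar ‘sneaking up’ effect…

```python
# A rough sketch of the swell technique: 4 seconds of white noise faded
# in with volume automation, while a one-pole low pass sweeps its cutoff
# upwards so the high frequencies (and the obviousness of the noise)
# arrive last.
import numpy as np
from scipy.io import wavfile

SR = 44100
DUR = 4.0
n = int(SR * DUR)
noise = np.random.uniform(-1.0, 1.0, n)

# Volume automation: linear fade from silence up to full level.
gain = np.linspace(0.0, 1.0, n)

# Filter automation: sweep the cutoff from 200Hz up to 18kHz.
cutoff = np.geomspace(200.0, 18000.0, n)
alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / SR)

out = np.zeros(n)
y = 0.0
for i in range(n):
    y += alpha[i] * (noise[i] - y)  # time-varying one-pole low pass
    out[i] = y * gain[i]

wavfile.write("swell.wav", SR, (out * 0.8 * 32767).astype(np.int16))
```

Rendering this and dropping the resulting file onto a track gives the same effect as the automation described above… the noise is felt well before its top end makes it obvious.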

As mentioned I’ve picked up many small techniques for improving incidentals during this year, so this will be the first of several posts on the subject.

Paralyzed by Choice

Imposing artificial limits to spark creativity

As mentioned in my last post, I’m currently in the middle of coming up with ideas for a couple of new tracks.  If I’m trying to create a melodic or percussive pattern, there are an infinite number of combinations of properties of sound which could make up that pattern… i.e. varying rhythm, length, envelope, pitch, density, etc… and this is without considering the sound’s timbre.  To that point, when producing electronic music ‘in the box’, ‘where to start’ can be hard to decide.  I think the number of instrument plug-ins I own is very conservative compared to friends who are producing, and other artists I read about in magazines.  Yet, if I want to start making a lead or bass sound, I’ve got over 10 virtual synths to choose from, and that doesn’t include my lone hardware synth, nor the ones that came with Reaper.  It’s easy to be so overwhelmed with choice that you don’t even know where to start.

This problem is not new to electronic music… it’s something that musicians and artists have faced ever since there have been musicians and artists.  For lyric and song writers, one remedy for this situation is the ‘cut up technique’… apparently used by numerous famous musicians including David Bowie and Kurt Cobain.  When a song writer can’t find a starting point, they take a newspaper or similar, cut out a bunch of random words, mix them up, and write a song using only those words.  Imposing an artificial limitation, and then forcing yourself to work within that limitation, is a proven way to ignite inspiration.

Over the last 6 months, I’ve found that equivalent techniques of imposing some kind of artificial limit on your choices can really help to get things moving when you’re stuck for ideas.  For example, in the aforementioned case of trying to come up with a bass or lead line, I’ll pick just one instrument, and resolve to make the part using just that instrument.

Similarly, if I’m looking for a percussive sound… say a hi hat sample… I’ve got at least 6 or 7 sample packs which contain decent hi hat sounds… to audition all of them could potentially mean cycling through close to 1000 samples.  What I’ll often do is restrict myself to one sample pack, and decide that ‘I have to find a decent sound within just this pack’.

In current music production it’s very easy to get paralyzed by an overabundance of choice.  Sometimes artificially limiting this choice can be a good antidote.

Waiting for Inspiration

You need patience, and the confidence to know that eventually the really good idea will come

Having just completed one track, and now working to come up with ideas for the next one, I’ve shifted from the more methodical, detailed, and predictable discipline of mixing, to the far more creative and abstract process of writing.  Having discipline and persistence with the writing part (especially when it feels like no ideas are coming) has been one of the more difficult aspects of music production I’ve had to adjust to during this year.  I think a big reason for this is that it’s quite far removed from my usual work as a software engineer.  With software engineering, generally speaking, getting results is simply a product of time… unless you’re working in really cutting edge technologies or research, if you put in a full day’s work you can expect to get a proportionate amount achieved (and potentially the general feeling of satisfaction stemming from that).  Hence it was a very different experience for me back in the early months of this year, the first time I committed to a whole day of writing and came out with absolutely nothing at the end!  Even more so when I spent the good part of a week going a fair way down the road of putting together a track, only to end up deciding it wasn’t going anywhere, and shelving it.  This necessitated a big adjustment to my approach to, and expectations of, work, and for me was one of the toughest parts of starting to write music full time… it required a lot of persistence to overcome the disappointment of spending time on something and feeling like I wasn’t achieving.

One thing that helped was being reminded that this is a natural part of the creative process, and pretty much everyone involved in artistic pursuits experiences it from time to time.  I was told about a quote from film director David Lynch, where he likened getting inspiration for films to ice fishing.  Something along the lines of… much like ice fishing where you have to wait by a hole in the ice, sometimes for a long time, for a fish to come along, inspiration for a really good idea can’t be rushed.  You need patience, and the confidence to know that eventually the really good idea will come.  Similarly with music, you might have to sit there for a day (or two, or a week) auditioning combinations of sounds before a good idea comes along which you can turn into a track.   The important thing is to have patience and persistence, and accept that you’ll probably come up with 10 average ideas before a really good one.

Another reassuring thing is that you never know when shelved or seemingly average ideas might be resurrected in the future.  In my case, the aforementioned idea I shelved after a full week of work, when combined with a different bass line a couple of months later, was transformed, and went on to become the basis for a track I was quite happy with.

Using Pink Noise as a Reference when Mixing

play the mix of the track over pink noise, to give an even reference level against which to assess the level of individual elements

I said in my last post that I’d write about some additional techniques I use to balance a mix.  One of these is to play the mix of the track over pink noise, to give an even reference level against which to assess the level of individual elements, and to try and get an impression of the balance of the elements independent of any room resonances or peaks or troughs in the monitor’s frequency response.  To explain…

A while back I read an interesting article by Eddie Bazil in Sound on Sound, where he discussed using pink noise to establish basic levels for each element when beginning a mix.  This got me thinking that the same technique could be used at the end of (and periodically during) the mix process, as a kind of sanity check to make sure the levels of the main elements are evenly balanced.

Hence, when mixing the last couple of tracks I did, I used exactly this technique, and periodically played the mix over pink noise.  The idea is to set the level of the pink noise quite high, so the main elements of the track are just ‘poking out’ above it.  You want to try and make sure the amount that each is ‘poking out’ is more or less even.
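
If your DAW doesn’t come with a pink noise generator, it’s easy to render a reference file yourself.  Here’s a minimal sketch in Python (numpy/scipy assumed)… it shapes a white spectrum by 1/√f, which gives the 3dB per octave power rolloff that defines pink noise…

```python
# Render 30 seconds of pink noise to use as a mix reference.
import numpy as np
from scipy.io import wavfile

SR = 44100
n = SR * 30  # 30 seconds of reference noise

spectrum = np.fft.rfft(np.random.normal(0.0, 1.0, n))
freqs = np.fft.rfftfreq(n, d=1.0 / SR)
freqs[0] = freqs[1]                # avoid dividing by zero at DC
spectrum *= 1.0 / np.sqrt(freqs)   # 1/f power = 1/sqrt(f) amplitude

pink = np.fft.irfft(spectrum, n)
pink /= np.max(np.abs(pink))       # normalize to full scale

wavfile.write("pink_reference.wav", SR, (pink * 0.5 * 32767).astype(np.int16))
```

Loop the resulting file on a spare track under the mix, and push its fader up until only the main elements poke through.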

The below clip is of the track Summer Wave played over pink noise as described.  The bass drum, bass line, snare/clap, and hi hat all sit above the level of the pink noise by a relatively even amount…

(Actually on listening to this again, if I redid the mix I’d probably bring the hi hat down, and the snare/clap up, just slightly… but this highlights an important point: you want to use this technique as a ballpark guide only, and still let creative and subjective opinion override it.)

The technique also gives you a way to check whether various elements have been compressed enough or not… e.g. if only the attack of the snare drum was audible, and the decay was lost under the pink noise, you’d probably want to look at applying a bit more compression to the snare.  Also if you’re mixing for radio and similar mediums, this technique somewhat simulates how listeners would hear the track in a very noisy environment, and again gives you a way to check that all the key elements are audible in those types of situations.

The other benefit of checking the mix this way, is that it gives you a point of reference which is less affected by room resonances, or the frequency response of your monitors.  That is… pink noise played through monitors will have the monitor frequency response, and any room resonances imparted on it… hence if you assess the levels of different elements against the pink noise rather than against other elements, it gives you a way to check the mix balance independent of any anomalies of room or speaker frequency response.  This can be difficult, as it’s natural to tend to assess the level of an element of the track against the other elements… you should instead focus on the amount each element ‘sticks out’ over the pink noise.

When used in combination with listening through multiple systems, and adjusting your listening position (if required) as discussed in my last post, this technique gives you an additional, useful way to check your mix balance.

Room Resonances

Being able to identify room resonances, and then work with and around them, is key to producing balanced mixes.

Most of us working in project studios are mixing and producing in environments which are far from acoustically perfect, and having to deal with frequency peaks and nulls in different parts of a room is an unfortunate but unavoidable reality.  Being able to identify room resonances, and then work with and around them, is key to producing balanced mixes.

I faced room resonance issues when mixing my most recent track.  My studio room is far from acoustically ideal, with concrete walls (although covered on 3 sides) and almost-square dimensions (apart from a corridor at the back, forming an overall ‘L’ shape).  My normal sitting position when mixing is centred in the room, and forms an equilateral triangle with the monitors (as is recommended by many tutorials and monitor instruction manuals).  In the past this position has always sounded balanced in terms of frequency response, but with the last track, I was finding that the mix sounded more balanced when I sat about 50cm in front of my normal position… but as soon as I moved back, the low end of the bass line dropped out significantly.  The bass line centred around a D note (approx 73Hz), and after messing around with sine wave sweep tones, I found that there were significant nulls at that frequency in my normal listening position, and other places in the room.
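
(For anyone wanting to run the same test, a sweep tone file is simple to render… a rough Python sketch, numpy/scipy assumed, is below.)

```python
# Generate a slow logarithmic sweep from 40Hz to 120Hz, bracketing the
# 73Hz D.  Play it on the monitors and walk around the room, listening
# for the spots where the tone drops away.
import numpy as np
from scipy.io import wavfile
from scipy.signal import chirp

SR = 44100
DUR = 20.0
t = np.linspace(0.0, DUR, int(SR * DUR), endpoint=False)

sweep = 0.5 * chirp(t, f0=40.0, t1=DUR, f1=120.0, method="logarithmic")

wavfile.write("room_sweep.wav", SR, (sweep * 32767).astype(np.int16))
```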

As a test, I played a 73Hz sine wave through the monitors, and recorded clips of it at two places… one where I thought the mix had previously sounded reasonably balanced, and another where the sine wave seemed to drop off the most (both points being equidistant from the speakers).  These two clips are below (note… please make sure you’re listening on something that can play back 73Hz, or you’re not going to hear anything!)…

Null point:

Balanced point:

Despite the fact that the recordings are of exactly the same sound recorded at the same distance from the speakers, the clip recorded at the null point is roughly 6dB quieter than the clip from the other point.  I was surprised by this… 6dB is really significant, and I assume that the difference between the null point and a peak point in the room could be as much as 12dB.  If you inadvertently did your whole mix from the null point, it would potentially end up 6dB too loud around 73Hz… that’s a big difference, and would sound noticeably unbalanced when played back on other systems.  It would have been especially problematic in my case given that the null frequency and the fundamental of the key of the track were the same.

Identifying null and peak points is the first step… the next question is how to work with and around them.  In my case I changed my listening position slightly, shifting about 40cm forward of the normal position.  I knew from mixing other tracks that this spot usually sounded slightly bass heavy and a little dull at the top end (as it was slightly off-angle of the monitor tweeters).  So I had to be conscious of this when mixing, and very slightly compensate for it… mixing to be slightly lighter in the bass and crisper at the top end than what I thought was an ideal balance.  I also occasionally moved back to the normal position in line with the tweeters, but only to evaluate the high frequency content.  I also regularly checked the mix on other systems to get some additional perspective (my old monitors plus my tablet and earbuds).

In the end I achieved what I think is a nice, balanced mix through adjusting the mix position as described, and manually compensating for the deficiencies in frequency response at various positions.  This was also coupled with other techniques (which I’ll describe in detail in a future post).  It also helps enormously to ‘know’ the sound of the room you work in… to know and remember any null and peak points, and to be able to anticipate the effect they will have on different parts of a mix, and compensate and balance accordingly.  When I was only producing music in my free time, I didn’t notice the effect of room resonances as much… I think producing full time, and working in the same space regularly lets you get to know the sound of a room much more quickly, and be more conscious of any differences or anomalies.

Interestingly, I checked the wavelength of the low D on which the key was based (speed of sound divided by frequency… roughly 343m/s ÷ 73Hz), and found it was approximately 4.7 metres… which was almost exactly the length of the back wall of the room… and hence probably explained the peaks and nulls at that frequency.

Cleaning Up a Mix

there’s usually not one magic fix in order to realise a fairly abstract goal like ‘make the mix clearer’

Over the last week I’ve been finalizing the mix of a new track (Summer Wave).  In terms of sound texture, it’s the ‘thickest’ track I’ve written this year, with quite a lot of instrument and percussion layers mixed together.  The thicker the texture of a track gets, the more challenging the mixing process becomes, as you’ve got more layers of sound, and more frequencies competing to be heard in a limited space.  Hence, early in the process when I started with a rough sequenced mix, one of the first things I wanted to do was clean up the mix… to remove ‘mud’ and make the individual layers more distinct and audible.  Generally I find that in removing ‘mud’ from a mix there’s usually no one ‘silver bullet’ solution, and the improvement comes from repeated iterations of small fixes.  That was the case here, but there were two changes which each made a significant improvement in cleaning up the mix.

The rough mix sounded like this…

…not too bad for a first cut, but I wanted the individual elements to be clearer.  While doing some cleanup work on some of the individual layers, I soloed this ‘glass bottle’ track (so named because it came from a sample of a glass bottle being tapped on a tiled floor)…

I was surprised at how much low frequency content there was in this part… especially because I usually high pass filter the raw samples of sounds like this long before I get to the mixing stage.  The sample had a loud transient ‘thud’ sound at the start at approx 135Hz.  This sat right in the frequency range of both the bass line and the ‘meat’ of the bass drum, and given the ‘glass bottle’ sound had been included for its high frequency, bell-like rhythmic pattern, this sound down around 135Hz was redundant, and was probably just ‘muddying’ the sound of the bass drum and bass line.  I initially applied a high pass filter at ~300Hz, but after a few more iterations of review decided I could set it at 518Hz without detracting in any way from the part of the glass bottle sound I wanted to hear.  The soloed glass bottle sounded like this with the 518Hz high pass filter applied…

The full mix after this change, sounded like this….

Granted it’s subtle, but to me there’s a definite improvement in the ‘smoothness’ of the bass line (because the rhythmic pulsing at around 135Hz caused by the glass bottle pattern has been removed).  And as discussed at the start of the post, it’s an important step in the iterative process of cleaning up the overall sound.  (Note – to more clearly hear the ‘smoothing’ in the final full mix, download the before and after mix clips and A/B them with a low pass filter at about 200Hz.)
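
Purely as an illustration for code-minded readers (the actual filtering was done with my DAW’s EQ), the equivalent of that high pass move looks something like the below Python sketch (scipy assumed… the file names are hypothetical, and the sample is assumed to be mono 16 bit)…

```python
# High pass the 'glass bottle' sample at 518Hz to strip the low 'thud'
# that was fighting the bass drum and bass line.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

SR, raw = wavfile.read("glass_bottle.wav")   # hypothetical file name
x = raw.astype(np.float64) / 32768.0

# 4th order Butterworth high pass at 518Hz.
sos = butter(4, 518.0, btype="highpass", fs=SR, output="sos")
filtered = sosfilt(sos, x)

wavfile.write("glass_bottle_hp.wav", SR, (filtered * 32767).astype(np.int16))
```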

More towards the end of the mix process, I was reasonably happy with the overall sound of the mix on my monitors, but I felt that the synth ‘stab’ sound was not clear enough in the mix when auditioned through my tablet and earbuds.  The mix at this point sounded like this…

After soloing some of the parts, I realised that one of the background percussion parts (sourced from a sample of an aluminium coke can) had a note which played at the same time as the synth stab…

Coke can…

Synth stab…

The problem was that the fundamental of that first coke can note was at 221Hz (the A below middle C), and that same A was one of the notes in the synth stab chord.  Basically the two sounds were competing for the same frequency space.  Given that the first note of the coke can was really just a grace note to the second, higher and more prominent note, I made a 3.3dB cut at 221Hz on the coke can track, which resulted in…

And sounded like this in the context of the whole mix…

To me this made a pretty significant contribution to allowing the stab sound to sit more clearly in the mix.
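
Again just as an illustration (the actual cut was made with a normal parametric EQ), the same 3.3dB cut at 221Hz can be sketched in Python using the standard RBJ ‘cookbook’ peaking biquad… numpy/scipy assumed, file names hypothetical, sample assumed mono 16 bit…

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import lfilter

def peaking_eq(fc, gain_db, q, fs):
    """RBJ cookbook peaking biquad; returns (b, a) filter coefficients."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * fc / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

SR, raw = wavfile.read("coke_can.wav")
x = raw.astype(np.float64) / 32768.0

# A fairly narrow cut (Q of ~4) so only the 221Hz fundamental is pulled down.
b, a = peaking_eq(fc=221.0, gain_db=-3.3, q=4.0, fs=SR)
y = lfilter(b, a, x)

wavfile.write("coke_can_eq.wav", SR, (y * 32767).astype(np.int16))
```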

Again, my experience is that there’s usually not one magic fix in order to realise a fairly abstract goal like ‘make the mix clearer’.  But through successive iterations of fixes like those above, high-level overall improvements can be achieved.

Compression Basics – Compressing live percussion

Live percussion recordings tend to have a large dynamic range, and hence are a great vehicle to use to learn the basics of compression.

Compression seems an appropriate topic for my first ‘howto’ article, given that it’s the effect that I’ve learned by far the most about over the last 6 months.

I tend to use a lot of recordings of live sounds in my tracks, particularly for percussion.  Live percussion recordings tend to have a large dynamic range, and hence are a great vehicle to use to learn the basics of compression.  Because the dynamic range is so large, it requires either a single pass of a compressor with very aggressive settings, or better, successive applications of more gentle compression and limiting (as I’ll show here).

Applying compression is often a difficult technique to learn, because the differences imparted by mild compression (e.g. with a low ratio or high threshold, on a master bus for example) can be difficult to recognize unless you really know what to listen for.  However, when reducing the dynamic range more dramatically (as is usually required on live percussion samples) it’s much easier to hear the effects of the compression.

The example sound I’ll use is a recording of a steel drink can knocked against a hard table (recorded using a Rode NT3 condenser microphone). I thought it was an interesting sound and wanted to save it in my sample library so I could potentially use it in a track at some point in the future.

One important point here is that you’ll need to use good headphones or monitor speakers to properly hear the difference between the audio samples below.  It will likely be difficult to hear the differences properly on laptop, tablet, or phone speakers.

The raw sample sounds like this…

and has the following waveform…

compression-basics-1

…straight away you can hear (and see) the big difference in level between the initial transient sound of hitting the table, and the ‘tail’ sound of the can ringing (starting from about 0.015 seconds).  If you tried to use this sound in a track as-is, you’d have to keep the level of it fairly low to prevent the loud transient from clipping, and then the nice, harmonic ring of the can would probably be completely lost under other sound layers.

For these types of samples, I usually first apply some limiting to reduce the level of the transient peaks (using Waves L1).  In these cases I often find that looking at the waveform more closely helps to give you an idea of where to initially set the threshold of the limiter…

compression-basics-2

In this case, I ideally want to trim the two most prominent peaks at the level marked by the red lines.  These show a 16 bit integer value of 18,000, which equates to roughly -5.2dB (note that the waveform axis is marked in 16 bit values, although the sample itself is 24 bit).  Auditioning L1 on the sample, I was actually able to limit down to a -8.2dB threshold without adversely affecting the sound.  Also, because we’re just limiting peaks in this case, which rise and fall very quickly, I’m using a very short release value.  Ultimately I used the following settings in L1…

compression-basics-3

… and it resulted in the following sound and waveform changes…

compression-basics-4
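
(As an aside, the conversion from the waveform’s 16 bit values to dB used here is straightforward… e.g. as a quick Python helper…)

```python
# The arithmetic behind these threshold guesses: a 16 bit sample value
# converts to dB relative to full scale as 20 * log10(value / 32768).
import math

def sample_to_dbfs(value, full_scale=32768):
    return 20.0 * math.log10(abs(value) / full_scale)

print(round(sample_to_dbfs(18000), 1))  # -5.2, the red line level above
print(round(sample_to_dbfs(6000), 1))   # -14.7, used in the next step
```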

Zooming in on the waveform again, what I want to achieve is to further reduce the difference between the peaks and the ring of the sound… visually, to try and ‘pull’ the peaks more towards the red lines.  Using limiting again would be too harsh, and would probably take the dynamics and ‘impact’ out of the sound… hence I use a compressor (Waves C1).  Again using the red lines as a guide for the initial threshold setting, these are at a 16 bit value of 6000, which equates to approx -14.7dB.  I want to reduce the amount these peaks rise above the threshold by about half or a bit more, so I would guess at a compression ratio of around 2:1 to 2.5:1.

compression-basics-5

As with the limiter, I’m trying to just ‘pull down’ transient peaks here whose duration is very short (a handful of samples at 44.1kHz), so I use short attack and release settings in C1.  From previous testing, I’ve found the quickest attack and release settings you can use in C1 without it introducing undesirable artifacts (‘clicking’ sounds as the compressor engages) are about 0.04ms and 30ms respectively.

Ultimately I used a slightly higher threshold than the estimated -14.7dB.  The reason for this is that C1 has a fairly soft ‘knee’ (i.e. it starts introducing compression gently as the sound approaches the threshold level).  I looked at the waveform to get a rough idea of the initial threshold and ratio settings to use, but these need to be auditioned and finalized by ear.  I settled on the below settings, which gave a nice balance of still having some dynamics and ‘impact’, but allowing the ‘ring’ part of the sound to be closer in level to the transient (it showed around 3dB of gain reduction on the meter in C1).  The final step was to add 1.9dB of makeup gain, which audibly level-matched the compressed sound with the original.

compression-basics-6
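
For readers who like to see the mechanics spelled out in code, below is a generic feed-forward compressor sketch in Python (numpy assumed)… it illustrates how threshold, ratio, attack, and release interact, though to be clear it’s not the algorithm C1 actually uses…

```python
import numpy as np

def compress(x, fs, threshold_db=-12.0, ratio=2.0,
             attack_ms=0.04, release_ms=30.0, makeup_db=0.0):
    """x is a float signal in the range -1..1; returns the compressed signal."""
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.zeros_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        # Envelope follower: rises quickly (attack), falls slowly (release).
        coef = atk if level > env else rel
        env = coef * env + (1.0 - coef) * level
        env_db = 20.0 * np.log10(max(env, 1e-9))
        # Gain computer: anything over the threshold is reduced by the ratio.
        over = env_db - threshold_db
        gain_db = -over * (1.0 - 1.0 / ratio) if over > 0.0 else 0.0
        out[i] = s * 10.0 ** ((gain_db + makeup_db) / 20.0)
    return out
```

The attack and release defaults mirror the 0.04ms and 30ms settings mentioned above… with a 2:1 ratio, a peak 6dB over the threshold comes out only 3dB over it.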

If I was using this sample immediately in a track I probably would have gone for slightly more aggressive settings (less threshold or more ratio), but given it’s to be put in my sample library, I erred towards more conservative settings to make the sound more generally useable.  The resulting sound and waveform are below…

compression-basics-7

At this point I normalized the level of the sample up to -3dB.  The final step I usually take with these kinds of samples is to do one more application of L1, just to trim the highest peaks, but without changing the sound of the sample.  This is to try and reduce the transients as much as possible, which makes the sound easier to mix into a track without master bus clipping (I’ll discuss this in more detail in a later post).  Looking at the waveform again to give me a guide for the initial settings, I want to try and contain the peaks to within the red lines (a 16 bit value of 20,000, ~= -4dB).  I used a threshold of -4dB in L1…

compression-basics-8

compression-basics-9

…which trimmed the peaks, but without noticeably changing the sound.

With these types of percussive samples, the last step I take is usually to use a gate to fade out the tail of the sample.  The appropriate gate settings are best judged by ear, and I settled on those below, which I thought gave a nice balance between allowing some of the ‘ring’ of the can to sustain, and fading out the hiss of the noise floor (I used the Waves C1 Gate for this)…

compression-basics-10
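
In the same illustrative spirit as the compressor sketch above, a bare-bones gate (again a Python/numpy sketch, not the C1 gate itself) just fades the signal out once its envelope falls below the threshold…

```python
import numpy as np

def gate(x, fs, threshold_db=-45.0, release_ms=80.0):
    """x is a float signal in the range -1..1; returns the gated signal."""
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    thr = 10.0 ** (threshold_db / 20.0)
    env, g = 0.0, 0.0
    out = np.zeros_like(x)
    for i, s in enumerate(x):
        env = max(abs(s), env * rel)        # peak envelope with decay
        target = 1.0 if env > thr else 0.0  # gate open above the threshold
        g = rel * g + (1.0 - rel) * target  # smooth the gain so it fades
        out[i] = s * g
    return out
```

The smoothed gain is what lets some of the can’s ring sustain while the noise floor fades out, rather than cutting off abruptly with a click.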

If you compare the initial raw sample against the final one below, the final one has a lot more evenness between the transient and the ‘ringing’ tail of the sound… the whole of the sound can be heard more clearly, and this will make it far easier to mix into a track along with other instruments.  It has a ‘stronger’ sound than the raw sample, but peaks at a lower level.

For readers who are a bit unsure of the appropriate applications of compression and what settings to use (as I once was), I’d encourage you to try the above steps with your own live percussion samples.  For me it was a really good way to practically understand the effects of compression, and to be able to clearly hear the results.