Bus Compression 1

A loud but musical sounding track is achieved through a layered approach to compression


As a music producer today, it’s almost impossible to create a track end to end without being at least conscious of the pressure to produce ‘loud’ mixes.  Working on music full-time during this year has given me time to learn techniques for creating appropriately ‘loud’ mixes, and also to compare the loudness of current tracks against stuff from when I first started writing electronic music in the mid-90s.  When I started to make these kinds of comparisons I was quite shocked at just how much louder music has become.  Granted, two things have changed in that time which have been a catalyst for the general increase in loudness…

  1. Music technology has improved a lot… not only have professional-quality tools become accessible to producers at all levels, but there is now a whole market segment dedicated to tools specifically for dealing with loudness (mastering plug-ins etc…)
  2. Electronic music has moved much more into the mainstream, and is being produced by artists with big labels, money, and access to first-rate facilities and mastering engineers behind them (as such, production quality has generally improved a lot in this time)

Still, it’s not unusual for me to find an average increase of 6-7dB in tracks I buy today compared to stuff from 15-20 years ago.

Generally speaking I’m a fan of having some dynamic in music.  I’ll admit I’m shocked when I sometimes hear recent commercial releases with blatant, harsh clipping distortion in them… even coming from huge artists and labels.  But at the same time I recognize the need (especially in electronic dance music) to have your track’s levels on par with other tracks of similar styles.  As an amateur DJ, I know what a pain in the arse it can be trying to mix in a track which is 3-4dB lower than the one you’re currently playing, especially if you’re already close to the limit of the levels on your mixer.  And as a producer, the last thing you want is for DJs to pass on your great track simply because it’s not loud enough.

With my own music currently I tend to err a bit on the cautious side with levels… I usually prefer to back off the bus limiter level by a dB or two, and keep a little of the dynamic.  But at the same time I often question myself as to whether this decision means a prospective DJ or label might pass on my track, simply because it doesn’t stand out (in terms of level) as much as others.

Years ago I used to naively think that competitive levels could be achieved by simply slapping Waves L1 (or L3, or similar) onto the master bus, and that you’d automatically achieve a strong, clean sound.  During this year, I’ve realized it’s not nearly that simple (or maybe it is, but I’m just not using the right plug-in 🙂 ).  One of the things I’ve come to understand the most is that getting competitive levels in electronic music requires conscious attention to levels and compression throughout the whole compositional process.  I think I saw this idea best summed up whilst recently reading an article by the Chicago Mastering Service… referring to a well-mastered commercial example, they said…

“a very loud but musical sounding master was achieved through a layered approach to compression that probably began during tracking, continued through the mixing, and was finished off in mastering”

There’s a reason why the channel strips of analogue mixing consoles usually include a compression section… if you want your mix to sound clear and competitively loud, and respond well to master bus compression and limiting, you often need to apply some amount of compression individually to the main elements in your track… and this needs to start right from the early stages of sequencing and mixing.
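To make the ‘layering’ idea concrete, here’s a minimal Python/numpy sketch of the concept… a very basic feedforward compressor applied gently to individual elements first, and then again (a bit harder) on the summed bus.  The compressor design, the stand-in sounds, and all the threshold/ratio settings are purely illustrative assumptions on my part, not a recipe…

```python
import numpy as np

def compress(x, fs, threshold_db=-18.0, ratio=3.0, attack_ms=10.0, release_ms=100.0):
    """Very basic feedforward compressor: peak envelope follower + static gain curve."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.zeros_like(x)
    e = 0.0
    for n, s in enumerate(np.abs(x)):
        a = a_att if s > e else a_rel         # envelope rises faster than it falls
        e = a * e + (1.0 - a) * s
        env[n] = e
    level_db = 20.0 * np.log10(np.maximum(env, 1e-9))
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)  # reduce only the level above threshold
    return x * 10.0 ** (gain_db / 20.0)

fs = 44100
t = np.arange(fs * 2) / fs
kick = np.sin(2 * np.pi * 55 * t) * (np.sin(2 * np.pi * 2 * t) > 0.95)  # crude stand-ins
bass = 0.5 * np.sin(2 * np.pi * 73 * t)

# layer 1: gentle compression on the individual elements...
kick_c = compress(kick, fs, threshold_db=-12.0, ratio=2.0)
bass_c = compress(bass, fs, threshold_db=-15.0, ratio=2.5)

# layer 2: ...then firmer compression on the summed bus
bus = compress(0.5 * (kick_c + bass_c), fs, threshold_db=-6.0, ratio=4.0)
```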

Again, this is one of the things that trial and error has taught me a lot about this year… this post is just an introduction to the topic, but I will follow up with some real-world examples of what I think are workable techniques for achieving competitive loudness.

EQ Automation on Incidental Effects

using EQ automation to fade incidentals in and out goes a long way to achieving a natural sound

Creating more realistic incidental effects is one of the most important things I’ve learnt over this period of writing music full time… and when I say ‘incidental effects’ I’m referring to background effects which enhance depth, or create additional emotion or tension in music.  In electronic dance music these are also often referred to as swells, drops, sweeps etc… and at a fundamental level can be implemented using faded-in white noise (for a swell) and a crash cymbal (for a drop).

These kinds of sounds are interesting because the listener doesn’t usually explicitly notice them, but will notice their absence, or notice them in a bad way if they’re used inappropriately or sound unnatural.  Having appropriate, smooth, and natural sounding incidentals is a key factor in making your music sound like it was professionally produced rather than sounding like it was produced in a bedroom.

At a basic level you’ll usually want your incidental sounds to sound relatively natural, and using EQ automation to fade incidentals in and out goes a long way to achieving this.  In the physical world, sounds that are further away from us are perceived as having a rolloff of low and particularly high frequencies, as compared to the same sound emanating from a closer position.  If we take the aforementioned white noise swell and crash drop as an example… a basic sequencing of this would fade in the white noise (using volume automation), and let the crash hit and fade out naturally at the peak point.  The following clip gives an example of this over a rough loop idea…

This sounds OK, but also somewhat obvious… i.e. the listener will be subconsciously aware of the white noise right from the point where it starts.  Because these types of effects have been used so much, and for so long, in electronic dance styles, used as above you run the risk of them sounding predictable and uninteresting to the listener (something along the lines of ‘ah, there’s the white noise, so I guess a peak point is coming!’).  By using an automated low pass or high shelving filter along with volume to fade the white noise in, it sounds more like the natural physical world, and also kind of ‘sneaks up’ on the listener… i.e. it comes in much less obviously, and hence the listener doesn’t overtly notice the sound so much, but is still drawn into the effect.  Using an upward-swept high shelf filter on the white noise (plus a little downward sweep on the crash) sounds like this…

To me it’s a subtle, but also significant difference, and a first step towards getting more realistic incidentals, and an overall more professional sound.  Increased realism could then be achieved with panning, and additionally automating your reverb sends, to have more reverb when the sound is ‘further away’.
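For anyone who wants to experiment with this outside their DAW, below is a minimal Python sketch of the swept-filter swell idea… white noise run through a one-pole low pass whose cutoff is automated upward while the volume fades in.  The duration, sweep range, and filename are all just illustrative assumptions…

```python
import numpy as np
from scipy.io import wavfile

fs = 44100
dur = 4.0                                   # length of the swell in seconds
n = int(fs * dur)
noise = np.random.uniform(-1.0, 1.0, n)

# cutoff sweeps upward, e.g. 200 Hz -> 16 kHz; an exponential sweep tends to
# sound more natural than a linear one, since we hear brightness logarithmically
cutoff = 200.0 * (16000.0 / 200.0) ** (np.arange(n) / n)
fade = np.linspace(0.0, 1.0, n)             # the volume automation

# time-varying one-pole low pass: coefficient recomputed for every sample
a = 1.0 - np.exp(-2.0 * np.pi * cutoff / fs)
out = np.zeros(n)
y = 0.0
for i in range(n):
    y += a[i] * (noise[i] - y)
    out[i] = y
out *= fade

wavfile.write("swell.wav", fs, (out * 32767).astype(np.int16))
```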

As mentioned I’ve picked up many small techniques for improving incidentals during this year, so this will be the first of several posts on the subject.

Paralyzed by Choice

Imposing artificial limits to spark creativity

As mentioned in my last post, I’m currently in the middle of coming up with ideas for a couple of new tracks.  If I’m trying to create a melodic or percussive pattern, there are an infinite number of combinations of properties of sound which could make up that pattern… i.e. varying rhythm, length, envelope, pitch, density, etc… and this is without even considering the sound’s timbre.  To that point, when producing electronic music ‘in the box’, ‘where to start’ can be hard to decide.  I think the number of instrument plug-ins I own is very conservative compared to friends who are producing, and other artists I read about in magazines.  Yet, if I want to start making a lead or bass sound, I’ve got over 10 virtual synths to choose from, and that doesn’t include my lone hardware synth, nor the ones that came with Reaper.  It’s easy to be so overwhelmed with choice that you don’t even know where to start.

This problem is not new to electronic music… it’s something that musicians and artists have faced ever since there have been musicians and artists.  For lyric and song writers, one remedy for this situation is the ‘cut-up technique’… apparently used by numerous famous musicians including David Bowie and Kurt Cobain.  When a song writer can’t find a starting point, they take a newspaper or similar, cut out a bunch of random words, mix them up, and write a song using only those words.  Imposing an artificial limitation, and then forcing yourself to work within that limitation, is a proven way to ignite inspiration.

Over the last 6 months, I’ve found that equivalent techniques of imposing some kind of artificial limit on your choices can really help to get things moving when you’re stuck for ideas.  For example, in the aforementioned case of trying to come up with a bass or lead line, I’ll pick just one instrument, and resolve to make the part using only that instrument.

Similarly, if I’m looking for a percussive sound… say a hi-hat sample… I’ve got at least 6 or 7 sample packs which contain decent hi-hat sounds… to audition all of them could potentially mean cycling through close to 1000 samples.  What I’ll often do is restrict myself to one sample pack, and decide that ‘I have to find a decent sound within just this pack’.

In current music production it’s very easy to get paralyzed by an overabundance of choice.  Sometimes artificially limiting this choice can be a good antidote.

Waiting for Inspiration

You need patience, and the confidence to know that eventually the really good idea will come

Having just completed one track, and now working to come up with ideas for the next one, I’ve shifted from the more methodical, detailed, and predictable discipline of mixing, to the far more creative and abstract process of writing.  Having discipline and persistence with the writing part (especially when it feels like no ideas are coming) has been one of the more difficult aspects of music production I’ve had to adjust to during this year.  I think a big reason for this is that it’s quite far removed from my usual work as a software engineer.  With software engineering, generally speaking, getting results is simply a product of time… unless you’re working in really cutting-edge technologies or research, if you put in a full day’s work you can expect to get a proportionate amount achieved (and potentially the general feeling of satisfaction stemming from that).  Hence it was a very different experience for me back in the early months of this year, the first time I committed to a whole day of writing and came out with absolutely nothing at the end!  Even more so when I spent the better part of a week going a fair way down the road of putting together a track, only to decide it wasn’t going anywhere, and shelve it.  This necessitated a big adjustment to my approach to, and expectations of, work, and for me was one of the toughest parts of starting to write music full time… it required a lot of persistence to overcome the disappointment of spending time on something and feeling like I wasn’t achieving anything.

One thing that helped was being reminded that this is a natural part of the creative process, and pretty much everyone involved in artistic pursuits experiences it from time to time.  I was told about a quote from film director David Lynch, where he likened getting inspiration for films to ice fishing.  Something along the lines of… much like ice fishing where you have to wait by a hole in the ice, sometimes for a long time, for a fish to come along, inspiration for a really good idea can’t be rushed.  You need patience, and the confidence to know that eventually the really good idea will come.  Similarly with music, you might have to sit there for a day (or two, or a week) auditioning combinations of sounds before a good idea comes along which you can turn into a track.   The important thing is to have patience and persistence, and accept that you’ll probably come up with 10 average ideas before a really good one.

Another reassuring thing is that you never know when shelved or seemingly average ideas might be resurrected in the future.  In my case, the aforementioned idea I shelved after a full week of work was transformed when combined with a different bass line a couple of months later, and went on to become the basis for a track I was quite happy with.

Using Pink Noise as a Reference when Mixing

play the mix of the track over pink noise, to give an even reference level against which to assess the level of individual elements

I said in my last post that I’d write about some additional techniques I use to balance a mix.  One of these is to play the mix of the track over pink noise, to give an even reference level against which to assess the level of individual elements, and to try and get an impression of the balance of the elements independent of any room resonances or peaks or troughs in the monitor’s frequency response.  To explain…

A while back I read an interesting article by Eddie Bazil in Sound on Sound, where he discussed using pink noise to establish basic levels for each element when beginning a mix.  This got me thinking that the same technique could be used at the end of (and periodically during) the mix process, as a kind of sanity check to make sure the levels of the main elements are evenly balanced.

Hence, when mixing the last couple of tracks, I used exactly this technique, and periodically played the mix over pink noise.  What you’re looking for when you do this is to set the level of the pink noise quite high, so that the main elements of the track are just ‘poking out’ above it.  You then want to make sure the amount that each element is ‘poking out’ is more or less even.
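If you wanted to set this check up offline rather than inside the DAW, here’s a minimal Python sketch… it generates approximate pink noise by spectral shaping, then lays it under a rendered mix at a level derived from the mix’s RMS.  The filename and the 0.8 scaling factor are just illustrative assumptions… in practice you’d still fine-tune the noise level by ear…

```python
import numpy as np
from scipy.io import wavfile

def pink_noise(n, fs=44100):
    """Approximate pink (1/f power) noise via FFT spectral shaping."""
    spectrum = np.fft.rfft(np.random.randn(n))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    freqs[0] = freqs[1]                  # avoid divide-by-zero at DC
    spectrum /= np.sqrt(freqs)           # 1/f power = 1/sqrt(f) amplitude
    pink = np.fft.irfft(spectrum, n)
    return pink / np.max(np.abs(pink))

fs, mix = wavfile.read("mix.wav")        # hypothetical rendered mix (16-bit)
mix = mix.astype(np.float64) / 32768.0
if mix.ndim == 2:
    mix = mix.mean(axis=1)               # mono is fine for this check

noise = pink_noise(len(mix), fs)
rms = lambda x: np.sqrt(np.mean(x ** 2))
# scale the noise so it sits just below the mix's overall RMS level
check = mix + noise * (rms(mix) / rms(noise)) * 0.8
check /= np.max(np.abs(check))           # normalize to avoid clipping

wavfile.write("mix_over_pink.wav", fs, (check * 32767).astype(np.int16))
```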

The below clip is of the track Summer Wave played over pink noise as described.  The bass drum, bass line, snare/clap, and hi-hat all sit above the level of the pink noise by a relatively even amount…

(Actually, on listening to this again, if I redid the mix I’d probably bring the hi-hat down, and the snare/clap up, just slightly… but this highlights an important point… you want to use this technique as a ballpark guide only, and still let creative and subjective opinion override it.)

The technique also gives you a way to check whether various elements have been compressed enough… e.g. if only the attack of the snare drum was audible, and the decay was lost under the pink noise, you’d probably want to look at applying a bit more compression to the snare.  Also, if you’re mixing for radio and similar mediums, this technique somewhat simulates how listeners would hear the track in a very noisy environment, and again gives you a way to check that all the key elements are audible in those types of situations.

The other benefit of checking the mix this way is that it gives you a point of reference which is less affected by room resonances, or by the frequency response of your monitors.  That is… pink noise played through your monitors has the monitor frequency response and any room resonances imparted on it, just like the track does… hence if you assess the level of each element against the pink noise, rather than against the other elements, you get a way to check the mix balance that is largely independent of any anomalies of room or speaker frequency response.  This can be difficult, as it’s natural to assess the level of an element against the other elements… you should instead focus on the amount each element ‘sticks out’ over the pink noise.

When used in combination with listening through multiple systems, and adjusting your listening position (if required) as discussed in my last post, this technique gives you an additional, useful way to check your mix balance.

Room Resonances

Being able to identify room resonances, and then work with and around them, is key to producing balanced mixes.

Most of us working in project studios are mixing and producing in environments which are far from acoustically perfect, and having to deal with frequency peaks and nulls in different parts of a room is an unfortunate but unavoidable reality.  Being able to identify room resonances, and then work with and around them, is key to producing balanced mixes.

I faced room resonance issues when mixing my most recent track.  My studio room is far from acoustically ideal, with concrete walls (although covered on 3 sides) and almost-square dimensions (apart from a corridor at the back, forming an overall ‘L’ shape).  My normal sitting position when mixing is centred in the room, and forms an equilateral triangle with the monitors (as is recommended by many tutorials and monitor instruction manuals).  In the past this position has always sounded balanced in terms of frequency response, but with the last track I was finding that the mix sounded more balanced when I sat about 50cm in front of my normal position… as soon as I moved back, the low end of the bass line dropped out significantly.  The bass line centred around a D note (approx 73Hz), and after messing around with sine wave sweep tones, I found that there were significant nulls at that frequency at my normal listening position, and at other places in the room.

As a test, I played a 73Hz sine wave through the monitors, and recorded clips of it at two places… one where I thought the mix had previously sounded reasonably balanced, and another where the sine wave seemed to drop off the most (both points being equidistant from the speakers).  These two clips are below (note… please make sure you’re listening on something that can play back 73Hz, or you’re not going to hear anything!)…

Null point:

Balanced point:

Despite the fact that the recordings are of exactly the same sound recorded at the same distance from the speakers, the clip recorded at the null point is roughly 6dB quieter than the clip from the other point.  I was surprised by this… 6dB is really significant, and I assume that the difference between the null point and a peak point in the room could be as much as 12dB.  If you inadvertently did your whole mix from the null point, it would potentially end up 6dB too loud around 73Hz… that’s a big difference, and would sound noticeably unbalanced when played back on other systems.  It would have been especially problematic in my case, given that the null frequency and the fundamental of the key of the track were the same.
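If you want to run the same test in your own room, the tone is easy to generate… here’s a minimal Python sketch that writes a 73Hz sine (with short fades to avoid clicks on start and stop) to a WAV file.  The duration and filename are arbitrary…

```python
import numpy as np
from scipy.io import wavfile

fs = 44100
dur = 10.0                                    # long enough to walk around the room
t = np.arange(int(fs * dur)) / fs
tone = 0.5 * np.sin(2 * np.pi * 73.0 * t)     # D2 is ~73.4 Hz

ramp = int(fs * 0.05)                         # 50ms fade in/out to avoid clicks
env = np.ones_like(tone)
env[:ramp] = np.linspace(0.0, 1.0, ramp)
env[-ramp:] = np.linspace(1.0, 0.0, ramp)

wavfile.write("test_73hz.wav", fs, (tone * env * 32767).astype(np.int16))
```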

Identifying null and peak points is the first step, and the next question is how to work with or around them.  In my case I changed my listening position slightly, shifting about 40cm forward of the normal position.  I knew from mixing other tracks that this spot usually sounded slightly bass heavy and a little dull at the top end (as it was slightly off-angle of the monitor tweeters).  So I had to be conscious of this when mixing, and very slightly compensate for it… mixing to be slightly lighter in the bass and crisper at the top end than what I thought was the ideal balance.  I also occasionally moved back to the normal position in line with the tweeters, but only to evaluate the high frequency content.  And I regularly checked the mix on other systems to get some additional perspective (my old monitors, plus my tablet and earbuds).

In the end I achieved what I think is a nice, balanced mix through adjusting the mix position as described, and manually compensating for the deficiencies in frequency response at various positions.  This was also coupled with other techniques (which I’ll describe in detail in a future post).  It also helps enormously to ‘know’ the sound of the room you work in… to know and remember any null and peak points, and to be able to anticipate the effect they will have on different parts of a mix, and compensate and balance accordingly.  When I was only producing music in my free time, I didn’t notice the effect of room resonances as much… I think producing full time, and working in the same space regularly lets you get to know the sound of a room much more quickly, and be more conscious of any differences or anomalies.

Interestingly, I checked the wavelength of the low D on which the key was based, and found it was about 4.7 metres (taking the speed of sound as roughly 343m/s)… which was almost exactly the length of the back wall of the room… and hence probably explained the peaks and nulls at that frequency.
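The arithmetic behind this is easy to sketch… the axial modes between two parallel walls a distance L apart fall at f_n = n·c/(2L).  Assuming a back wall of around 4.7 metres (my estimate, based on the wavelength match), the second axial mode lands almost exactly on 73Hz…

```python
c = 343.0                              # speed of sound in air (m/s, ~20°C)
f = 73.4                               # D2, the track's fundamental
print(f"wavelength: {c / f:.2f} m")    # ~4.67 m

# axial room modes between two parallel walls a distance L apart: f_n = n*c/(2*L)
L = 4.7                                # assumed back-wall length in metres
for n in range(1, 4):
    print(f"mode {n}: {n * c / (2 * L):.1f} Hz")
# -> mode 1: 36.5 Hz, mode 2: 73.0 Hz, mode 3: 109.5 Hz
```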

Cleaning Up a Mix

there’s usually not one magic fix in order to realise a fairly abstract goal like ‘make the mix clearer’

Over the last week I’ve been finalizing the mix of a new track (Summer Wave).  In terms of sound texture, it’s the ‘thickest’ track I’ve written this year, with quite a lot of instrument and percussion layers mixed together.  The thicker the texture of a track gets, the more challenging the mixing process becomes, as you’ve got more layers of sound, and more frequencies, competing to be heard in a limited space.  Hence, early in the process, when I started with a rough sequenced mix, one of the first things I wanted to do was clean up the mix… to remove ‘mud’ and make the individual layers more distinct and audible.  Generally I find that in removing ‘mud’ from a mix there’s usually no one ‘silver bullet’ solution, and the improvement comes from repeated iterations of small fixes.  That was the case here, but there were two changes which each made a significant improvement in cleaning up the mix.

The rough mix sounded like this…

…not too bad for a first cut, but I wanted the individual elements to be clearer.  While doing some cleanup work on some of the individual layers, I soloed this ‘glass bottle’ track (so named because it came from a sample of a glass bottle being tapped on a tiled floor)…

I was surprised at how much low frequency content there was in this part… especially because I usually high-pass filter the raw samples of sounds like this long before I get to the mixing stage.  The sample had a loud transient ‘thud’ sound at the start, at approx 135Hz.  This sat right in the frequency range of both the bass line and the ‘meat’ of the bass drum, and given the ‘glass bottle’ sound had been included for its high frequency, bell-like rhythmic pattern, this sound down around 135Hz was redundant, and was probably just ‘muddying’ the sound of the bass drum and bass line.  I initially applied a high pass filter at ~300Hz, but after a few more iterations of review decided I could set it at 518Hz without detracting in any way from the part of the glass bottle sound I wanted to hear.  The soloed glass bottle sounded like this with the 518Hz high pass filter applied…

The full mix after this change, sounded like this….

Granted it’s subtle, but to me there’s a definite improvement in the ‘smoothness’ of the bass line (because the rhythmic pulsing at around 135Hz caused by the glass bottle pattern has been removed).  And, as discussed at the start of the post, it’s a step in the iterative process of cleaning up the overall sound.  (Note – to more clearly hear the ‘smoothing’ in the final full mix, download the before and after mix clips and A/B them with a low pass filter at about 200Hz.)
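For illustration, that kind of surgical high pass is easy to prototype outside the DAW too… here’s a minimal Python/scipy sketch.  The filename, and the 4th-order Butterworth slope, are just illustrative assumptions (I haven’t said anything above about which filter type or slope was actually used)…

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

fs, x = wavfile.read("glass_bottle.wav")     # hypothetical sample file (16-bit)
x = x.astype(np.float64) / 32768.0

# 518 Hz high pass: removes the ~135 Hz 'thud' but keeps the bell-like top
sos = butter(4, 518.0, btype="highpass", fs=fs, output="sos")
y = sosfilt(sos, x, axis=0)                  # axis=0 also handles stereo files

wavfile.write("glass_bottle_hp.wav", fs, (np.clip(y, -1, 1) * 32767).astype(np.int16))
```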

Towards the end of the mix process, I was reasonably happy with the overall sound of the mix on my monitors, but I felt that the synth ‘stab’ sound was not clear enough in the mix when auditioned through my tablet and earbuds.  The mix at this point sounded like this…

After soloing some of the parts, I realised that one of the background percussion parts (sourced from a sample of an aluminium coke can) had a note which played at the same time as the synth stab…

Coke can…

Synth stab…

The problem was that the fundamental of that first coke can note was at 221Hz (essentially the A below middle C), and that same A was one of the notes in the synth stab chord.  Basically the two sounds were competing for the same frequency space.  Given that the first note of the coke can was really just a grace note to the second, higher and more prominent note, I made a 3.3dB cut at 221Hz on the coke can track, which resulted in…

And sounded like this in the context of the whole mix…

To me this made a pretty significant contribution to allowing the stab sound to sit more clearly in the mix.
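If you wanted to reproduce that kind of narrow cut in code, here’s a minimal Python sketch using the standard RBJ ‘Audio EQ Cookbook’ peaking filter… the Q of 4 (fairly narrow) and the filename are illustrative assumptions on my part…

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import lfilter

def peaking_eq(f0, gain_db, q, fs):
    """RBJ 'Audio EQ Cookbook' peaking filter coefficients."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs, x = wavfile.read("coke_can.wav")   # hypothetical sample file (16-bit)
x = x.astype(np.float64) / 32768.0

# narrow 3.3 dB cut at 221 Hz, to make room for the synth stab's A
b, a = peaking_eq(221.0, -3.3, q=4.0, fs=fs)
y = lfilter(b, a, x, axis=0)

wavfile.write("coke_can_eq.wav", fs, (np.clip(y, -1, 1) * 32767).astype(np.int16))
```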

Again, my experience is that there’s usually not one magic fix for realising a fairly abstract goal like ‘make the mix clearer’.  But through successive iterations of small fixes like those above, significant overall improvements can be achieved.