Adjusting Effect Levels for Mix/Bus Compression

I spent a few hours yesterday doing final bus compression for the track I’m currently working on. Approaches to and techniques for bus compression were one of the things I learnt most about during 2016, and yesterday I had a kind-of ‘lightbulb’ moment, which will hopefully lead to better results in this area going forward.

I’m a ‘reluctant participant’ in the whole competitive levels/loudness wars thing. Fundamentally I like the groove, emotion, impact, etc which a decent dynamic range can impart to a track. But at the same time I understand the need to achieve an overall loudness level that’s similar to other tracks in the same genre (especially because not doing so simply makes your music difficult for DJs to mix).

In the past, I’d always equated greater amounts of bus compression with a loss in clarity. To some extent this is true, as compression narrows the dynamic range of the sound and hence simply reduces the ‘depth’ of volume variation available. So I’d always found that compressing the entire mix meant a compromise: getting closer to competitive levels while sacrificing some detail and clarity.

About halfway through last year I had a mini breakthrough of sorts, when I realised certain settings on bus compressor plugins can have a big effect on the quality of the resulting audio. Specifically I usually use Cytomic’s ‘The Glue’ as the first stage in the bus compression chain, and I found that simply setting the oversampling rate to the recommended or higher levels (4x or more when auditioning) gave far clearer audio quality than the default lower settings.

For my current track I had spent a bit longer than usual honing the reverb plugin settings, and fine tuning the reverb send levels. After this I was really happy with the result… it had a nice balance of good depth/space without sounding too ‘washed out’, and seemed to translate well to several different sets of speakers and headphones. But yesterday it was a bit disappointing to have some of this clarity and balance lost when I started pushing the final mix through bus compression. When I listened closely it wasn’t so much a by-product of compression, but more that the levels of the reverb and delay effects were now relatively stronger. When I thought about it, the reasoning was obvious… I’d squashed down the top 3-6 dB of the volume range, so naturally sounds down at -15 to -20dB (like the reverb layer) had been effectively pushed up by a similar amount.

I usually do final bus compression in a separate Reaper project from the mixing one, using just the final stereo mixdown as a source track (my aging PC can’t handle multiple reverb plugins and CPU hungry bus compression at the same time). So I went back to the mix project and rendered another version of the stereo mix with the reverbs and main delays turned down by around 1.5dB. Running this new version through the same compression chain resulted in a much clearer mix… it sounded a lot more like the original stereo mixdown… just louder (which is exactly what I was trying to achieve).
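The arithmetic behind this is simple enough to sketch. The numbers below are hypothetical, purely to illustrate the effect, not taken from the actual project:

```python
import math

def db_to_gain(db):
    """Convert a decibel value to a linear amplitude multiplier."""
    return 10 ** (db / 20)

def gain_to_db(gain):
    """Convert a linear amplitude multiplier back to decibels."""
    return 20 * math.log10(gain)

# If the compressor squashes the top ~4 dB of the range and make-up gain
# restores the peaks, everything quieter effectively comes up by ~4 dB...
reverb_level_db = -18                  # reverb layer sitting around -18 dB
after_comp_db = reverb_level_db + 4    # now effectively around -14 dB

# ...so trimming the reverb/delay sends by ~1.5 dB before rendering
# (a linear gain of roughly 0.84) claws back some of that lift.
send_trim_gain = db_to_gain(-1.5)
```

Nothing exotic, but it explains why a mix that sounded balanced before compression can suddenly seem ‘washed out’ after it.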

Anyway, in hindsight I’m a bit surprised it’s taken me this long to figure out this technique (the basic point of compression after all is to reduce dynamic range), but I’m going to experiment a bit more, and hopefully end up with a much cleaner, clearer final mix than for past tracks.

Another way to potentially prevent the issue could be to ‘mix into’ a compressor or limiter during writing/sequencing/mixing. This technique is a bit unorthodox historically, but seems to have gained popularity in the last few years (I seem to have read a lot of articles recently where people discuss working this way). The idea is to put a limiter/compressor on the master bus right from the early stages of writing (using generic/default settings close to what you’d usually use for final bus compression). This way you’re always evaluating level balance with compression already ‘baked in’. I don’t usually use this technique because I like to keep a clear separation between the mixing and final ‘mastering’ stages… but based on yesterday’s experience I can definitely see the merits, so may try it in a future track.

Starting In The Middle

I read a good piece on musicradar the other day about approaches to arranging (or what I usually refer to as ‘sequencing’).  A couple of the tips in that article really resonated with me (namely 1 – ‘start in the middle’ and 5 – ‘draw it out’) because they were things that I ‘discovered’ myself during my work in 2016.  The most useful of those was the idea of ‘starting in the middle’ so that’s what I’ll discuss today.

Sequencing is one of the things that I find more difficult in the production process.  At the point of starting sequencing you’ll usually have a bunch of track elements or layers you’re happy with, and you need to get from that point to having a rough form of a track, making sure that the sequence remains interesting throughout, and showcases the elements or layers as you’d intended.  This is a pretty big step, and the path to get there is ambiguous… in fact there’s not one but many paths that could eventuate… i.e. there’s an almost infinite number of possible sequences which could turn out good.  I think because of this I used to sometimes experience hesitation at starting out (similar to what I wrote about in my procrastination post).

For some reason I always used to approach sequencing in a linear/serial way… i.e. starting from the absolute first beat in the intro, and working through the sequence to the end.  But I found this was difficult and often led to uninteresting sequences (like the first 2 minutes of the track ending up just predictably introducing a new element every 16 beats).  At some point during 2016 I decided I needed a new approach to this, and that’s when I found the same ‘start in the middle’ technique described in the article…

By the time you start sequencing, you’ll likely have been working on the individual track elements for a reasonable amount of time… hence you’ll have a good idea of which elements/layers sound good together, and which combinations and build-ups of layers you want to showcase as the main theme of the track… so since that should be clear in your mind, start by sequencing that part… i.e. create the main build/peak part of the track first. You might also have other ideas for ‘precursor’ builds to the main build/peak point, so put those in the sequence too. Once you have these main ‘points of interest’ in the sequence, you can more easily ‘fill in the gaps’ between them (more easily than trying to build the sequence start to end). Most DAW platforms are capable of inserting and deleting time measures (preserving automation lanes etc…) if you need to extend or contract the gaps between the main points, so there shouldn’t be any technical limitations from working this way.

I now find that the intro and outro are usually actually the last parts of the sequence that I make… and they often don’t require too much attention, given that in club music your main goal for these parts is usually not to make them interesting, but to make them easy for a DJ to mix with the next or previous track in a set.

This was a technique which I found a huge help in expediting the process of arranging/sequencing.  The ‘draw it out’ technique also mentioned in the article was another, so I’ll write about that (and maybe include a real sequence drawing I used for a track) in a future post.

Breaking Musical Rules 2

This is the second in a set of examples of musical structures which sound good despite being well outside the rules of traditional Western music theory, and it revolves around pad sounds.

Pad sounds usually exist (as the name suggests!) to ‘pad out’ an arrangement, and give it some additional texture and depth.  As they’re usually designed to sit behind the main instruments/elements of a track, you can often get away with more abstract textures, created by more complex chords.  I can still clearly remember my eureka moment many years ago, when I discovered that really nice pad sounds could be made with a low-pass filtered synth patch playing a thick, jazzy chord (9ths, 11ths, etc…).

During 2016 I experimented quite a lot with different ways of making pad sounds, and discovered that you don’t have to limit textures to complex jazz chords… you can use all kinds of diatonic structures and ‘chords’ which are way outside of the bounds of traditional music theory.

The example I’ll use is part of the pad sound I used in ‘Push On’.  I used a couple of different instrument layers to arrive at the final sound, but one of those layers used a preset sound from Spectrasonics Atmosphere.  The soloed layer sounds like this…

… and was played using the following ‘chord’…

breaking-musical-rules-2-1

That’s basically the first 4 notes of a C major scale played together in consecutive octaves.  It’s also miles away from anything that you’d learn from traditional Western music theory (it can actually be ‘played’ by two very comfortably spaced fists on the keyboard!).  This is the kind of chord I would never have expected to fit into anything but the most avant-garde of music styles (due to preconceived ideas of what harmonies will work), and hence would have been very unlikely to try or experiment with when putting a track together.  But I discovered last year that you can often use these types of complex and unconventional chords for pads (I used similar and often more complex chords in other tracks I produced in 2016 as well).
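For reference, here’s a quick sketch of that cluster as MIDI note numbers and equal-temperament frequencies. The specific octave placement is my assumption for illustration, not taken from the actual project:

```python
# The 'chord': the first four notes of C major, stacked in two
# consecutive octaves (the octave choice here is hypothetical).
notes = [60, 62, 64, 65,    # C4 D4 E4 F4
         72, 74, 76, 77]    # C5 D5 E5 F5

def midi_to_hz(note, a4=440.0):
    """Equal-temperament frequency of a MIDI note number."""
    return a4 * 2 ** ((note - 69) / 12)

freqs = [round(midi_to_hz(n), 1) for n in notes]
# Adjacent notes a whole tone apart share some low harmonics, but their
# upper harmonics clash - which is why low-pass filtering tames the cluster.
```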

Part of what makes it possible is the use of low-pass filtering in pad instruments.  If you were to play the same chord on a loud piano or with an orchestral string patch, the mash of upper harmonics it would produce would sound quite messy and dissonant (like playing the piano with your fist!).  But as this patch has a lot of those upper harmonics rolled off, it allows more complex (and traditionally dissonant) sets of intervals to work better together.

Just as a reference, a single note played on the same Atmosphere patch sounds like this (with no high-pass filter and hence more low end)…

When creating pad sounds, it’s worth messing around with complex and unconventional chords and intervals.  It often allows you to create much more texturally rich and deep sounds than you could achieve with more traditional chords, while still maintaining consonance in the overall result.

Noise Reducing Percussion Samples

A quick tech ‘how to’ post today… around noise reduction in live-recorded samples.  As I’ve mentioned previously, I use a lot of live recorded sounds in my tracks, especially live recorded percussive sounds.  Sometimes these sounds can be recorded quietly in the studio, but other times I capture them ‘on location’, and hence have to work with background noise.  On other occasions the background noise is unavoidably entwined with the sound source.  This was the case today when I recorded some tom sounds from the Volca Beats speaker.  The direct sound from this Volca is fairly usable, but the speaker is really hissy, and hence I ended up with a lot of hiss in the sample…

The raw sample

In the past I used to try and remove this in one of two ways…

  1. Use a high shelf filter to reduce the hiss
  2. Use a gate to fade out the tail of the sample

…but neither of these was ideal… the filter option removes high frequency content from the whole sample including the attack (which can significantly alter the sound).  The gate avoids that problem, but requires that you find a trade-off between cutting the low frequency content of the tail (using a short release time) and leaving audible hiss in the tail (using a longer release time).

But, using automation you can combine the above approaches and get a much better result than either in isolation.  The trick is to use a high shelf filter, but automate the gain/level control, so that it’s very quickly attenuated just after the attack of the sound is finished.  The screens below demonstrate the setup in Reaper.  First you import the sample into an empty track.  Then add a high shelf filter into the FX chain (I’m using Reaper’s built-in ‘ReaEQ’ below to keep things simple).  Then automate the gain/level control of the filter (using the ‘Track Envelopes/Automation’ button on the track control)…

noise-reducing-percussion-samples-1
Reaper track ‘Envelopes/Automation’ window

Then draw an automation curve as shown in the below screen…

noise-reducing-percussion-samples-2
Automation curve
noise-reducing-percussion-samples-3
ReaEQ settings (‘Gain’ is controlled by the automation)

Depending on the nature of the sample, you’ll want to try adjusting the 4 highlighted parameters above to get the noise-reduced version sounding right…

  • The point where the filter starts to drop
  • The time the filter takes to get to minimum gain, and the shape of the curve (above option is using the Reaper ‘fast start’ point shape)
  • The frequency and bandwidth/Q of the filter

If it’s an excessively noisy sample, a low-pass filter might also work better than a high shelf.
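If you’d rather do this offline than with DAW automation, the same idea can be approximated in a few lines of Python with numpy. This sketch crossfades from the dry signal to a low-passed copy just after the attack, a rough stand-in for automating a shelf’s gain; the filter, timings and cutoff here are hypothetical defaults, not the ReaEQ settings above:

```python
import numpy as np

def one_pole_lowpass(x, sr, cutoff_hz):
    """Simple one-pole low-pass filter (a crude stand-in for a shelf)."""
    dt = 1.0 / sr
    rc = 1.0 / (2 * np.pi * cutoff_hz)
    alpha = dt / (rc + dt)
    y = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc += alpha * (s - acc)
        y[i] = acc
    return y

def denoise_tail(x, sr, attack_s=0.02, fade_s=0.01, cutoff_hz=4000.0):
    """Keep the attack intact, then crossfade to a low-passed copy so the
    hissy tail is darkened, roughly like automating a filter's gain."""
    lp = one_pole_lowpass(x, sr, cutoff_hz)
    env = np.ones_like(x)                   # 1.0 = dry, 0.0 = filtered
    a = int(attack_s * sr)
    f = int(fade_s * sr)
    env[a:a + f] = np.linspace(1.0, 0.0, f) # quick fade after the attack
    env[a + f:] = 0.0
    return env * x + (1.0 - env) * lp
```

The same three knobs from the list above map directly onto `attack_s`, `fade_s` and `cutoff_hz`.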

In this case, the same sample with the above settings turned out like this…

The ‘noise-reduced’ sample

… that’s a considerable amount of noise reduction, but it has maintained all the attack and general timbre of the sound.

The ‘Rule of 3s’ for Incidentals

A month or so ago, I read a musicradar article entitled ‘Robin Schulz’s top 10 tips for producers’.  I hadn’t heard of Robin until that point, but the advice he was giving really resonated with me… the stuff he covered was generally also the stuff I tend to think of as key techniques for producing successfully.  I checked out one of his tracks on YouTube too… ‘Prayer In C’.  The track has an incidental build almost right at the start (0:05)… but surprisingly for commercial music, the texture of this build is quite thin and sparse… consisting mainly of a lone white noise sweep, and it tends to come in a bit predictably.  It’s similar to the kind of use of this sound that you find in more amateur, ‘bedroom’-type productions.  I’m not at all trying to be critical of the production (I wouldn’t have a leg to stand on as obviously Robin is at least 1000x more famous than me!)… but it’s interesting to observe, because it does stand out somewhat from most highly polished and produced tracks from big names.

It got me thinking about things that distinguish ‘bedroom’ sounding productions from those from big names on big labels… and one of the major differences from my perspective is the use and depth of incidental sounds.  My general impression is that highly ‘professional’ sounding tracks tend to have multiple layers of complexly woven and sculpted incidental sounds… the kind of thing that adds a subtle but critical sheen of additional depth and detail to a track.  A really good example of this that comes to mind is Sasha’s ‘Invol2ver’ album.  The interesting thing about these types of incidentals is that you don’t usually explicitly hear them when listening to a track… but if they’re removed, suddenly something major is missing and the track sounds much less polished and professional.

Along these lines, for all the tracks I worked on during 2016, I adopted an approach to incidental sounds which I’ve since come to refer to as ‘the rule of 3s’.  That is… for any major build or transition point in a track, I try to have at least 3 separate layers of incidental sounds happening at the one time.  The reason for this… having just 1 or 2 layers of incidentals at such points ends up being too obvious… the listener can distinguish each of the layers and the build becomes somewhat predictable.  But for me, 3 layers is the sweet spot where suddenly the layers of incidentals, along with whatever instrument sounds are in the track itself, combine to make it difficult for the listener to be conscious of all the layers individually… the sound becomes harder to predict and hence more interesting.

So based on this thinking, I try to make sure I use at least 3 layers of incidental sound at any major build or transition in a track.  You have to temper that according to the style as well… progressive-type tracks tend to do well with more layers of incidentals than harder, more minimal styles… but I think 3 layers is a good baseline to follow.  As a typical default, I would have those 3 layers consist of…

  • A longer white noise swell-type sound
  • A shorter swell (e.g. reversed cymbal)
  • Some type of percussion, possibly through a delay effect

…and make sure that each layer has individual panning to control the side-to-side depth, as well as EQ automation to control the front-to-back.

As an example, the clip below contains the soloed incidental parts from the build starting at 2:43 in ‘Summer Wave’…

This actually contains about 5 layers on the build/swell side (3 swell-type sounds plus 2 percussive), and 2 or 3 crash cymbal-type sounds layered together… that’s leaning towards being a bit excessive, but also gives the track a lot of depth, and that more ‘professional’ sound I mentioned earlier (and given the more progressive style it lends itself to greater depth of incidental sounds).

If you’re a producer striving to make your tracks sound more professional or polished, I’d highly recommend you look at your use of incidental sounds… and if you’re only using a couple of layers, consider thickening the texture and applying the ‘rule of 3s’.

 

(Disclaimer: I acknowledge that these days the term ‘bedroom productions’ has no correlation with being amateur or unprofessional… as many famous commercial productions are indeed conceived and realized in a bedroom!)

Pitching Percussion

As I’ve written about many times in this blog, I learnt a stack of stuff during 2016 regarding production techniques.  One of those that really stood out for me, was the importance of correctly pitching percussive sounds.

For whatever reason, prior to 2016, I can’t consciously remember explicitly re-pitching sounds like hi-hats, snares, claps, etc… which says to me that I either didn’t do it that much, or didn’t see it as being particularly important.  I guess from a theoretical point of view, my thinking was along the lines of “they’re not tonal, harmonic sounds, so there’s no point or need to tune them”.  Of course, now that I understand much better what constitutes a sound, I realise percussive sounds are no different to what we consider ‘instrument’ sounds… at the end of the day they both break down to a collection of sine waves… it’s just that the sine waves in a percussive sound modulate much more quickly, and are often at more ‘dissonant’ intervals than those in instrument sounds.  For lower pitched, more ‘droney’ instruments like bass drums and toms, it’s pretty obvious that changing the pitch can have a significant effect (since the tail of these sounds is usually dominated by a single sine-ish tone), but I was surprised how much of an effect this can have on snares and cymbals.

Rather than bang on too much about sound theory, maybe it’s best to illustrate with an example.  Below is a raw clip of some of the percussion elements and the bassline of my track ‘Dystopia’…

… and here is the same clip again, but with the pitch of all percussion set to the default (i.e. as it was in the original samples)…

Hear the difference?  The pitched version sounds much more cohesive, and has a better groove… mostly due to the hi-hat and clap/snare sounds being more in tune with the bass line and bass drum.  Also there’s a high pitched ‘woody’ sound played on 16ths (a sample of a chopstick being dropped onto a pile of chopsticks)… it tends to stand out and sound incongruous in the unpitched version.  Overall, the unpitched version just seems to ‘lag’ somewhat, and definitely doesn’t have the same integrity and groove as the pitched version.

The actual differences in pitch are usually quite slight.  I use Kontakt for all of this percussion, and these samples would be shifted by at most 2 semitones.  But I find even a change of 0.3 or 0.4 of a semitone, on a key percussion element like hi-hat or snare, can have a profound effect on the overall sound and groove of a track.
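The maths behind those shifts is worth knowing: each semitone corresponds to a factor of 2^(1/12) in playback rate. A naive sampler-style repitch can be sketched as below (linear interpolation, fine for one-shot percussion hits; this is just an illustration of the principle, not what Kontakt does internally):

```python
import numpy as np

def semitones_to_ratio(semitones):
    """Playback-rate multiplier for a pitch shift in semitones."""
    return 2 ** (semitones / 12)

def repitch(sample, semitones):
    """Naive repitch by resampling (changes the sample's length, like a
    classic sampler with keytracking)."""
    ratio = semitones_to_ratio(semitones)
    n = len(sample)
    new_n = int(round(n / ratio))
    positions = np.arange(new_n) * ratio   # read positions in the original
    return np.interp(positions, np.arange(n), sample)
```

So a 0.4-semitone shift is only about a 2.3% change in playback rate, yet as described above it can noticeably change how a hi-hat or snare sits against the bass line.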

It’s important to keep this in mind too, when auditioning percussion samples.  I tend to cycle through sometimes hundreds of snare and cymbal sounds, and when doing this you have to remember that a particular sound might seem slightly off or wrong when directly auditioned, but could be transformed by a slight pitch shift.

Anyway, the importance of pitching percussion has been a kind of interesting revelation for me, and in terms of potential effort vs benefit (i.e. it requires little effort to change for potentially a lot of benefit), it’s a technique you should definitely utilize.

Mix Issues – Prevention Rather than Cure

Thanks to the rapid development of DAWs and plug-ins over the last 5-10 years, as producers we have close to unlimited flexibility in terms of audio processing.  Even my very old (7+ years) music PC is capable of running tens to hundreds of simultaneous plugins in a track’s project.  Added to this, the internal digital routing in a DAW, and the ever-increasing quality of plugins, mean that chains of tens of plugins are not only a reality but often the norm in putting together a track.

But with this flexibility can come a complacence to ‘fix problems later’ with plugins, rather than dealing with them at the source.  I’ve read numerous interviews with pro producers who emphasise the importance of getting sound right in the first instance… particularly with things like tracking… finding good sound through good mic selection and placement, rather than fixing it with EQ in the mix.  Yet, it can be easy to forget or ignore this advice given how simple it is to throw extra plugins in an effect chain.

While writing ‘Dystopia’, I ran into this kind of situation… a problem which could have been fixed by additional tweaking, or extra layers of compression… but which actually had a simpler, and probably ultimately better sounding, solution at the source.

The track has the following background percussion pattern in various sections…

Within an 8 beat phrase, the first percussion ‘hit’ occurs on the third 16th beat, and has a quieter and lower pitched ‘grace note’ a 16th before that.  The below screen shot shows the MIDI sequence for the pattern, with the grace notes highlighted…

mix-issues-prevention-rather-than-cure-1

At the point of final mixdown and applying bus compression, I noticed that there were occasional waveform spikes at the points of these grace notes… the highlighted peaks on the below waveform show an example…

mix-issues-prevention-rather-than-cure-2

These spikes were not only quite strong (almost hitting 0dB), but occurred on a rhythmically odd (syncopated) beat of the bar… i.e. the second 8th beat of the bar… at the same point as the offbeat hi-hat sound.  When I was trying to apply compression, the strength and syncopation of these spikes were causing the same type of uneven, pumping compression I mentioned in my second bus compression article.  The problem could have been cured at the final mix stage by applying a limiter or a fast acting compressor at the start of the effect chain.  But instead, I went back to the MIDI sequencing and took a look at the part itself.  Considering the note at the second 8th beat was just a grace note, and that it occurred on the same beat as a rhythmically far more important part (i.e. the offbeat hi-hat), the MIDI velocity of that note seemed quite high (at around 81).  Hence, I tried simply reducing the velocity of the grace note to about 70 as per the below screen shot…

mix-issues-prevention-rather-than-cure-3

…and this simple change benefited the mix in 3 ways…

  • It left more room for the offbeat hi-hat, and hence made the hi-hat clearer.
  • It wasn’t in any way detrimental to the in-context sound of the percussion part (actually, I think it sounded better after the change).
  • It had the effect of removing those waveform peaks, and hence let the compressor work more smoothly and musically (see the ‘after’ waveform below)…

mix-issues-prevention-rather-than-cure-4

Ultimately, a simple changing of MIDI velocity fixed the problem, and was far easier to implement than extra layers of limiting and compression would have been (and also avoided the additional side-effects that limiting and compression could have introduced).
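If the same fix needed applying across a whole project, it could even be scripted. The sketch below uses a made-up (position, velocity) note representation, not any real DAW or MIDI API, just to show the shape of the operation:

```python
# Hypothetical note representation: (position_in_16ths, velocity).
# Velocities here mirror the example in the text (grace note at ~81).
pattern = [(1, 81), (2, 110), (17, 81), (18, 110)]

def tame_grace_notes(notes, grace_positions, max_velocity=70):
    """Cap the velocity of notes at the given grid positions (the grace
    notes sharing a beat with the offbeat hi-hat), leaving the rest alone."""
    return [(pos, min(vel, max_velocity)) if pos in grace_positions
            else (pos, vel)
            for pos, vel in notes]

fixed = tame_grace_notes(pattern, grace_positions={1, 17})
# The grace notes drop from velocity 81 to 70; the main hits are unchanged.
```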

Clips of the ‘before’ and ‘after’ full mix are below…

Before

After

The interesting take-home from this experience was to always think a bit ‘outside the box’ with regard to mix problems… to consider whether there’s a simple preventative measure that could avoid or correct the problem in the first instance.  In 99% of cases, as the pro producers advise, such prevention is probably going to be easier and more effective than the equivalent cure.