Adjusting Effect Levels for Mix/Bus Compression

I spent a few hours yesterday doing final bus compression for the track I’m currently working on. Bus compression approaches and techniques were among the things I learnt most about during 2016, and yesterday I had a kind of ‘lightbulb’ moment which will hopefully lead to better results in this area going forward.

I’m a ‘reluctant participant’ in the whole competitive levels/loudness wars thing. Fundamentally I like the groove, emotion and impact that a decent dynamic range can impart to a track. But at the same time I understand the need to achieve an overall loudness similar to other tracks in the same genre (especially because not doing so simply makes your music difficult for DJs to mix).

In the past, I’d always equated greater amounts of bus compression with a loss of clarity. To some extent this is true, as compression narrows the dynamic range of the sound and hence reduces the ‘depth’ of volume variation available. As a result, I’d always found that compressing the entire mix forced a compromise: getting closer to competitive levels meant sacrificing some detail and clarity.

About halfway through last year I had a mini breakthrough of sorts, when I realised certain settings on bus compressor plugins can have a big effect on the quality of the resulting audio. Specifically, I usually use Cytomic’s ‘The Glue’ as the first stage in the bus compression chain, and I found that simply raising the oversampling rate to the recommended level or higher (4x or more when auditioning) gave far clearer audio than the default lower settings.
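
The clarity difference with oversampling makes sense when you consider aliasing: any nonlinear process (like a compressor’s gain stage) generates harmonics, and at the base sample rate any harmonics above Nyquist fold back into the audible band as inharmonic junk. A rough numpy/scipy sketch of the principle, using a cubic soft-clip as a stand-in nonlinearity (The Glue’s internals obviously aren’t public; this just illustrates why the oversampling setting matters):

```python
import numpy as np
from scipy.signal import resample

fs, n = 48_000, 4_800              # 0.1 s of audio, 10 Hz bin spacing
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 9_000 * t)  # 9 kHz test tone

# Nonlinear processing (a cubic soft-clip, standing in for the
# compressor's nonlinearity) at the base sample rate: the resulting
# 3rd harmonic at 27 kHz exceeds Nyquist (24 kHz) and folds back
# to 21 kHz as an inharmonic alias.
shape = lambda s: s - 0.3 * s**3
y_naive = shape(x)

# Same nonlinearity at 4x oversampling: the 27 kHz harmonic is
# represented cleanly, then removed by the band-limited downsample.
y_os = resample(shape(resample(x, 4 * n)), n)

alias_bin = 21_000 // 10
spec_naive = np.abs(np.fft.rfft(y_naive))
spec_os = np.abs(np.fft.rfft(y_os))
print(f"alias at 21 kHz, no oversampling: {spec_naive[alias_bin]:.1f}")
print(f"alias at 21 kHz, 4x oversampled: {spec_os[alias_bin]:.6f}")
```

In the naive version the alias sits loud and clear at 21 kHz; in the oversampled version it’s essentially gone, which lines up with the audible difference in clarity.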

For my current track I had spent a bit longer than usual honing the reverb plugin settings and fine-tuning the reverb send levels. After this I was really happy with the result… it had a nice balance of good depth/space without sounding too ‘washed out’, and seemed to translate well to several different sets of speakers and headphones. So yesterday it was a bit disappointing to lose some of this clarity and balance when I started pushing the final mix through bus compression. When I listened closely it wasn’t so much a by-product of compression itself, but more that the reverb and delay effects were now relatively more prominent. When I thought about it, the reasoning was obvious… I’d squashed down the top 3-6 dB of the volume range, so sounds down at -15 to -20 dB (like the reverb layer) had effectively been pushed up by a similar amount.
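
The arithmetic here is easy to sketch. A rough Python illustration, with hypothetical levels and a simple static gain curve standing in for a real bus compressor:

```python
def compress_db(level_db, threshold_db=-6.0, ratio=4.0):
    """Static compression curve: levels above the threshold are
    reduced by the ratio; levels below pass through unchanged."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# Hypothetical mix elements (dB relative to full scale)
peaks_db = -1.0      # loudest transients
reverb_db = -18.0    # reverb/delay layer

# After compression, make-up gain restores the peaks to their old level
makeup_db = peaks_db - compress_db(peaks_db)

new_reverb_db = compress_db(reverb_db) + makeup_db
print(f"reverb moved from {reverb_db} dB to {new_reverb_db:.2f} dB")
# The reverb layer sits below the threshold, so it is untouched by the
# compressor but lifted by the full make-up gain.
```

With these example numbers the quiet reverb layer ends up nearly 4 dB louder relative to the peaks, which is exactly the ‘washed out’ effect I was hearing.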

I usually do final bus compression in a separate Reaper project from the mixing, using just the final stereo mixdown as a source track (my ageing PC can’t handle multiple reverb plugins and CPU-hungry bus compression at the same time). So I went back to the mix project and rendered another version of the stereo mix with the reverbs and main delays turned down around 1.5 dB. Running this new version through the same compression chain resulted in a much clearer mix… it sounded a lot more like the original stereo mixdown… just louder (which is exactly what I was trying to achieve).

Anyway, in hindsight I’m a bit surprised it’s taken me this long to figure out this technique (the basic point of compression, after all, is to reduce dynamic range), but I’m going to experiment a bit more, and hopefully end up with a much cleaner, clearer final mix than on past tracks.

Another way to potentially prevent the issue could be to ‘mix into’ a compressor or limiter during writing/sequencing/mixing. Historically this is a somewhat unorthodox technique, but it seems to have gained popularity in the last few years (I seem to have read a lot of articles recently where people discuss working this way). The idea is to put a limiter/compressor on the master bus right from the early stages of writing (using generic/default settings close to what you’d usually use for final bus compression). This way you’re always evaluating level balance with compression already ‘baked in’. I don’t usually use this technique because I like to keep a clear separation between the mixing and final ‘mastering’ stages… but based on yesterday’s experience I can definitely see the merits, so may try it in a future track.

When the Problem is Staring You in the Face

I had an interesting experience over the last couple of weeks, with a mixing problem that should have been obvious and easy to fix, but because I was too focused on details, I missed the bigger picture and let the problem persist for way longer than it should have.

I’m still in the finishing off stage of a track which has ended up becoming the most drawn out and time consuming piece I’ve worked on so far. I just looked back to previous posts and realised I said I was on the ‘home straight’ with it more than 2 months ago.

Part of the reason this track took longer than others is that it was the first where I’d used an acoustic instrument for one of the main themes… an acoustic piano riff (from NI’s ‘New York Grand’). As with the acoustic percussion samples I’ve discussed in a previous post, any recorded acoustic instrument is inherently going to have a much greater dynamic range than a synthetic sound. To fit this into the generally very narrow dynamic range of club music, considerable but careful application of compression is required.

The piano riff I came up with had, I thought, a nice dynamic… getting thicker in texture and a bit louder/stronger towards the end of the riff, which I felt gave it a greater feeling of tension. Although a fair amount of compression would be required to make the riff sit well in the mix, I was keen to preserve as much of that dynamic as possible. When mixing, though, I was too focused on preserving the dynamic I’d liked in the soloed part. This led me to be too cautious in applying compression, and I ended up pushing the piano part way too high in the mix (in order to get it to stand out properly). Added to this was the mistake of not following my own advice and regularly checking back against reference tracks. When I finally did do a side-by-side comparison with my usual reference material, I found I’d created a kind of ‘inverted smile’ in terms of frequency spread… with the piano and mid-range way too dominant, and not nearly enough bassline or cymbals.

Once I figured out my mistake, it was pretty easily corrected with a simple application of Waves’ Renaissance Axx compressor (after having spent at least a week going in the wrong direction)… sure, I had to sacrifice some of the nice dynamic I had originally wanted to highlight, but looking back, I think that original desire was misguided. The track I’m writing is in a minimal-techno style, where narrow dynamics and very loud overall track levels are commonplace… expecting to keep a main acoustic instrument part fairly dynamic while achieving a competitive level in the overall track was a bit unrealistic.

So 3 important lessons I learned for going forward…

  1. Audition parts in the context of a mix. Things that sound good soloed may no longer sound so good, or may even be completely lost, in the context of a whole mix. I was too swayed by a soloed piano sound I thought was good… it would have been better to audition it in the context of the mix right from the start.
  2. Be realistic about how much dynamic range you can achieve in styles which are innately highly compressed.
  3. Listen to and compare to your reference tracks regularly!

Mix Issues – Prevention Rather than Cure

Thanks to the rapid development of DAWs and plug-ins over the last 5-10 years, as producers we have close to unlimited flexibility in terms of audio processing. Even my very old (7+ years) music PC is capable of running tens to hundreds of simultaneous plugins in a track’s project. Added to this, the internal digital routing in a DAW, and the ever-increasing quality of plugins, mean that chains of tens of plugins are not only a reality but often the norm in putting together a track.

But with this flexibility can come a complacency to ‘fix problems later’ with plugins, rather than dealing with them at the source. I’ve read numerous interviews with pro producers who emphasise the importance of getting the sound right in the first instance… particularly with things like tracking… finding a good sound through good mic selection and placement, rather than fixing it with EQ in the mix. Yet it can be easy to forget or ignore this advice, given how simple it is to throw extra plugins into an effect chain.

While writing ‘Dystopia’, I ran into this kind of situation… a problem which could have been fixed by additional tweaking, or extra layers of compression… but which actually had a simple, and probably better sounding, solution at the source.

The track has the following background percussion pattern in various sections…

Within an 8-beat phrase, the first percussion ‘hit’ occurs on the third 16th beat, and has a quieter, lower-pitched ‘grace note’ a 16th before that. The screenshot below shows the MIDI sequence for the pattern, with the grace notes highlighted…

mix-issues-prevention-rather-than-cure-1

At the point of final mixdown and applying bus compression, I noticed occasional waveform spikes at these grace notes… the highlighted peaks on the waveform below show an example…

mix-issues-prevention-rather-than-cure-2

These spikes were not only quite strong (almost hitting 0 dB), but occurred on a rhythmically odd (syncopated) beat of the bar… i.e. the second 8th beat… at the same point as the offbeat hi-hat sound. When I was trying to apply compression, the strength and syncopation of these spikes were causing the same type of uneven, pumping compression I mentioned in my second bus compression article. The problem could have been cured at the final mix stage by applying a limiter or a fast-acting compressor at the start of the effect chain.

But instead, I went back to the MIDI sequencing and took a look at the part itself. Considering the note at the second 8th beat was just a grace note, and that it occurred on the same beat as a rhythmically far more important part (i.e. the offbeat hi-hat), the MIDI velocity of that note seemed quite high (at around 81). Hence, I tried simply reducing the velocity of the grace note to about 70, as per the screenshot below…

mix-issues-prevention-rather-than-cure-3

…and this simple change benefited the mix in 3 ways…

  • It left more room for the offbeat hi-hat, and hence made the hi-hat clearer.
  • It wasn’t in any way detrimental to the in-context sound of the percussion part (actually, I think it sounded better after the change).
  • It had the effect of removing those waveform peaks, and hence let the compressor work more smoothly and musically (see the ‘after’ waveform below)…

mix-issues-prevention-rather-than-cure-4

Ultimately, a simple MIDI velocity change fixed the problem, and was far easier to implement than extra layers of limiting and compression would have been (it also avoided the side-effects that limiting and compression could have introduced).
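
Under the common simplification that MIDI velocity maps roughly linearly to amplitude (real sampler velocity curves vary, so treat this as an approximation rather than a rule), the change from 81 to 70 can be put in dB terms:

```python
import math

def velocity_change_db(old_vel, new_vel):
    """Approximate level change in dB for a MIDI velocity change,
    assuming a simple linear velocity-to-amplitude mapping."""
    return 20 * math.log10(new_vel / old_vel)

# The grace note's velocity drop from 81 to 70
print(f"{velocity_change_db(81, 70):.2f} dB")  # roughly -1.3 dB
```

A cut of only around 1.3 dB on one note… small in isolation, but enough to change how the compressor reacts to that beat.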

Clips of the ‘before’ and ‘after’ full mix are below…

Before

After

The interesting take-home from this experience was to always think a bit ‘out of the box’ with regard to mix problems… to consider whether there’s a simple preventative measure that could avoid or correct the problem in the first instance. In 99% of cases, as the pro producers advise, such prevention is probably going to be easier and more effective than the equivalent cure.

Getting Comfortable With Your Environment 2


I arrived back in Tokyo last week, and today had my first day back writing, after about a two-week break over the new year. As I wrote about over the last few posts, for whatever reason I wasn’t 100% settled working in Sydney this time, and although I came up with a couple of good ideas, I didn’t progress with them as far as I would have liked. It’s a bit strange, because it was my second period working in Sydney in 2016, and the first one was actually quite fruitful and productive.

However, getting back home and working in the place I’ve become accustomed to over the last year, it became clearer why I wasn’t so productive in Sydney this time… it came down to 2 basic things… sound and comfort…

Sound, because I realised that I’ve really grown to know and trust the sound of my monitors and studio room in Tokyo. After a year of working here every day, I just know how the sound will translate to the final mix, and after having mixed a number of tracks I’ve been happy with, it boils down to confidence, and the resulting speed with which you can make tonal changes and mix decisions. I just didn’t have the same confidence in Sydney… I knew there were a lot of parts I couldn’t judge properly, and I either kept changing them back and forth, or knew I would have to fix them when I got home… this led to everything taking longer, and a reduced ability to commit to a part and move on to the next stage.

The room sound was probably a big contributor to this too… I blogged before about the uneven bass response in Sydney, and as well, on returning, I noticed as soon as I walked into my apartment just how much lower the ambient noise is here… likely it’s a lot to do with the construction (my apartment here has solid concrete walls, floor and ceiling, compared to drywall and wooden floors in Sydney). Admittedly, domestic construction materials are not the most interesting thing in the world to blog about, but they’re important from a producer’s perspective, as they make a huge difference to the room acoustics, and hence how well you can hear what you’re working on.

In retrospect, the other big factor in my lack of progress was comfort.  Sitting at my usual desk and comparing, I realised that in Sydney…

  • Screens were too far away and too high… felt like they were ‘looking down on me’ as I tried to work
  • Not enough leg room under the desk
  • The chair wasn’t as comfortable

…granted, these are small (somewhat ‘precious’) things in isolation, but together they made a big difference to my level of comfort, and hence I think to my propensity to be creative. It was just nice today to slip back into familiar and comfortable surrounds, and in the couple of hours I worked, I did as much as I would have in a whole day last month.

It’s fairly obvious that a good monitoring environment is crucial to your ability to mix and produce well (as I’ve now re-proven to myself), but more so I’ve learnt a lot about the importance of subtle physical comforts in a space, and how they can really help or hinder your creativity.

Being Guided By Your Ears Not Your Eyes

With current DAW software, we have an unlimited ability to use automation to hone aspects of a sound at a micro level. There is a huge difference in the detail of automation that’s possible today, even compared to relatively recent advancements in hardware technology (like flying faders on consoles)… these days it’s simple to set up unlimited complex automation routings, based not just on user-defined curves and patterns, but fed by audio from other tracks and sound sources.

The screenshot below shows a section of the Reaper project for ‘Cantana 1’… this is the automation on a single reverse cymbal swell in part of the track, automating volume and pan, plus the frequency and gain of a high-shelf filter. Typically I would have at least 2 or 3 such sounds in parallel, at 20-30 different places throughout the track.

being-guided-by-your-ears-not-your-eyes-1

As with many technological improvements though, endlessly flexible automation can be a blessing and a curse. Recently I’ve found that although being able to automate sound changes in such fine detail can make it easier to achieve highly professional-sounding productions, having such a detailed visual representation of automation can lead to an over-dependence on visual cues, and stop you from just using your ears and listening. I find this particularly when creating automation on incidentals like the one in the screenshot above… I have a visual instinct that automation curves should be linear or evenly progressive, and tend to let that instinct override whether that type of curve actually sounds right in context. The ‘shape’ of automation at a given point should be driven by the other sounds at that point, and not by having a curve which looks ‘nice’.
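
One concrete case of ‘looks right vs sounds right’: a volume fade drawn as a straight line in amplitude doesn’t produce an even change in perceived level, because loudness roughly tracks dB (the log of amplitude). A quick numpy sketch (purely illustrative, not tied to any particular DAW’s fade law):

```python
import numpy as np

# A 'nice looking' linear amplitude fade from full level down to 1%
fade = np.linspace(1.0, 0.01, 11)
levels_db = 20 * np.log10(fade)

# dB change contributed by each step of the fade
steps_db = np.diff(levels_db)
print(f"first step: {steps_db[0]:.2f} dB")   # gentle at the start...
print(f"last step:  {steps_db[-1]:.2f} dB")  # ...a huge drop at the end

# An exponential (linear-in-dB) fade covers the same range evenly
fade_exp = np.logspace(0, -2, 11)
print(np.diff(20 * np.log10(fade_exp)))      # ~-4 dB every step
```

The straight line spends most of its length doing almost nothing audible, then falls off a cliff at the end… which is exactly the kind of thing the eye approves of but the ear doesn’t.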

I find this also when auditioning parts of tracks and watching the main ‘arrange’ page of a DAW… it’s very easy to anticipate changes and parts that are coming up by their depiction on this screen, and this can prevent you from having an objective, listener-centric opinion of those parts and changes.

I’ve also read countless interviews with pro producers in Sound on Sound and online who say similar things, and often try and switch off DAW screens when tracking and mixing to avoid this.

As this year’s progressed and I’ve trusted my ears more and more, I’ve become much more aware of how distracting visual cues in a DAW can be, and have tried more and more to ignore them and focus solely on what I’m hearing.

The First Listen of the Day

Recently I wrote about how our bodies are good at adjusting to make things we hear repeatedly sound normal. In the same way that this can cause a skewed frequency response to sound balanced, it can also desensitise us to things that don’t sound good. For this reason, I find that the first listen to your work each day is really good for identifying things that maybe sounded right yesterday, but are actually wrong (in the cold light of day! 🙂 ). For me this affects numerous elements of tracks, but the two most common are…

  • Instrument or percussion parts which sounded OK yesterday, but don’t fit, or just sound wrong, on the first listen
  • Parts which seemed to sit properly in yesterday’s mixing session, but are now way too loud

For that reason, I find it critical to pay attention to your instincts during the first listen of the day. Don’t be afraid to make seemingly excessive changes based on those ‘first listen’ instincts, even if you’re deviating far from decisions you made the previous day (I find this often happens during the mixing phase, where I’ll have agonised over a fraction of a decibel the previous day, then end up cutting the whole part by several dB on the first listen).

If you don’t act on those instincts, it’s amazing how quickly those ‘wrong’ things start sounding right again, and as a consequence they might end up sounding ‘wrong’ to a listener who’s hearing your track for the first time.

Finding Creative Solutions to Mix Problems

Last month I wrote about how your ‘ear’ for identifying and fixing problems improves significantly when you dedicate yourself to producing full time.  Recently I had a situation which showed exactly this, and where my solution for fixing a problem was far different (and much more successful) than I would have come up with 9 months ago.

When I was writing ‘Cantana 1’, I had come up with a patch for the main synth ‘stab’ sound…

The patch was made in V-Station using some FM between 2 of the oscillators, and I was fairly happy with the sound… I thought the FM gave it a cool kind of gritty edginess. But when it came to making the sound fit in the mix, it was really difficult to get it to stand out properly… it just seemed to get lost behind the other instruments and percussion.

I’d faced the same problem in the past (often with V-Station patches), and in those cases I’d often used large mid-range EQ boosts to try to correct it. But this had limited success, often making the sound a bit ‘bloated’ and muddying up the mix. In the past, a problem like this could quite possibly have led me to abandon the sound altogether, just because I couldn’t get it to mix nicely. I guess my thinking was along the lines of “it’s not fitting well, and I don’t know what else to do to fix it, so I’m just going to get rid of it”.

However, armed with the experience of the past year, plus the additional confidence that comes with it, I looked at the problem a bit more analytically… The chord and the original patch I was using were quite low in pitch, and as the FM was turned up quite high, there were a lot of ‘fizzy’ harmonics in the sound. Hence, it seemed that the problem was a simple lack of mid-range frequency content… in the context of the track, the bass line and percussion were already supplying the low and high frequencies, and I needed this sound to ‘fill in the middle’ and provide the main theme. But due to the patch and chord used, the mid-range was quite lacking… EQ would likely not have fully solved the problem either… you can’t EQ frequencies that aren’t in a sound to begin with.
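
The ‘can’t EQ what isn’t there’ point can be demonstrated directly: an EQ boost is a multiplication of the spectrum, and anything times zero is still zero, whereas layering a second sound actually adds energy. A small numpy sketch with synthetic signals (hypothetical frequencies, just to illustrate the principle):

```python
import numpy as np

fs, n = 48_000, 4_800
t = np.arange(n) / fs

# A 'patch' with energy at 100 Hz and a fizzy 8 kHz, but nothing at 1 kHz
patch = np.sin(2 * np.pi * 100 * t) + 0.3 * np.sin(2 * np.pi * 8_000 * t)

spec = np.fft.rfft(patch)
bin_1k = 1_000 // 10            # 10 Hz bin spacing

# A +12 dB EQ boost at 1 kHz multiplies the (zero) content there by ~4
boosted = spec[bin_1k] * 10 ** (12 / 20)
print(abs(boosted))             # still essentially zero

# Layering a second, mid-range sound actually adds 1 kHz energy
layer = 0.5 * np.sin(2 * np.pi * 1_000 * t)
spec_blend = np.fft.rfft(patch + layer)
print(abs(spec_blend[bin_1k]))  # now substantial
```

However hard the boost is pushed, the empty mid-range stays empty… which is why layering the second patch worked where EQ couldn’t.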

In this case, I used a second instance of V-Station with a similar patch, but with no FM and with oscillators much more centred around the mid-range. It had a much cleaner and more rounded sound…

I fed both V-Station instances from the same MIDI track, and blended the V-Station audio outputs.  The result was as follows…

Whilst in isolation I actually prefer the original FM patch, the blended version was much easier to fit into the mix, and saved a lot of headaches trying to correct things with EQ (and potentially tedious automation of the EQ to adjust for the filter sweeps used on this instrument).

In retrospect, it was nice to see that I’d discovered a more creative solution to the problem, and was able to analyse it to find a fix, rather than giving up… my thinking was more along the lines of “there’s a problem here… now what’s causing it?”, and this led to a preventative solution, rather than the corrective (and likely less successful) solution of messing with EQ. It shows that (as mentioned in the previous post) your mixing and producing skills really can improve with dedicated and regular practice.