When the Problem is Staring You in the Face

I had an interesting experience over the last couple of weeks with a mixing problem that should have been obvious and easy to fix. But because I was too focused on details, I missed the bigger picture and let the problem persist for far longer than it should have.

I’m still in the finishing-off stage of a track which has ended up becoming the most drawn-out and time-consuming piece I’ve worked on so far. I just looked back at previous posts and realised I said I was on the ‘home straight’ with it more than 2 months ago.

Part of the reason this track took longer than others was that it was the first where I’d used an acoustic instrument for one of the main themes… an acoustic piano riff (from NI’s ‘New York Grand’). As with the acoustic percussion samples I’ve discussed in a previous post, any recorded acoustic instrument inherently has a much greater dynamic range than a synthetic sound. To fit this into the generally very narrow dynamic range of club music, considerable but careful application of compression is required.
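To make the idea concrete, here’s a minimal Python sketch of the static gain curve a downward compressor applies (the threshold and ratio values are purely illustrative, not any actual plug-in settings)… levels above the threshold get pulled down, which squeezes the dynamic range:

```python
def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static compressor gain curve: levels above the threshold are
    reduced by the given ratio (attack/release ignored for simplicity)."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# A loud acoustic peak at -5 dB is pulled down to -16.25 dB...
loud = compress_db(-5.0)
# ...while a quiet passage at -30 dB passes through untouched.
quiet = compress_db(-30.0)
# The dynamic range between the two shrinks from 25 dB to 13.75 dB.
```

The more dynamic the source (like a recorded piano), the harder this curve has to work to fit the part into a loud, narrow-range club mix.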

The piano riff I came up with had, I thought, a nice dynamic… getting thicker in texture and a bit louder/stronger towards the end, which I felt gave it a greater feeling of tension. Although a fair amount of compression would be required to make the riff fit well in the mix, I was keen to preserve as much of that dynamic as possible. When mixing, though, I was too focused on preserving the dynamic I’d liked in the soloed part. This made me too cautious in applying compression, and I ended up pushing the piano part way too high in the mix (in order to get it to stand out properly). Added to this was the mistake of not following my own advice and regularly checking back against reference tracks. So when I finally did do a side-by-side comparison with my usual reference material, I found I’d created a kind of ‘inverted smile’ in terms of frequency spread… with the piano and mid-range way too dominant, and not nearly enough bassline or cymbals.

Once I figured out my mistake, it was easily corrected with a simple application of Waves’ Renaissance Axx compressor (after I’d spent at least a week going in the wrong direction)… sure, I had to sacrifice some of the nice dynamic I’d originally wanted to highlight, but looking back, I think that original desire was misguided. The track I’m writing is in a minimal-techno style, where a narrow dynamic range and very loud overall track levels are commonplace… expecting to keep a main acoustic instrument part fairly dynamic and still achieve a competitive level in the overall track was a bit unrealistic.

So, 3 important lessons I learned going forward…

  1. Audition parts in the context of the mix. Things that sound good in a soloed part may no longer sound so good, or may even be completely lost, in the context of the whole mix. I was too swayed by working towards a soloed piano sound that I thought sounded good… it would have been better to audition it in the context of the mix right from the start.
  2. Be realistic about how much dynamic range you can achieve in styles which are innately highly compressed.
  3. Listen to and compare to your reference tracks regularly!

Spending Time Appropriately

I’ve blogged several times before about sample manipulation and clean-up (i.e. EQ, compression, gating, etc…).  I use a lot of live-sampled sounds in my tracks, often as main/key elements in the overall composition.  In these cases, properly cleaning up and compressing samples is really important if I want these elements to stand out well and sound clean in the mix.

For the track I’m working on at the moment, I’m in the middle of doing all the incidentals, part of which is incidental percussion.  I usually sequence all of these sounds in the track’s Reaper project first, and afterwards go through and automate EQ and compression to get them sitting properly in the mix (usually with a single EQ and compressor instance, using automation to adjust the parameters for each incidental).  That process can be pretty tedious, so I decided this time I’d do all the sample EQ and compression up front (i.e. before sequencing) in audio editing software.  This was also pretty daunting at first, because I had about 20 samples to treat, and I usually spend a good 5-10 minutes per sample finding the right EQ and compression settings, auditioning on monitors and listening closely through headphones.  But as I started working through them, I realised I didn’t need to spend so long on each one…

For samples used as key elements in the track, really careful EQ and compression is important… your key elements will either comprise the main ‘themes’ of a track, or at a minimum occur many times during its progression… hence you need to spend time making fine, careful adjustments to get them sounding as good as possible.  On the other hand, the incidental samples I was working on might play a couple of times at most during the entire progression… plus they often occur at sonically ‘busy’ parts of the track (builds, peak points, etc…), where slight quality issues (like a tiny bit of distortion, or slightly wrong EQ) will likely be masked by all the other sounds occurring at the same time.

It made me realise I could afford to be (what I would usually consider) a bit ‘sloppy’ in my approach to this sample editing… auditioning only on monitors, and sometimes applying mild effects like slight limiting or compression with minimal or no auditioning (I use Waves L1 and C1 a lot for this purpose, and have used them so much that I can usually apply mild adjustments without needing to audition at all).  More generally, it made me think about using your time appropriately.  Time is a precious commodity for a producer… particularly if production isn’t your profession and you have limited time to start with.  So you need to think carefully about the areas where it’s necessary to spend time, and the areas where you can afford to take a more ‘quick fix’ approach.

Finding Creative Solutions to Mix Problems

Last month I wrote about how your ‘ear’ for identifying and fixing problems improves significantly when you dedicate yourself to producing full time.  Recently I had a situation which showed exactly this, where my solution to a problem was far different from (and much more successful than) what I would have come up with 9 months ago.

When I was writing ‘Cantana 1’, I had come up with a patch for the main synth ‘stab’ sound…

The patch was made in V-Station using some FM between 2 of the oscillators, and I was fairly happy with the sound… I thought the FM gave it a cool kind of gritty edginess.  But when it came to making the sound fit in the mix, it was really difficult to get it to stand out properly… it just seemed to get lost behind the other instruments and percussion.

I’d faced the same problem in the past (often with V-Station patches), and in those cases I’d often used large mid-range EQ boosts to try and correct it.  But this had limited success, often making the sound a bit ‘bloated’ and muddying up the mix.  Faced with this problem in the past, I might well have abandoned the sound altogether, just because I couldn’t get it to mix nicely.  I guess my thinking was along the lines of “it’s not fitting well, and I don’t know what else to do to fix it, so I’m just going to get rid of it”.

However, armed with the experience of the past year, plus the additional confidence that comes with it, I looked at the problem a bit more analytically…  The chord and the original patch I was using were quite low in pitch, and as the FM was turned up quite high, there were a lot of ‘fizzy’ high harmonics in the sound.  The problem, it seemed, was a simple lack of mid-range frequency content… in the context of the track, the bass line and percussion were already supplying the low and high frequencies, and I needed this sound to ‘fill in the middle’ and provide the main theme.  But due to the patch and chord used, the mid-range was quite lacking… EQ alone would likely not have solved the problem either… you can’t boost frequencies that aren’t in a sound to begin with.

In this case, I used a second instance of V-Station with a similar patch, but one with no FM and whose oscillators were much more centred around the mid-range.  It had a much cleaner and more rounded sound…

I fed both V-Station instances from the same MIDI track and blended their audio outputs.  The result was as follows…

Whilst in isolation I actually prefer the original FM patch, the blended version was much easier to fit into the mix, and it saved a lot of headaches trying to correct things with EQ (and the potentially tedious automation of the EQ to track the filter sweeps used on this instrument).
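For anyone curious what this kind of layering amounts to at the signal level, here’s a rough Python sketch… a toy 2-oscillator FM voice plus a clean sine, both playing the same note, summed together. The frequencies, modulation ratio and index are all made-up illustrative values, nothing like the actual V-Station patches:

```python
import math

SR = 44100  # sample rate (Hz)

def fm_tone(freq, mod_ratio, mod_index, n):
    """Simple 2-oscillator FM voice: a modulator at freq * mod_ratio
    modulates the carrier's phase, adding extra harmonics."""
    return [math.sin(2 * math.pi * freq * t / SR
                     + mod_index * math.sin(2 * math.pi * freq * mod_ratio * t / SR))
            for t in range(n)]

def sine_tone(freq, n):
    """Clean sine oscillator, no FM."""
    return [math.sin(2 * math.pi * freq * t / SR) for t in range(n)]

def blend(a, b, mix=0.5):
    """Blend two equal-length signals (mix=0.0 is all a, 1.0 is all b)."""
    return [(1 - mix) * x + mix * y for x, y in zip(a, b)]

# Gritty FM patch plus a clean layer, both driven by the same note (A3)
n = 1024
gritty = fm_tone(220.0, mod_ratio=2.0, mod_index=3.0, n=n)
clean = sine_tone(220.0, n=n)
layered = blend(gritty, clean, mix=0.5)
```

In a DAW the same thing happens at the mixer rather than in code… two instrument tracks fed from one MIDI item, with the faders doing the blending.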

In retrospect, it was nice to see that I’d started finding more creative solutions to problems, and was able to analyse a problem and devise a solution rather than giving up… my thinking was more along the lines of “there’s a problem here… now what’s causing it?”, and this led to a preventative solution, rather than the corrective (and likely less successful) one of messing with EQ.  It shows that (as mentioned in the previous post) your mixing and producing skills really can improve with dedicated and regular practice.

 

Cleaning Up a Mix

there’s usually not one magic fix in order to realise a fairly abstract goal like ‘make the mix clearer’

Over the last week I’ve been finalizing the mix of a new track (Summer Wave).  In terms of sound texture, it’s the ‘thickest’ track I’ve written this year, with quite a lot of instrument and percussion layers mixed together.  The thicker the texture of a track, the more challenging the mixing process becomes, as you’ve got more layers of sound and more frequencies competing to be heard in a limited space.  Hence, early in the process, when I started with a rough sequenced mix, one of the first things I wanted to do was clean it up… to remove ‘mud’ and make the individual layers more distinct and audible.  Generally I find there’s no one ‘silver bullet’ for removing ‘mud’ from a mix; the improvement comes from repeated iterations of small fixes.  That was the case here, but there were 2 changes which each made a significant improvement in cleaning up the mix.

The rough mix sounded like this…

…not too bad for a first cut, but I wanted the individual elements to be clearer.  While doing some cleanup work on the individual layers, I soloed this ‘glass bottle’ track (so named because it came from a sample of a glass bottle being tapped on a tiled floor)…

I was surprised at how much low frequency content there was in this part… especially because I usually high-pass filter the raw samples of sounds like this long before I get to the mixing stage.  The sample had a loud transient ‘thud’ at the start, at approximately 135 Hz.  This sat right in the frequency range of both the bass line and the ‘meat’ of the bass drum, and given the ‘glass bottle’ sound had been included for its high-frequency, bell-like rhythmic pattern, the content down around 135 Hz was redundant, and was probably just ‘muddying’ the sound of the bass drum and bass line.  I initially applied a high-pass filter at ~300 Hz, but after a few more iterations of review decided I could set it at 518 Hz without detracting in any way from the part of the glass bottle sound I wanted to hear.  The soloed glass bottle sounded like this with the 518 Hz high-pass filter applied…

The full mix after this change sounded like this…

Granted, it’s subtle, but to me there’s a definite improvement in the ‘smoothness’ of the bass line (because the rhythmic pulsing at around 135 Hz caused by the glass bottle pattern has been removed).  And importantly, as discussed at the start of the post, it’s an important step in the iterative process of cleaning up the overall sound.  (Note – to hear the ‘smoothing’ more clearly in the final full mix, download the before and after mix clips and A/B them with a low-pass filter at about 200 Hz.)
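As a rough illustration of what that high-pass filter was doing, here’s a simple first-order high-pass in Python… a much gentler slope than a typical EQ plug-in’s filter, but the principle is the same: a 135 Hz ‘thud’ fed through a filter with a 518 Hz cutoff comes out heavily attenuated:

```python
import math

def highpass(samples, cutoff_hz, sr=44100):
    """First-order RC high-pass filter (6 dB/octave, so gentler than
    most EQ plug-ins, but it attenuates lows the same way)."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sr
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

# A steady 135 Hz tone (standing in for the 'thud')...
sr = 44100
thud = [math.sin(2 * math.pi * 135 * t / sr) for t in range(sr // 10)]
# ...is knocked well below its original unit amplitude by a 518 Hz high-pass.
filtered = highpass(thud, 518)
```

A first-order filter like this only attenuates the 135 Hz tone to roughly a quarter of its amplitude; a steeper EQ filter would cut it much harder, which is why the thud disappears so completely in the actual mix.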

More towards the end of the mix process, I was reasonably happy with the overall sound of the mix on my monitors, but I felt that the synth ‘stab’ sound was not clear enough when auditioned through my tablet and earbuds.  The mix at this point sounded like this…

After soloing some of the parts, I realised that one of the background percussion parts (sourced from a sample of an aluminium coke can) had a note which played at the same time as the synth stab…

Coke can…

Synth stab…

The problem was that the fundamental of that first coke can note was at 221 Hz (around the A below middle C), and that same A was one of the notes in the synth stab chord.  Basically, the 2 sounds were competing for the same frequency space.  Given that the first note of the coke can was really just a grace note to the second, higher and more prominent note, I made a 3.3 dB cut at 221 Hz on the coke can track, which resulted in…

And sounded like this in the context of the whole mix…

To me this made a pretty significant contribution to allowing the stab sound to sit more clearly in the mix.
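For the technically inclined, a narrow cut like this can be sketched as a standard peaking-EQ biquad (coefficients per the widely used RBJ audio-EQ cookbook).  The Q value here is my guess for illustration, not a setting from the actual session:

```python
import math

def peaking_eq_coeffs(f0, gain_db, q, sr=44100):
    """Peaking-EQ biquad coefficients (RBJ audio-EQ-cookbook form),
    normalized so a[0] == 1."""
    a_gain = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / sr
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_gain, -2 * math.cos(w0), 1 - alpha * a_gain]
    a = [1 + alpha / a_gain, -2 * math.cos(w0), 1 - alpha / a_gain]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def biquad(samples, b, a):
    """Direct-form I biquad filter."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out

# A 3.3 dB cut centred at 221 Hz, with a fairly narrow Q (illustrative)
b, a = peaking_eq_coeffs(221.0, -3.3, q=4.0)
sr = 44100
tone = [math.sin(2 * math.pi * 221 * t / sr) for t in range(sr // 5)]
cut = biquad(tone, b, a)
# At the centre frequency the amplitude drops to roughly
# 10 ** (-3.3 / 20), i.e. about 0.68 of the original.
```

The nice thing about a narrow peaking cut (versus a broad one) is that it only carves out the competing frequency, leaving the rest of the coke can sound, including that more prominent second note, untouched.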

Again, my experience is that there’s usually not one magic fix to realise a fairly abstract goal like ‘make the mix clearer’.  But through successive iterations of small fixes like those above, high-level overall improvements can be achieved.