Improving the Clarity of a Mix

Isolating and bringing out individual parts when mixing, and improving the overall clarity of a track, can be challenging as an amateur producer. It’s easy to mistakenly believe that there is a single ‘magic’ solution that, through lack of experience, you don’t know about. The reality is that magic solutions rarely exist, and improved mix clarity is usually the result of a series of small changes, which sound insignificant in isolation but combine to make a fairly major difference to the mix.

Cantana 2 was the first track I’d written which used an acoustic sound (a piano) for its main theme. This presented some new challenges in terms of getting a fairly dynamic acoustic sound to stand out sufficiently over the other parts. In this post I’m going to go through a series of small changes which helped me to get the piano sitting much more prominently in the mix, to separate it from the pad sound in the track, and to improve the overall clarity of the mix.

The starting point is a clip from the very early stages of sequencing and mixing the track. At this stage there was little delay, and no reverb across the mix (hence the fairly raw sound compared to the final version on SoundCloud)…

Initial clip

The first step to try to bring out the piano was to apply a compressor to it. I used Waves Renaissance Axx with the following settings…


…which evened out the general level of the piano and made it a little more ‘intelligible’ (apologies for the loss of one channel of the pad during the first part of the clip)…

Compression on piano
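Renaissance Axx’s internal algorithm isn’t something I can reproduce, so the sketch below is not a recreation of those settings… it’s just a generic feed-forward compressor, to illustrate the kind of level-evening being applied to the piano (the threshold, ratio, attack and release values are arbitrary examples):

```python
import numpy as np

def compress(x, fs, threshold_db=-20.0, ratio=4.0,
             attack_s=0.005, release_s=0.1):
    """Basic feed-forward peak compressor: follow the signal level
    with an attack/release envelope, and reduce gain above the
    threshold according to the ratio."""
    a_att = np.exp(-1.0 / (attack_s * fs))
    a_rel = np.exp(-1.0 / (release_s * fs))
    env = 0.0
    y = np.empty_like(x)
    for n, xn in enumerate(x):
        level = abs(xn)
        coeff = a_att if level > env else a_rel
        env = coeff * env + (1.0 - coeff) * level   # envelope follower
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over_db = max(level_db - threshold_db, 0.0)
        gain_db = -over_db * (1.0 - 1.0 / ratio)    # gain reduction
        y[n] = xn * 10.0 ** (gain_db / 20.0)
    return y

# A loud 440Hz sine gets evened out; signals below threshold pass untouched
fs = 44100
t = np.arange(int(0.5 * fs)) / fs
loud = compress(0.9 * np.sin(2 * np.pi * 440 * t), fs)
```

Lower thresholds and higher ratios even out the level more aggressively, at the cost of squashing the piano’s natural dynamics.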

Next I applied EQ to both the piano and pad sounds, using the following curves. Notice that the two curves are complementary, in that they accentuate different frequency ranges in each sound…

Pad EQ
Piano EQ

EQ on piano and pad
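To illustrate what ‘complementary’ means numerically, here’s a sketch that evaluates two RBJ-style peaking EQ responses, each boosting where the other is roughly flat (the 300Hz/2kHz centres and 4dB gains are made-up examples, not the actual curves used on the track):

```python
import numpy as np

def peaking_eq_response(freqs, fs, f0, gain_db, q=1.0):
    """Magnitude response (in dB) of an RBJ 'Audio EQ Cookbook'
    peaking filter, evaluated at the given frequencies."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]
    w = 2 * np.pi * np.asarray(freqs, dtype=float) / fs
    z = np.exp(-1j * w)
    h = (b[0] + b[1] * z + b[2] * z ** 2) / (a[0] + a[1] * z + a[2] * z ** 2)
    return 20 * np.log10(np.abs(h))

fs = 44100
test_freqs = [300.0, 2000.0]
pad_db = peaking_eq_response(test_freqs, fs, f0=300.0, gain_db=4.0)     # boost low/mids
piano_db = peaking_eq_response(test_freqs, fs, f0=2000.0, gain_db=4.0)  # boost presence
```

Each curve hits exactly its gain setting at the centre frequency and falls back towards 0dB away from it, so each sound gets its own accentuated range.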

Next I used Voxengo MSED to slightly reduce the sides component of both sounds. Often to separate two sounds you can use opposing settings on each (i.e. making one wider and one narrower). In this case I felt that both the piano and pad were a bit too wide and were getting lost against the bass and drums, and the pad especially was dropping too much level when the track was monoed. I reduced the sides components of the pad and piano by 2.6dB and 2dB respectively…

Reduced sides component on piano and pad
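What MSED is doing here is simple mid/side arithmetic… encode left/right to mid/side, scale the side signal down, and decode back. A minimal sketch (the 2.6dB figure is the pad setting from above):

```python
import numpy as np

def reduce_sides(left, right, sides_reduction_db):
    """Encode L/R to mid/side, attenuate the side component by the
    given number of dB, and decode back to L/R -- the same basic
    operation MSED performs."""
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    side *= 10.0 ** (-sides_reduction_db / 20.0)  # dB -> linear gain
    return mid + side, mid - side

# A hard-panned test signal: reducing sides narrows the stereo image
left = np.array([1.0, 0.0, 1.0, 0.0])
right = np.array([0.0, 1.0, 0.0, 1.0])
pad_l, pad_r = reduce_sides(left, right, 2.6)
```

Note that the mid (mono) component is untouched — only the stereo difference signal is attenuated, which is why this narrows the image without changing the mono fold-down.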

I felt there was still too much ‘mud’ in the mix, and a big contributor to this was that both these main sounds were competing in the low/mid range. High-pass filtering the piano made it sound a bit synthetic and unnatural, so instead I added a high-pass filter at around 400Hz to the existing EQ curve on the pad…


High-pass filter on pad

Using compression sidechained to the bass drum on instrument sounds has been a well-used technique in electronic styles for a while. In this case I used Noisebud’s ‘Lazy Kenneth’ to simulate the effect of sidechained compression on the pad, to make a bit more general ‘space’ for the other sounds…


(Simulated) sidechained compression on pad
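I don’t know Lazy Kenneth’s internals, but the basic effect it simulates — periodic ducking in time with the kick — can be sketched as a gain envelope that dips at each kick and recovers over a release time (the 6dB depth, 0.25s release and 128 BPM grid below are made-up values, not my actual settings):

```python
import numpy as np

def ducking_envelope(num_samples, sample_rate, kick_times,
                     depth_db=6.0, release_s=0.25):
    """Gain envelope that drops to -depth_db at each kick time and
    recovers linearly over release_s -- a rough approximation of
    sidechain 'pumping' applied to a pad."""
    env = np.ones(num_samples)
    floor = 10.0 ** (-depth_db / 20.0)       # dB -> linear
    release = int(release_s * sample_rate)
    for t in kick_times:
        start = int(t * sample_rate)
        end = min(start + release, num_samples)
        ramp = np.linspace(floor, 1.0, release)
        env[start:end] = ramp[: end - start]
    return env

# Four-on-the-floor at a hypothetical 128 BPM, over 4 seconds
sr = 44100
kicks = [i * 60.0 / 128.0 for i in range(8)]
env = ducking_envelope(4 * sr, sr, kicks)
# ducked_pad = pad_audio * env  (element-wise, applied to the pad signal)
```

Multiplying the pad audio by this envelope produces the familiar ‘pumping’ without needing an actual sidechain routing.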

I was still not happy with the clarity of the pad sound. When creating and auditioning it in isolation I’d used a low-pass filter with quite a lot of resonance. This sounded good on its own, but was not sitting well in the mix. It was one of the filter modules in Kontakt, and I reduced the resonance amount from 46 to 31% (and made a similar, proportional change in places where the resonance was automated)…

Reduced pad filter resonance

The final step in this series of changes was to try and further separate the pad and piano by using volume automation to drop the pad level by 1dB whenever the piano was playing…


Volume automation on pad

Ultimately I used further tweaks and processing after this to arrive at the final mix, but this series of steps shows the main changes I made to try and separate out the pad and piano. Listening to the first and the last clip, there’s a significant difference in the overall clarity of the mix (and even more so comparing the first clip to the final mix on SoundCloud).

Hopefully this gives some insights and ideas on techniques to improve your mixes, and demonstrates that usually it’s the sum total of multiple subtle changes that gives an overall significant difference in the clarity and quality of a mix.


Noise Reducing Percussion Samples

A quick tech ‘how to’ post today… around noise reduction in live-recorded samples.  As I’ve mentioned previously, I use a lot of live recorded sounds in my tracks, especially live recorded percussive sounds.  Sometimes these sounds can be recorded quietly in the studio, but other times I capture them ‘on location’, and hence have to work with background noise.  On other occasions the background noise is unavoidably entwined with the sound source.  This was the case today when I recorded some tom sounds from the Volca Beats speaker.  The direct sound from this Volca is fairly usable, but the speaker is really hissy, and hence I ended up with a lot of hiss in the sample…

The raw sample

In the past I used to try and remove this in one of two ways…

  1. Use a high shelf filter to reduce the hiss
  2. Use a gate to fade out the tail of the sample

…but neither of these was ideal… the filter option removes high frequency content from the whole sample, including the attack portion (which can significantly alter the sound).  The gate avoids that problem, but requires that you find a trade-off between cutting the low frequency content of the tail (using a short release time), and ending up with audible hiss remaining in the tail (using a longer release time).

But, using automation, you can combine the above approaches and get a much better result than either in isolation.  The trick is to use a high shelf filter, but automate its gain/level control so that it’s very quickly attenuated just after the attack of the sound has finished.  The screens below demonstrate the setup in Reaper.  First you import the sample into an empty track.  Then add a high shelf filter into the FX chain (I’m using Reaper’s built-in ‘ReaEQ’ below to keep things simple).  Then automate the gain/level control of the filter (using the ‘Track Envelopes/Automation’ button on the track control)…

Reaper track ‘Envelopes/Automation’ window

Then draw an automation curve as shown in the below screen…

Automation curve
ReaEQ settings (‘Gain’ is controlled by the automation)

Depending on the nature of the sample, you’ll want to try adjusting the 4 highlighted parameters above to get the noise-reduced version sounding right…

  • The point where the filter starts to drop
  • The time the filter takes to get to minimum gain, and the shape of the curve (above option is using the Reaper ‘fast start’ point shape)
  • The frequency and bandwidth/Q of the filter

If it’s an excessively noisy sample, a low pass filter might also work better than a high shelf.
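For the curious, the whole automated-shelf trick can also be approximated offline… below is a rough numpy sketch that recomputes an RBJ high-shelf biquad per block from a gain envelope, standing in for the automated ‘Gain’ control in ReaEQ (the 6kHz frequency, block size and envelope values are arbitrary examples, not my actual settings):

```python
import numpy as np

def high_shelf_coeffs(gain_db, freq, fs):
    """High-shelf biquad from the RBJ 'Audio EQ Cookbook' (slope = 1),
    normalised so that a0 = 1."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2 * np.pi * freq / fs
    cosw, sinw = np.cos(w0), np.sin(w0)
    alpha = sinw / 2 * np.sqrt(2.0)
    two_rt = 2 * np.sqrt(A) * alpha
    b = np.array([A * ((A + 1) + (A - 1) * cosw + two_rt),
                  -2 * A * ((A - 1) + (A + 1) * cosw),
                  A * ((A + 1) + (A - 1) * cosw - two_rt)])
    a = np.array([(A + 1) - (A - 1) * cosw + two_rt,
                  2 * ((A - 1) - (A + 1) * cosw),
                  (A + 1) - (A - 1) * cosw - two_rt])
    return b / a[0], a / a[0]

def automated_shelf(x, fs, gain_env_db, freq=6000.0, block=256):
    """Run the shelf block by block, recomputing coefficients from a
    per-block gain envelope and carrying filter state across blocks --
    a rough offline stand-in for automating the filter's gain in a DAW."""
    y = np.empty_like(x)
    z1 = z2 = 0.0
    for i, g in enumerate(gain_env_db):
        b, a = high_shelf_coeffs(g, freq, fs)
        for n in range(i * block, min((i + 1) * block, len(x))):
            yn = b[0] * x[n] + z1               # transposed direct form II
            z1 = b[1] * x[n] - a[1] * yn + z2
            z2 = b[2] * x[n] - a[2] * yn
            y[n] = yn
    return y

# Hypothetical envelope: shelf flat over the attack, then quickly
# pulled down to -24dB to tame the hiss in the tail
fs = 44100
env_db = np.concatenate([np.zeros(4), np.linspace(0, -24, 4), np.full(8, -24.0)])
```

In practice doing it in the DAW is easier, since you can audition the envelope shape against the sample in real time.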

In this case, the same sample with the above settings turned out like this…

The ‘noise-reduced’ sample

… that’s a considerable amount of noise reduction, but it has maintained all the attack and general timbre of the sound.

The ‘Rule of 3s’ for Incidentals

A month or so ago, I read a MusicRadar article entitled ‘Robin Schulz’s top 10 tips for producers‘.  I hadn’t heard of Robin until that point, but the advice he was giving really resonated with me… the stuff he covered was generally also the stuff I tend to think of as key techniques for producing successfully.  I checked out one of his tracks on YouTube too… ‘Prayer In C‘.  The track has an incidental build almost right at the start (0:05)… but surprisingly for commercial music, the texture of this build is quite thin and sparse… consisting mainly of just a lone white noise sweep, and it tends to come in a bit predictably.  It’s similar to the use of this sound that you find in more amateur, ‘bedroom’-type productions.  I’m not at all trying to be critical of the production (I wouldn’t have a leg to stand on, as obviously Robin is at least 1000x more famous than me!)… but it’s interesting to observe, because it does stand out somewhat from most highly polished and produced tracks from big names.

It got me thinking about the things that distinguish ‘bedroom’-sounding productions from those of big names on big labels… and one of the major differences from my perspective is the use and depth of incidental sounds.  My general impression is that highly ‘professional’ sounding tracks tend to have multiple layers of complexly woven and sculpted incidental sounds… the kind of thing that adds a subtle but critical sheen of additional depth and detail to a track.  A really good example of this that comes to mind is Sasha’s ‘Invol2ver’ album.  The interesting thing about these types of incidentals is that you don’t usually explicitly hear them when listening to a track… but if they’re removed, suddenly something major is missing and the track sounds much less polished and professional.

Along these lines, for all the tracks I worked on during 2016, I adopted an approach with incidental sounds which I’ve since come to refer to as ‘the rule of 3s’.  That is… for any major build or transition point in a track, I try to have at least 3 separate layers of incidental sounds happening at the same time.  The reason for this… having just 1 or 2 layers of incidentals at such points seems to end up being too obvious… the listener can distinguish each of the layers and the build becomes somewhat predictable.  But for me, 3 layers is the sweet spot where suddenly the layers of incidentals, along with whatever instrument sounds are in the track itself, combine to make it difficult for the listener to be conscious of all the layers individually… the sound becomes harder to predict and hence more interesting.

So based on this thinking, I try to make sure I use at least 3 layers of incidental sound at any major build or transition in a track.  You have to temper that according to the style as well… progressive-type tracks tend to do well with more layers of incidentals than harder, more minimal styles… but I think 3 layers is a good baseline to follow.  As a typical default, I would have those 3 layers consist of…

  • A longer white noise swell-type sound
  • A shorter swell (e.g. reversed cymbal)
  • Some type of percussion, possibly through a delay effect

…and make sure that each layer has individual panning to control the side to side depth, as well as EQ automation to control the front to back.

As an example, the clip below contains the soloed incidental parts from the build starting at 2:43 in Summer Wave…

This actually contains about 5 layers on the build/swell side (3 swell-type sounds plus 2 percussive), and 2 or 3 crash cymbal-type sounds layered together… that’s leaning towards being a bit excessive, but also gives the track a lot of depth, and that more ‘professional’ sound I mentioned earlier (and given the more progressive style it lends itself to greater depth of incidental sounds).

If you’re a producer striving to make your tracks sound more professional or polished, I’d highly recommend you look at your use of incidental sounds… and if you’re only using a couple of layers, consider thickening the texture and applying the ‘rule of 3s’.


(Disclaimer: I acknowledge that these days the term ‘bedroom productions’ has no correlation with being amateur or unprofessional… as many famous commercial productions are indeed conceived and realized in a bedroom!)

Being Guided By Your Ears Not Your Eyes

With current DAW software, we have an unlimited ability to use automation to hone aspects of a sound to a micro level.  And there is a huge difference in the detail of automation that’s possible today, even compared to relatively recent advancements in hardware technology (like flying faders on consoles)… these days it’s simple to set up unlimited complex routings of automation, based not just on user-defined curves and patterns, but fed by audio from other tracks and sound sources.

The screenshot below shows a section of the Reaper project for ‘Cantana 1‘… this is the automation on a single reverse cymbal swell in one part of the track, automating volume and pan, plus the frequency and gain of a high-shelf filter.  Typically I would have at least 2 or 3 such sounds in parallel, at 20-30 different places throughout the track.


As with many technological improvements though, endlessly flexible automation can be a blessing and a curse.  Recently I’ve found that although being able to automate sound changes in such fine detail can make it easier to achieve highly professional-sounding productions, having such a detailed visual representation of automation can lead to an over-dependence on visual cues, and stop you just using your ears and listening.  I find this particularly when creating the automation on incidentals like those in the screenshot above… having a visual instinct that automation curves should be linear or evenly progressive, and then tending to let that instinct override whether that type of curve actually sounds right in context.  The ‘shape’ of automation at a given point should be driven by the other sounds at that point, and not by having a curve which looks ‘nice’.

I find this also when auditioning parts of tracks while watching the main ‘arrange’ page of a DAW… it’s very easy to anticipate upcoming changes and parts from their depiction on this screen, and this can prevent you from having an objective, listener-centric opinion on those parts and changes.

I’ve also read countless interviews with pro producers in Sound on Sound and online who say similar things, and often try and switch off DAW screens when tracking and mixing to avoid this.

As this year’s progressed, and I’ve trusted my ears more and more, I’ve become much more aware of how distracting visual cues in a DAW can be, and have tried more and more to ignore them and focus solely on what I’m hearing.