Anticipation

I touched on anticipation in a previous post… i.e. the idea of adjusting aspects of your track early in the production phase, in anticipation of the effect of subsequent phases and processes. I hadn’t really considered this concept much until I wrote that post, but it’s recently got me thinking more about anticipation, and where else I could use it to improve my tracks and production process.

My tracks are becoming too complex. My recent Cantana tracks are a case in point. I wrote both of them with the intention of adopting a kind of ‘back to basics’ / ‘KISS’ type of approach… i.e. minimal style, fairly sparse instrumentation with few sound layers… the idea being to see how quickly and simply I could put a reasonable-sounding track together using the skills I honed over 2016. Ironically, they turned out to be the exact opposite… among the most time-consuming and complex (in terms of production and process, if not sound) tracks I’d ever written. Reflecting on this, I think part of my problem is not anticipating things properly… Cantana 2 in particular ended up with around 5-6 layers of low-level background instrumental and percussion sounds, intended to add some depth and complexity to the sound. I added all these layers in the very early stages of writing, before I’d added any spatial effects like reverb and delay. The problem was that when I did come to add reverbs and delays towards the end of the mix process, doing so just made the mix and general sound too full. There were already so many of these small parts and sounds in the background of the track that there was not enough room for them to co-exist with the reverbs and delays. To fix it I basically ended up just pushing down the level of all these small parts in the mix… to the point that I wonder whether removing them completely would have sounded any different.

My mistake was not anticipating the effect of reverbs and delays at the early stage. I think I was so caught up in worrying that the mix sounded too thin that I just kept adding more and more of these small layers, not realising that a lot of the space they were consuming would have been consumed by spatial effects anyway. And it caused me headaches in the long run, because as a result of all these layers, the track turned out very dense in terms of both frequency content and dynamics, making it more difficult to mix (i.e. to compress, EQ and level, and to get enough separation between the different layers).

Of course, another potential remedy to the problem is to actually build in the reverbs and delays right from the start. I expect this is the approach a lot of other producers would take. For me, for some reason, I do like having some distinction between ‘phases’ of the production process… I think also because I’m working on an older PC, and there’s a benefit to consolidating and sharing CPU-hungry effects (especially reverb) between multiple sound elements (which is easier to do in a single stage).

Going forward I’m going to make a conscious effort to allow the track to sound thinner and more sparse than I think it should in the early stages of coming up with ideas… anticipating that these gaps will be filled by the addition of things like reverb and delay later in the process. In fact, to help enforce that idea, I’m going to try and compose the track 95% using only Korg Volcas, which implicitly limits the number of layers I can create.

Hopefully that will allow me to finally move towards tracks which are simpler, and quicker to finish.

Improving the Clarity of a Mix

Isolating and bringing out individual parts when mixing, and improving the overall clarity of a track, can be challenging as an amateur producer. It’s easy to mistakenly believe that there is a single ‘magic’ solution that, through lack of experience, you don’t know about. The reality is that magic solutions rarely exist, and achieving improved mix clarity is usually the result of a series of small changes which sound insignificant in isolation, but combine to make a fairly major change to the mix.

Cantana 2 was the first track I’d written which used an acoustic sound (i.e. piano) for its main theme. This presented some new challenges in terms of getting a fairly dynamic acoustic sound to sufficiently stand out over other parts. In this post I’m going to go through a series of small changes I used which helped me to get the piano sitting much more prominently in the mix, to separate it from the pad sound in the track, and to improve the overall clarity of the mix.

The starting point is a clip from the very early stages of sequencing and mixing the track. At this stage there was little delay, and no reverb across the mix (hence the fairly raw sound compared to the final version on SoundCloud)…

Initial clip

The first step to try to bring out the piano was to apply a compressor to it. I used Waves Renaissance Axx with the following settings…

improving-the-clarity-of-a-mix-1

…which evened out the general level of the piano and made it a little more ‘intelligible’ (apologies for the loss of one channel of the pad during the first part of the clip)…

Compression on piano

Next I applied EQ to both the piano and pad sounds, using the following curves. Notice that the two curves are complementary, in that they accentuate different frequency ranges in each sound…

improving-the-clarity-of-a-mix-2
Pad EQ
improving-the-clarity-of-a-mix-3
Piano EQ

EQ on piano and pad
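As a side note, the maths behind this kind of complementary EQ’ing is easy to sketch. Below is a rough Python/NumPy illustration using the standard RBJ ‘cookbook’ peaking-EQ biquad… note the 2kHz centre frequency, ±3dB gains and Q of 1 are purely illustrative (they’re not the settings from the screenshots above)… it just demonstrates the idea of boosting one sound in exactly the band where the other is cut…

```python
import numpy as np

def peaking_coeffs(f0, gain_db, q, fs=44100.0):
    """RBJ 'Audio EQ Cookbook' peaking-EQ biquad coefficients."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def gain_at(f, b, a, fs=44100.0):
    """Magnitude response (in dB) of a biquad at frequency f."""
    z = np.exp(-1j * 2.0 * np.pi * f / fs * np.arange(3))
    return 20.0 * np.log10(abs(np.dot(b, z) / np.dot(a, z)))

# Complementary curves: boost the piano exactly where the pad is cut.
piano_b, piano_a = peaking_coeffs(2000.0, +3.0, 1.0)
pad_b, pad_a = peaking_coeffs(2000.0, -3.0, 1.0)
```

Sweeping `gain_at` over a range of frequencies would draw the two mirrored curves… the point being that the piano gains presence in a band where the pad has stepped aside.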

Next I used Voxengo MSED to slightly reduce the sides component of both sounds. Often, to separate two sounds, you can use opposing settings on each (i.e. one wider and one narrower). In this case I felt that both the piano and pad were a bit too wide and were getting lost against the bass and drums, and the pad especially was dropping too much level when the track was monoed. I reduced the sides component of the pad and piano by 2.6dB and 2dB respectively…

Reduced sides component on piano and pad
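For anyone curious what ‘reducing the sides component’ actually does, here’s a rough NumPy sketch of the underlying mid/side maths (not MSED’s actual code… just the textbook encode / attenuate / decode idea)…

```python
import numpy as np

def reduce_sides(left, right, sides_cut_db):
    """Encode L/R to mid/side, attenuate the side signal by a given
    number of dB, then decode back to L/R."""
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    side *= 10.0 ** (-sides_cut_db / 20.0)  # dB cut -> linear gain
    return mid + side, mid - side

# A mono signal (identical L/R) has no side component, so it passes
# through untouched -- narrowing doesn't change the centred content.
mono = np.ones(4)
l, r = reduce_sides(mono, mono, 2.6)

# A fully 'sided' signal (L and R out of phase) is simply attenuated.
wide_l, wide_r = reduce_sides(np.ones(4), -np.ones(4), 6.0)
```

The mono case is why narrowing helped the pad hold its level when the track was monoed… only the out-of-phase (side) content is turned down.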

I felt like there was still too much ‘mud’ in the mix, and a big contributor to this was that both these main sounds were competing in the low/mid range. High-pass filtering the piano made it sound a bit synthetic and unnatural, so instead I added a high-pass filter at around 400Hz to the existing EQ curve on the pad…

improving-the-clarity-of-a-mix-4

High-pass filter on pad

Using compression sidechained to the bass drum on instrument sounds has been a well-used technique in electronic styles for a while. In this case I used Noisebud’s ‘Lazy Kenneth’ to simulate the effect of sidechained compression on the pad, to make a bit more general ‘space’ for the other sounds…

improving-the-clarity-of-a-mix-5

(Simulated) sidechained compression on pad
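This kind of ‘simulated’ sidechain is essentially just a periodic gain envelope synced to the kick. As a rough sketch of the idea (I don’t know Lazy Kenneth’s internals… the depth and release values here are invented for illustration)…

```python
import numpy as np

def ducking_gain(num_samples, kick_samples, depth_db=6.0,
                 release_samples=4410):
    """Build a gain envelope that dips at each kick position and
    recovers linearly (in dB) over `release_samples` -- a crude
    stand-in for what a sidechain-style ducker applies to the pad."""
    gain_db = np.zeros(num_samples)
    for k in kick_samples:
        end = min(k + release_samples, num_samples)
        ramp = np.linspace(-depth_db, 0.0, end - k)
        gain_db[k:end] = np.minimum(gain_db[k:end], ramp)
    return 10.0 ** (gain_db / 20.0)  # dB -> linear gain

# Tiny example: one 'kick' at sample 0, full recovery by sample 9.
env = ducking_gain(10, [0], depth_db=6.0, release_samples=10)
```

Multiplying the pad’s audio by this envelope ducks it at each kick and lets it swell back in between… which is the extra ‘space’ being made for the other sounds.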

I was still not happy with the clarity of the pad sound. When creating and auditioning it in isolation I’d used a low-pass filter with quite a lot of resonance. This sounded good on its own, but was not sitting well in the mix. It was one of the filter modules in Kontakt, and I reduced the resonance amount from 46% to 31% (and made a similar, proportional change in places where the resonance was automated)…

Reduced pad filter resonance

The final step in this series of changes was to try and further separate the pad and piano by using volume automation to drop the pad level by 1dB whenever the piano was playing…

improving-the-clarity-of-a-mix-6

Volume automation on pad
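This kind of automation is conceptually trivial, but worth quantifying: a 1dB drop is a linear gain of roughly 0.89, i.e. subtle enough not to be heard as movement. A minimal sketch (the boolean ‘piano playing’ mask is obviously hypothetical)…

```python
import numpy as np

def duck_pad(pad, piano_active, drop_db=1.0):
    """Apply a fixed dB cut to the pad wherever a boolean mask says
    the piano is sounding -- the same idea as drawing volume
    automation on the pad track."""
    gain = np.where(piano_active, 10.0 ** (-drop_db / 20.0), 1.0)
    return pad * gain

pad = np.ones(6)  # placeholder pad audio
mask = np.array([False, False, True, True, False, False])
out = duck_pad(pad, mask)
```

The pad only gives way while the piano actually needs the room, which is why a cut this small can still buy audible separation.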

Ultimately I used further tweaks and processing after this to arrive at the final mix, but this series of steps shows the main changes I made to try and separate out the pad and piano. Listening to the first and the last clip, there’s a significant difference in the overall clarity of the mix (and even more so comparing the first clip to the final mix on SoundCloud).

Hopefully this gives some insights and ideas on techniques to improve your mixes, and demonstrates that usually it’s the sum total of multiple subtle changes that gives a significant overall difference in the clarity and quality of a mix.

 

Adjusting Effect Levels for Mix/Bus Compression

I spent a few hours yesterday doing final bus compression for the track I’m currently working on. Approaches to and techniques for bus compression were one of the things I learnt most about during 2016, and yesterday I had a kind-of ‘lightbulb’ moment, which will hopefully lead to better results in this area going forward.

I’m a ‘reluctant participant’ in the whole competitive levels/loudness wars thing. Fundamentally I like the groove, emotion, impact, etc which a decent dynamic range can impart on a track. But at the same time I understand the need to achieve an overall loudness level that’s similar to other tracks in the same genre (especially because not doing so simply makes your music difficult for DJs to mix).

In the past, I’d always equated greater amounts of bus compression with a loss in clarity. To some extent this is true, as compression will narrow the dynamic range of the sound and hence simply reduce the ‘depth’ of volume variation available. As a result, I’d always found that compressing the entire mix necessitated a compromise: getting closer to competitive levels meant sacrificing some detail and clarity.

About halfway through last year I had a mini breakthrough of sorts, when I realised certain settings on bus compressor plugins can have a big effect on the quality of the resulting audio. Specifically I usually use Cytomic’s ‘The Glue’ as the first stage in the bus compression chain, and I found that simply setting the oversampling rate to the recommended or higher levels (4x or more when auditioning) gave far clearer audio quality than the default lower settings.

For my current track I had spent a bit longer than usual honing the reverb plugin settings, and fine-tuning the reverb send levels. After this I was really happy with the result… it had a nice balance of good depth/space without sounding too ‘washed out’, and seemed to translate well to several different sets of speakers and headphones. But yesterday it was a bit disappointing to have some of this clarity and balance lost when I started pushing the final mix through bus compression. When I listened closely it wasn’t so much a by-product of compression, but more that the levels of the reverb and delay effects were stronger. When I thought about it, the reasoning was obvious… I’d squashed down the top 3-6 dB of the volume range, so obviously sounds down at -15 to -20dB (like the reverb layer) had been effectively pushed up by a similar amount.

I usually do final bus compression in a separate Reaper project from the mixing, using just the final stereo mixdown as a source track (my aging PC can’t handle multiple reverb plugins and CPU-hungry bus compression at the same time). So I went back to the mix project and rendered another version of the stereo mix with the reverbs and main delays turned down around 1.5dB. Running this new version through the same compression chain resulted in a much clearer mix… it sounded a lot more like the original stereo mixdown… just louder (which is exactly what I was trying to achieve).

Anyway, in hindsight I’m a bit surprised it’s taken me this long to figure out this technique (the basic point of compression after all is to reduce dynamic range), but I’m going to experiment a bit more, and hopefully end up with a lot cleaner, clearer final mix than for past tracks.

Another way to potentially prevent the issue could be to ‘mix into’ a compressor or limiter during writing/sequencing/mixing. This is a somewhat unorthodox technique historically, but it seems to have gained popularity in the last few years (I seem to have read a lot of articles recently where people discuss working this way). The idea is to put a limiter/compressor on the master bus right from the early stages of writing (using generic/default settings close to what you’d usually use for final bus compression). This way you’re always evaluating level balance with compression already ‘baked in’. I don’t usually use this technique because, for some reason, I like to keep a clear separation between the mixing and final ‘mastering’ stages… but based on yesterday’s experience I can definitely see the merits, so I may try it in a future track.

Mix Issues – Prevention Rather than Cure

Thanks to the rapid development of DAWs and plug-ins over the last 5-10 years, as producers we have close to unlimited flexibility in terms of audio processing.  Even my very old (7+ years) music PC is capable of running tens to hundreds of simultaneous plugins in a track’s project.  Added to this, the internal digital routing in a DAW, and the ever-increasing quality of plugins, mean that chains of tens of plugins are not only a reality but often the norm in putting together a track.

But with this flexibility can come a complacency to ‘fix problems later’ with plugins, rather than dealing with them at the source.  I’ve read numerous interviews with pro producers who emphasise the importance of getting the sound right in the first instance… particularly with things like tracking… finding a good sound through good mic selection and placement, rather than fixing it with EQ in the mix.  Yet it can be easy to forget or ignore this advice, given how simple it is to throw extra plugins in an effect chain.

While writing ‘Dystopia‘, I ran into this kind of situation… a problem which could have been fixed by additional tweaking, or extra layers of compression… but which actually had a simpler, and probably ultimately better-sounding, solution at the source.

The track has the following background percussion pattern in various sections…

Within an 8-beat phrase, the first percussion ‘hit’ occurs on the third 16th beat, and has a quieter and lower-pitched ‘grace note’ a 16th before that.  The below screenshot shows the MIDI sequence for the pattern, with the grace notes highlighted…

mix-issues-prevention-rather-than-cure-1

At the point of final mixdown and applying bus compression, I noticed that there were occasional waveform spikes at the points of these grace notes… the highlighted peaks on the below waveform show an example…

mix-issues-prevention-rather-than-cure-2

These spikes were not only quite strong (almost hitting 0dB), but occurred on a rhythmically odd (syncopated) beat of the bar… i.e. the second 8th beat of the bar… at the same point as the offbeat hi-hat sound.  When I was trying to apply compression, the strength and syncopation of these spikes were causing the same type of uneven, pumping compression I mentioned in my second bus compression article.  The problem could potentially have been cured at the final mix stage by applying a limiter or a fast-acting compressor at the start of the effect chain.  But instead, I went back to the MIDI sequencing and took a look at the part itself.  Considering the note at the second 8th beat was just a grace note, and that it occurred on the same beat as a rhythmically far more important part (i.e. the offbeat hi-hat), the MIDI velocity of that note seemed quite high (at around 81).  Hence, I tried simply reducing the velocity of the grace note to about 70 as per the below screenshot…

mix-issues-prevention-rather-than-cure-3

…and this simple change benefited the mix in 3 ways…

  • It left more room for the offbeat hi-hat, and hence made the hi-hat clearer.
  • It wasn’t in any way detrimental to the in-context sound of the percussion part (actually, I think it sounded better after the change).
  • It had the effect of removing those waveform peaks, and hence let the compressor work more smoothly and musically (see the ‘after’ waveform below)…

mix-issues-prevention-rather-than-cure-4

Ultimately, a simple change of MIDI velocity fixed the problem, and was far easier to implement than extra layers of limiting and compression would have been (and it also avoided the additional side-effects that limiting and compression could have introduced).
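To put a rough number on why the velocity drop from 81 to 70 tames the peak: with a typical squared velocity-to-amplitude curve (the exact mapping varies per instrument and patch, so this is purely illustrative), the change works out to roughly a 2.5dB cut at the source…

```python
import math

def velocity_to_gain(velocity, curve=2.0):
    """One common (but synth-dependent) velocity response:
    amplitude proportional to a power of normalised velocity.
    Illustrative only -- real instruments vary."""
    return (velocity / 127.0) ** curve

# dB change from dropping the grace note's velocity from 81 to 70
drop_db = 20.0 * math.log10(velocity_to_gain(70) / velocity_to_gain(81))
```

A couple of dB at the source was enough to stop the spikes driving the bus compressor, with no extra plugins in the chain.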

Clips of the ‘before’ and ‘after’ full mix are below…

Before

After

The interesting take-home from this experience was to always think a bit ‘out of the box’ with regard to mix problems… to consider whether there’s a simple preventative measure that could avoid or correct the problem in the first instance.  In 99% of cases, as the pro producers advise, such a prevention is probably going to be easier and more effective than the equivalent cure.

Getting Comfortable With Your Environment 2


I arrived back in Tokyo last week, and had my first day back into writing today, after about a two-week break over the new year.  As I wrote about over the last few posts, for whatever reason, I wasn’t 100% settled working in Sydney this time, and although I came up with a couple of good ideas, I didn’t progress with them as far as I would have liked.  It’s a bit strange, because it was the second period I spent working in Sydney in 2016, and the first one was actually quite fruitful and productive.

However, getting back home and working in the place I’ve become accustomed to over the last year, it became clearer why I wasn’t so productive in Sydney this time… it broke down to two basic things… sound and comfort…

Sound, because I realised that I’ve really grown to know and trust the sound from my monitors and studio room in Tokyo.  After a year of working in here every day, I just know how the sound will translate to the final mix, and after having mixed a number of tracks that I’ve been happy with, it just boils down to confidence, and the resulting speed with which you can make tonal changes and mix decisions.  I just didn’t have the same confidence in Sydney… I knew there were a lot of parts that I couldn’t judge properly, and I either kept changing them back and forth, or knew that I would have to fix them when I got home… and this led to everything taking longer, and a reduced ability to commit to a part and then move on to the next stage.  The room sound was probably a big contributor to this too…  I blogged before about the uneven bass response in Sydney, and as well, on returning, I noticed as soon as I first walked into my apartment just how much lower the ambient noise is here… it’s likely a lot to do with the construction (i.e. my apartment here is solid concrete on the walls, floor and roof, as compared to drywall and wooden floors in Sydney).  OK, admittedly domestic construction materials are not the most interesting thing in the world to blog about, but they are important from a producer’s perspective, as they make a huge difference to the room acoustics, and hence how well you can hear what you’re working on.

In retrospect, the other big factor in my lack of progress was comfort.  Sitting at my usual desk and comparing, I realised that in Sydney…

  • Screens were too far away and too high… felt like they were ‘looking down on me’ as I tried to work
  • Not enough leg room under the desk
  • The chair wasn’t as comfortable

…granted, these are small (somewhat ‘precious’) things in isolation, but together they made a big difference to the level of comfort, and hence, I think, to my propensity to be creative.  It was just nice today to slip back into familiar and comfortable surrounds, and in the couple of hours I worked today, I did as much as I would have in a whole day last month.

It’s fairly obvious that a good monitoring environment is crucial to your ability to mix and produce well (as I’ve now re-proven to myself), but more so, I’ve learnt a lot about the importance of subtle physical comforts in a space, and how they can really help or hinder your creativity.

The First Listen of the Day

Recently I wrote about how our bodies are good at adjusting to make things we hear repeatedly sound normal.  In the same way that this can cause a skewed frequency response to sound balanced, it can also desensitize us to things that don’t sound good.  For this reason, I find that the first time you listen to your work in the day is really good for identifying things that maybe sounded right yesterday, but are actually wrong (in the cold light of day! 🙂 ).  For me this affects numerous elements of tracks, but the two most common are…

  • Instrument or percussion parts which sounded OK yesterday, but don’t fit, or just sound wrong on the first listen
  • Mix levels set in the previous day’s session, where parts which seemed to fit properly yesterday are now way too loud

For that reason, I find it’s critical to pay attention to your instincts during the first listen of the day.  Don’t be afraid to make seemingly excessive changes based on your ‘first listen’ instincts, even if you’re deviating far from decisions you made the previous day (I find this often happens during the mixing phase, where I agonised over differences of a fraction of a decibel the previous day, and end up cutting the whole part by several dB based on the first listen).

If you don’t act based on those instincts, it’s amazing how quickly those ‘wrong’ things start sounding right again, and as a consequence might end up sounding ‘wrong’ to a listener who’s hearing your track for the first time.

Finding Creative Solutions to Mix Problems

Last month I wrote about how your ‘ear’ for identifying and fixing problems improves significantly when you dedicate yourself to producing full time.  Recently I had a situation which showed exactly this, and where my solution for fixing a problem was far different (and much more successful) than I would have come up with 9 months ago.

When I was writing ‘Cantana 1‘, I had come up with a patch for the main synth ‘stab’ sound…

The patch was made in V-Station using some FM between two of the oscillators, and I was fairly happy with the sound… I thought the FM gave a cool kind of gritty edginess to it. But when it came to making the sound fit in the mix, it was really difficult to get it to properly stand out… it just seemed to get lost behind the other instruments and percussion.

I’d faced the same problem in the past (often with V-Station patches), and in those cases I’d often used large mid-range EQ boosts to try and correct the problem. But this had limited success, often making the sound a bit ‘bloated’ and muddying up the mix. When faced with this problem in the past, it could quite possibly have led me to abandon the sound altogether, just because I couldn’t get it to mix nicely. I guess my thinking was along the lines of “it’s not fitting well, and I don’t know what else to do to fix it, so I’m just going to get rid of it”.

However, armed with the experience of the past year, plus the additional confidence that comes with that, I looked at the problem a bit more analytically…  The chord and the original patch I was using were quite low in terms of pitch, and as the FM was turned up quite high, there were a lot of ‘fizzy’ harmonics in the sound.  Hence, it seemed that the problem was a simple lack of mid-range frequency content… in the context of the track, the bass line and percussion were already supplying the low and high frequencies, and I needed this sound to ‘fill in the middle’ and provide the main theme.  But due to the patch and chord used, the mid-range was quite lacking… EQ would likely not have fully solved the problem either… you can’t EQ frequencies that aren’t in a sound to begin with.

In this case, I used a second instance of V-Station with a similar patch, but one with no FM and whose oscillators were much more centred around the mid-range.  It had a much cleaner and more rounded sound…

I fed both V-Station instances from the same MIDI track, and blended the V-Station audio outputs.  The result was as follows…

Whilst in isolation I actually prefer the original FM patch, the blended version was much easier to fit into the mix, and saved a lot of headaches trying to correct things with EQ (and potentially tedious automation of the EQ to adjust to the filter sweeps used on this instrument).

In retrospect, it was nice to see that I’d discovered more creative solutions to problems, and was able to analyze a problem to provide a solution, rather than giving up… my thinking was more along the lines of “there’s a problem here… now what’s causing it”, and this led to a preventative solution, rather than the corrective (and likely less successful) solution of messing with EQ.  It shows that (as mentioned in the previous post) your mixing and producing skills can really improve with dedicated and regular practice.