Hello… you’ve arrived at the first post of chromaticsabatic!
The goal of the site is to provide practical, ‘how-to’-style advice on all aspects of writing and producing electronic music, backed up with real-world examples from my own tracks. I hope to cover technical aspects like sequencing, mixing, and applying effects, as well as the more artistic sides of the writing and production process.
I’ve been writing electronic music as a hobby since high school, and although I’ve always gotten a huge amount of enjoyment and satisfaction from producing my own stuff, trying to fit music in around full-time work never allowed me to focus enough to achieve the quality of production I wanted. So, at the end of 2015, I left my day job to write music full time. I’ve now had just over 6 months as a full-time musician and producer… it’s been a challenging experience, requiring dedication and perseverance, but at the same time hugely rewarding to be able to immerse myself in something I love doing. Through hard work and a lot of trial and error, I’ve learned a huge amount about the music production process… particularly with regard to mixing (EQ, compression, spatial effects), and the final ‘polishing’ of a track to get it sounding as close as possible to commercial releases. As the year’s progressed, I’ve documented a lot of what I’ve learnt for personal reference, but being able to share this knowledge with a wider audience will make the whole experience even more worthwhile. Hence I’ve started this site as a vehicle to share the techniques I’ve learned, and to continue documenting things I discover along the way.
As far as music styles go, I try not to confine myself to specific sub-genres of electronic music, but what I’m writing at the moment sits somewhere between house, progressive, minimal, and techno styles. Samples of my music are available on my SoundCloud profile.
My greatest satisfaction from this process will be if readers can use these techniques to develop and improve their own music. I’m really pleased to be able to host this site, and I hope you get a lot from it.
I’ve blogged before about my fondness for the Korg Volca synthesisers, and also about Volca jam sessions. I love the simplicity and immediacy of the Volca synths, and through these periodic jam sessions, I’ve come up with 3 or 4 ideas which could easily be turned into decent full tracks.
The Volca Bass sounds awesome recorded directly through its headphone output, and I’ve already used Volca Bass sounds recorded this way in tracks (e.g. Dystopia). The Volca Beats, on the other hand, presents some challenges. I’m currently trying to adjust to weaving music-writing sessions into full-time work, and at the moment I’m working on turning the first of those 3 or 4 Volca jam session ideas into a full track. This means taking the sounds from the Volca synths and recording or replicating them in Reaper (and Kontakt, in the case of the Volca Beats).
The Volca Beats has some good and bad points. Internet forums are full of people complaining about the snare drum sound, and I agree it’s the low point of the synth (although by turning the decay right down it can become a pretty good rimshot). The cymbals are also pretty rough-sounding… OK through the Volca’s speaker, but fairly grainy when recorded into a DAW. On the plus side, the 808-style bass drum sounds huge recorded through a DAW with longer decay settings, the clap sound is quite good, and the clave is very versatile. The problem I’m having at the moment is replicating the sound produced through the Volca Beats speaker in a higher quality context.
It’s not just a simple case of recording the pattern through the Volca’s direct output… the resulting sound is a mile away from what you hear through the speaker. There are a couple of reasons for this. The first is the simple limitations of the speaker. I’ve mentioned this in a previous post, but the Volca’s speaker is tiny, and pushing heavy bass tones and sharp attacks through something less than 1″ in diameter results in all kinds of inaccuracies. There’s nothing much at all below 350Hz (to be expected), and forcing all these strong tones through a speaker that can’t properly handle them produces all kinds of (ultimately pleasant) side-effects… distortion, compression, and a big chunk of excess mid-range. The other, more important problem that makes the speaker sound hard to replicate is that the speaker significantly affects the decay of the drum sounds. One of the most interesting things I learnt during my ‘sabatic’ year producing was how small changes in the length and decay of percussive sounds can have a huge effect on the overall groove/feel of a beat… comparing the same Volca Beats pattern through the direct output and the speaker is a great case in point. Basically, the speaker tends to significantly shorten the decay of notes, so if you compose a pattern through the speaker, what you then record through the direct out ends up with longer decays which completely change the groove/feel of the pattern.
So, you’re left with the following situation…
You can’t really use the sound recorded through the direct out (at least on its own) as it sounds completely different to the speaker sound
You can’t use the speaker sound through a mic as it has no low end at all
The best you can do is to try to replicate the sounds you hear through the speaker in your DAW, and then rebuild the pattern from there (messing with decay and note lengths to try to replicate the groove/feel). I’ve had only mixed success with this so far… a lot of failures, and a lot of persistence required. The best result I’ve had in replicating the tonality is to use a layered combination of…
The Volca speaker sound recorded with a mic (the type of mic seems to matter a bit here too… I’ve had more luck capturing the ‘grittiness’ of the sound with an SM57 than with my usual condenser mic).
A similar synthesized drum sound to provide the low end and bolster the mids
Potentially some blending with a similar-sounding sample (e.g. an 808/909 hit) to give a bit more polish
… and in my last session I found that distortion is like a ‘secret sauce’ for getting a more present, compressed, and harmonically rich sound.
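The layering recipe above can be sketched in code. To be clear, this is just an illustrative Python sketch and not part of my actual workflow… it assumes three pre-aligned mono buffers as plain lists of floats, uses made-up gain values, and stands in tanh soft-clipping for a real distortion plugin:

```python
import math

def db_to_gain(db):
    """Convert a decibel value to a linear gain factor."""
    return 10 ** (db / 20.0)

def layer_and_distort(mic, synth, sample, gains_db=(-3.0, -6.0, -9.0), drive=2.0):
    """Sum three time-aligned mono buffers with per-layer gains, then
    apply tanh soft-clipping as a crude stand-in for a distortion
    plugin (it rounds off peaks and adds odd harmonics)."""
    g = [db_to_gain(d) for d in gains_db]
    mixed = [m * g[0] + s * g[1] + p * g[2]
             for m, s, p in zip(mic, synth, sample)]
    # Soft-clip: output stays within (-1, 1), peaks get compressed
    return [math.tanh(drive * x) for x in mixed]
```

The tanh stage is why distortion works as a ‘secret sauce’ here… it simultaneously limits peaks (a compression-like effect) and generates extra harmonics, which is roughly what the overdriven Volca speaker is doing.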
Trying to replicate these Volca Beats sounds and patterns has been a long struggle so far, but I need to keep pushing on. The ultimate reward will be that once I figure out the right way to do it, I can quickly turn the 2-3 remaining jam session ideas into full tracks… I’ll definitely post details of the techniques when I finally figure it out.
I’ve been meaning to post this for a couple of weeks, to catalogue a significant event in my journey as a full-time producer… what’s the event? Well, after 15 months working on music production full time (plus another 4-5 months part time), I’ve returned to my usual (and regular-paying!) work in IT.
I felt it was appropriate to write a post to ’round off’ that part of my experience. Even when you’re doing something you love every day, it’s not always easy… remaining self-motivated, working alone and missing human interaction, and having to support myself with pretty much zero income were all challenges I faced along the way. BUT, to be able to be immersed in something I have passion for and find so rewarding is a once-in-a-lifetime opportunity. Although the time I can spend producing will be more limited going forward, the skills I’ve developed over the last 18 months will allow me to see tracks through from inception to completion far more quickly than I could have before. Plus there’s the satisfaction and confidence of knowing that I can make tracks that, from a production perspective, are on par with stuff released on labels… that was one of my main goals for the whole process, and massively satisfying to have achieved. Overall, I have no regrets about taking the risk of leaving well-paid work to follow my dreams, and in the absence of the unfortunate financial constraints (that one about needing money for food, shelter, etc… 🙂 ), I would keep producing full time.
To any readers who are thinking of leaving or taking a break from work to follow a creative pursuit, I’d really encourage you to take a chance and try to make it happen. I was surprised that a lot of the creative skills and thought processes I honed during 2016 are equally beneficial when applied to my usual work in software development… growth in my profession didn’t halt even though I was working in a very different discipline.
Although technically the ‘sabatic’ part of this blog is over for now, I will definitely continue to write and produce music in my spare time, and want to keep posting any interesting aspects of music production I discover going forward.
And a final note to say ‘thank you’ to the readers who have been following my journey over the last 18 months… It’s been exciting to see visitors from corners of the world I’ve never been to, and likewise to read about the fantastic artistic pursuits you guys are involved in. I hope you found something beneficial in the things I documented here, and for your readership and support, my sincere thanks.
I touched on anticipation in a previous post… i.e. the idea of adjusting aspects of your track early in the production phase, in anticipation of the effect of subsequent phases and processes. I hadn’t really considered this concept much until I wrote that post, but it’s recently got me thinking more about anticipation, and where else I could use it to improve my tracks and production process.
My tracks are becoming too complex. My recent Cantana tracks are a case in point. I wrote both of them intending to adopt a kind of ‘back to basics’ / ‘KISS’ approach… i.e. minimal style, fairly sparse instrumentation with few sound layers… the idea being to see how quickly and simply I could put a reasonable-sounding track together using the skills I honed over 2016. Ironically, they turned out to be the exact opposite… among the most time-consuming and complex (in terms of production and process, if not sound) tracks I’d ever written. Reflecting on this, I think part of my problem is not anticipating things properly… Cantana 2 in particular ended up with around 5-6 layers of low-level background instrumental and percussion sounds, intended to add some depth and complexity to the sound. I added all these layers in the very early stages of writing, before I’d added any spatial effects like reverb and delay. The problem was that when I did come to add reverbs and delays towards the end of the mix process, doing so just made the mix and general sound too full. There were already so many of these small parts and sounds in the background of the track that there was not enough room for them to co-exist with the reverbs and delays. To fix it I basically ended up pushing down the level of all these small parts in the mix… to the point that I wonder whether removing them completely would have sounded any different.
My mistake was not anticipating the effect of reverbs and delays at that early stage. I think I was so caught up in worrying that the mix sounded too thin that I just kept adding more and more of these small layers, not realising that a lot of the space they were consuming would have been filled by spatial effects anyway. And it caused me headaches in the long run, because as a result of all these layers, the track turned out very dense in terms of both frequency content and dynamics, making it more difficult to mix (i.e. to compress, EQ, and level, and to get enough separation between the different layers).
Of course, another potential remedy is to build in the reverbs and delays right from the start. I expect this is the approach a lot of other producers would take. For some reason, though, I do like having some distinction between ‘phases’ of the production process… and also, because I’m working on an older PC, there’s a benefit to consolidating and sharing CPU-hungry effects (especially reverb) between multiple sound elements (which is easier to do in a single stage).
Going forward, I’m going to make a conscious effort to allow the track to sound thinner and more sparse than I think it should in the early stages of coming up with ideas… anticipating that these gaps will be filled by the addition of things like reverb and delay later in the process. In fact, to help enforce that idea, I’m going to try to compose the track 95% using only Korg Volcas, which implicitly limits the number of layers I can create.
Hopefully that will allow me to finally move towards tracks which are simpler, and quicker to finish.
Isolating and bringing out individual parts when mixing, and improving the overall clarity of a track, can be challenging as an amateur producer. It’s easy to mistakenly believe that there is a single ‘magic’ solution that, through lack of experience, you don’t know about. The reality is that magic solutions rarely exist, and improved mix clarity is usually the result of a series of small changes, each insignificant-sounding in isolation, but combining to make a fairly major difference to the mix.
Cantana 2 was the first track I’d written which used an acoustic sound (i.e. piano) for its main theme. This presented some new challenges in terms of getting a fairly dynamic acoustic sound to sufficiently stand out over other parts. In this post I’m going to go through a series of small changes I used which helped me to get the piano sitting much more prominently in the mix, to separate it from the pad sound in the track, and to improve the overall clarity of the mix.
The starting point is a clip from the very early stages of sequencing and mixing the track. At this stage there was little delay, and no reverb across the mix (hence the fairly raw sound compared to the final version on SoundCloud)…
The first step to try to bring out the piano was to apply a compressor to it. I used Waves Renaissance Axx with the following settings…
…which evened out the general level of the piano and made it a little more ‘intelligible’ (apologies for the loss of one channel of the pad during the first part of the clip)…
Compression on piano
Next I applied EQ to both the piano and pad sounds, using the following curves. Notice that the two curves are complementary, in that they accentuate different frequency ranges in each sound…
EQ on piano and pad
Next I used Voxengo MSED to slightly reduce the side component of both sounds. Often, to separate two sounds you can use opposing width settings on each (i.e. one wider and one narrower). In this case I felt that both the piano and pad were a bit too wide and were getting lost against the bass and drums, and the pad especially was dropping too much level when the track was monoed. I reduced the side component of the pad and piano by 2.6dB and 2dB respectively…
Reduced sides component on piano and pad
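For anyone curious what a mid/side plugin is doing under the hood, the encode/attenuate/decode process is simple arithmetic. Here’s an illustrative Python sketch (the function name and buffer format are my own invention, not anything from Voxengo):

```python
def reduce_sides(left, right, sides_cut_db):
    """Encode L/R to mid/side, attenuate the side channel by
    sides_cut_db decibels, then decode back to L/R. Narrows the
    stereo image and improves mono compatibility."""
    g = 10 ** (-sides_cut_db / 20.0)  # attenuation as a linear gain
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2.0         # what both channels share
        side = (l - r) / 2.0 * g    # what differs, reduced
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r
```

One nice property visible in the maths: a mono signal (identical L/R) has a zero side component, so it passes through completely untouched… only the stereo ‘width’ information is reduced.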
I felt there was still too much ‘mud’ in the mix, and a big contributor was that both these main sounds were competing in the low/mid range. High-pass filtering the piano made it sound a bit synthetic and unnatural, so instead I added a high-pass filter at around 400Hz to the existing EQ curve on the pad…
High-pass filter on pad
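To illustrate what the high-pass is doing, here’s a minimal first-order high-pass in Python. Note this is a generic 6dB/octave RC-style filter for illustration only… the actual EQ plugin I used will have a different (and steeper) response:

```python
import math

def one_pole_highpass(signal, cutoff_hz, sample_rate=44100.0):
    """First-order RC high-pass: attenuates content below cutoff_hz
    with a gentle 6 dB/octave slope. A plugin EQ's high-pass would
    typically be steeper (12-24 dB/octave)."""
    if not signal:
        return []
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [signal[0]]
    for i in range(1, len(signal)):
        # output follows changes in the input, but decays on steady content
        out.append(alpha * (out[-1] + signal[i] - signal[i - 1]))
    return out
```

The design intuition matches the mixing decision here: steady low-frequency content (where the pad and piano were fighting) gets bled away, while the faster transient detail the ear uses to localise the pad survives.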
Compression sidechained to the bass drum has long been a well-used technique on instrument sounds in electronic styles. In this case I used Noisebud’s ‘Lazy Kenneth’ to simulate the effect of sidechained compression on the pad, to make a bit more general ‘space’ for the other sounds…
(Simulated) sidechained compression on pad
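The ‘simulated sidechain’ idea can be sketched as a gain envelope triggered at each kick position, rather than a real detector following the kick’s level. This is my own toy model of the concept, not how Lazy Kenneth actually works internally:

```python
def duck(signal, kick_positions, duck_db=6.0, duck_len=2000):
    """Apply a linear-release ducking envelope starting at each kick
    hit (sample offsets in kick_positions): drop the gain by duck_db,
    then ramp back to unity over duck_len samples."""
    floor = 10 ** (-duck_db / 20.0)   # ducked gain as linear factor
    env = [1.0] * len(signal)
    for pos in kick_positions:
        for i in range(duck_len):
            if pos + i >= len(signal):
                break
            # ramp from the ducked level back up to unity
            gain = floor + (1.0 - floor) * (i / duck_len)
            env[pos + i] = min(env[pos + i], gain)
    return [s * e for s, e in zip(signal, env)]
```

Because the envelope is driven by known kick positions instead of an audio detector, the ‘pumping’ is perfectly regular… which is often exactly what you want for this rhythmic space-making effect.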
I was still not happy with the clarity of the pad sound. When creating and auditioning it in isolation I’d used a low-pass filter with quite a lot of resonance. This sounded good on its own, but was not sitting well in the mix. It was one of the filter modules in Kontakt, and I reduced the resonance amount from 46% to 31% (and made a similar, proportional change in places where the resonance was automated)…
Reduced pad filter resonance
The final step in this series of changes was to try to further separate the pad and piano by using volume automation to drop the pad level by 1dB whenever the piano was playing…
Volume automation on pad
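Conceptually, that automation just multiplies the pad by a fixed gain factor (about 0.89 for -1dB) wherever the piano is active. A toy Python sketch of the idea (in Reaper this is of course drawn as an automation envelope, not code, and the per-sample boolean mask here is purely hypothetical):

```python
def automate_pad_level(pad, piano_active, dip_db=1.0):
    """Drop the pad by dip_db wherever the piano is playing.
    piano_active is a per-sample boolean mask marking where the
    piano part is sounding."""
    dip = 10 ** (-dip_db / 20.0)  # -1 dB is roughly a 0.89x gain
    return [p * (dip if active else 1.0)
            for p, active in zip(pad, piano_active)]
```

A 1dB dip is close to the threshold of audibility on its own, which is the point… the listener doesn’t hear the pad move, they just hear the piano more clearly.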
Ultimately I used further tweaks and processing after this to arrive at the final mix, but this series of steps shows the main changes I made to separate out the pad and piano. Listening to the first and last clips, there’s a significant difference in the overall clarity of the mix (and even more so comparing the first clip to the final mix on SoundCloud).
Hopefully this gives some insights and ideas for improving your own mixes, and demonstrates that it’s usually the sum total of multiple subtle changes that makes an overall significant difference to the clarity and quality of a mix.
Back in October last year I wrote a post about bus compression, and what at that time was my default effects chain for master bus compression. Some time’s passed since then, and my standard master bus effects chain has evolved further… using this chain and some new techniques, I’m now able to get pretty significant level increases (5-6dB) whilst still maintaining reasonable transparency and general mix clarity.
The biggest change to the setup is the final limiter plugin. Previously I was using Waves L1 for this, and whilst it’s a useful tool, and definitely very good for transparent limiting on individual sounds, it’s also getting pretty old (not sure exactly when it was released, but it was more than 12 years ago), and I find that across a whole mix it tends to add artefacts and lose some transparency when more than around 3-4dB of gain reduction is applied. After looking at a couple of different options as a replacement, I bought the T-RackS Stealth Limiter after reading a couple of favourable reviews. I have absolutely no regrets about this… the amount of gain reduction it can provide without any adverse artefacts is quite amazing (this was one of the reviews that convinced me, in case you’re interested).
I’ll go through the exact effect chain and settings I used on Cantana 2. In the previous post I discussed using Waves L1 as the first step in the chain, to even out transients and allow subsequent compressors to work more easily… this aspect hasn’t changed. In Cantana 2, I used a threshold of -3.5dB with the fastest release setting, just to catch and even out the really fast peaks…
Another significant change is that I now tend to use two instances of Cytomic’s ‘The Glue’ in series. The main difference between these two instances is in the attack and release settings… the first tends to use quicker settings in order to further (but more gently) even out peaks, whereas the second uses slower attack and release to provide a smoother, more general gain reduction.
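The difference between the fast and slow Glue instances comes down to how quickly a compressor’s detector follows the signal. A toy envelope follower illustrates the idea (the smoothing coefficients here are arbitrary illustrative values, not The Glue’s actual settings):

```python
def envelope_follower(signal, attack_coeff, release_coeff):
    """Track the magnitude of a signal with separate attack/release
    smoothing. Smaller coefficients react faster; this mimics how a
    compressor's detector behaves with fast vs slow attack/release."""
    env, out = 0.0, []
    for x in signal:
        mag = abs(x)
        # rising input uses the attack coefficient, falling uses release
        coeff = attack_coeff if mag > env else release_coeff
        env = coeff * env + (1.0 - coeff) * mag
        out.append(env)
    return out
```

A fast-settings instance (small coefficients) tracks individual peaks and so evens them out; a slow-settings instance responds only to the broad level contour, giving the smoother, more ‘general’ gain reduction described above.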
For the last couple of tracks, I’ve used parallel compression in the first Glue instance… using a high ratio and low threshold to really squash the sound, and then using the dry/wet control to blend it back with the original signal. The settings used for Cantana 2 are shown below…
One thing I’ve found with parallel compression used this way is that it’s easy to either compress the wet signal too much, or blend too much of it back in, and adversely change the level balance of different instruments in the mix. I had this problem with Cantana 2 initially, where the snare/clap sound in the original mix was still quite dynamic and peaky… this meant that the low threshold / high ratio settings tended to really squash the snare and introduce ‘unmusical’ pumping. When blended back with the original signal, the net effect was that the level of the snare dropped in the mix (it sounded very similar to dropping the snare level by 1-2dB in the original). I fixed this by going back to the original mix project and adding a bit more compression to just the snare track… this reduced the peakiness and allowed The Glue to compress the whole mix more smoothly. Still, I was surprised that the difference between what I considered the ‘right’ threshold setting and too much compression was only a couple of dB, as the two clips below show (in these, the first Glue instance is set 100% wet for demonstration… 12% of this was mixed back with the original signal in the final settings)…
Threshold: -19.8dB, Makeup gain increased to level-match. Notice the drop in level/clarity of the clap/snare, and pumping effect on the piano part.
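The wet/dry arithmetic behind parallel compression is worth spelling out. An illustrative Python sketch, with a deliberately crude static compressor standing in for The Glue (real compressors are time-varying, so this is only the broad-strokes picture):

```python
def hard_compress(signal, threshold, ratio):
    """Crude static compressor: reduce anything over the threshold
    (values in linear amplitude, not dB) by the given ratio."""
    out = []
    for x in signal:
        mag = abs(x)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if x >= 0 else -mag)
    return out

def parallel_blend(dry, wet, mix):
    """Blend a heavily compressed 'wet' signal back with the dry
    signal; mix=0.12 corresponds to a 12% wet setting."""
    return [d * (1.0 - mix) + w * mix for d, w in zip(dry, wet)]
```

With a 12% blend, most of what you hear is the untouched dry mix; the squashed wet signal mainly lifts the quiet material between peaks. It also shows why an over-squashed snare pulls the blended level down… the wet copy contributes a much lower peak than the dry copy would alone.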
The second instance of The Glue used a lower ratio, and slower attack and release (release set to the ‘Auto’ setting). This created a kind-of ‘continual’, general compression over the mix, to give a couple of extra dB of gain reduction…
One interesting comparison between the two instances of The Glue was the movement of the virtual gain-reduction needle in the UI. The needle in the first tended to move quite quickly in response to the dynamics and rhythm of the track, whereas the second tended to stay around the 3-4dB mark with little movement. Before buying The Glue I hadn’t used a hardware compressor (or plugin) with a needle showing the amount of gain reduction… but I’ve found it a really useful aid in understanding what the compressor is doing, and whether it’s imparting the effect you want.
The final link in the chain is the T-RackS Stealth Limiter, and for Cantana 2, I used the following settings…
This was quite a lot of limiting, and to be honest more than I would like to use, but necessary to be competitive with other tracks in the same style. The nice thing was that the progressive application of compression through the whole effect chain meant that I could use such aggressive settings in the Stealth Limiter whilst still maintaining reasonable clarity and transparency.
I find that applying compression like this to the master bus can sometimes cause a loss of high end, and in the case of Cantana 2 I used an EQ with a very slight high shelf boost to compensate for this (placed before the Stealth Limiter)…
Anticipating the Effects of Compression
I touched on this briefly in my last post… introducing compression (and especially significant amounts of it) will obviously alter the level balance of different elements in a track, and I’ve found it can be beneficial to anticipate these changes and compensate for them in your original mix accordingly. The case I discussed in the last post related to reverb… reducing the dynamic range of the sound brings the level of quieter parts (like reverb effects) closer to the main sounds in a track, so in Cantana 2 I dropped the level of the overall reverb sends by a couple of dB in the original mix, and then re-rendered it for bus compression. I did a similar thing for some of the low-level background/atmospheric incidental and percussive sounds in the track… without dropping their levels to compensate for the compression, the final compressed mix turned out a bit ‘muddier’ than the original.

Another useful tip is to render small, key parts of the original track (rather than the entire track) when auditioning these level changes. I’m using a fairly old PC, so the mix project (with stacks of plugins, including multiple CPU-intensive reverbs) renders only slightly faster than realtime… it’s much more efficient to try dropping the levels of the quiet parts by a certain amount and then render short clips of the key sections of the track. These short clips can then be imported into the bus compression project, and the changes auditioned without having to wait for the entire track to render.
The net result of this new approach is that I’m able to get more competitive levels, and still maintain a cleaner, more transparent mix than before. It reiterates my belief in the progressive/layered approach to compression that I discussed in my first bus compression post. The less work a compressor has to do, the more easily and transparently it can do it, so using multiple, staged applications of compression for different specific purposes seems to make sense. Following this idea over the last 12 months has also made me more conscious of compression and evenness of levels during the writing and mixing phases of a track… so my mixdowns prior to bus compression tend to have much smoother and more even levels to begin with. You can see this by visually comparing the pre-bus-compression render of an earlier track (The Yellow Room) against Cantana 2…
If you’re writing in a style where competitive levels are important, the more you even out the levels in the early stages of writing and mixing, and the more progressive an approach you take to master bus compression, the more easily your final limiter will be able to reach the required competitive level.
I spent a few hours yesterday doing final bus compression for the track I’m currently working on. Approaches to and techniques for bus compression were one of the things I learnt most about during 2016, and yesterday I had a kind-of ‘lightbulb’ moment, which will hopefully lead to better results in this area going forward.
I’m a ‘reluctant participant’ in the whole competitive levels/loudness wars thing. Fundamentally I like the groove, emotion, impact, etc which a decent dynamic range can impart on a track. But at the same time I understand the need to achieve an overall loudness level that’s similar to other tracks in the same genre (especially because not doing so simply makes your music difficult for DJs to mix).
In the past, I’d always equated greater amounts of bus compression with a loss in clarity. To some extent this is true, as compression narrows the dynamic range of the sound and hence reduces the ‘depth’ of volume variation available. So I’d always found that compressing the entire mix forced a compromise: getting closer to competitive levels meant sacrificing some detail and clarity.
About halfway through last year I had a mini breakthrough of sorts, when I realised certain settings on bus compressor plugins can have a big effect on the quality of the resulting audio. Specifically I usually use Cytomic’s ‘The Glue’ as the first stage in the bus compression chain, and I found that simply setting the oversampling rate to the recommended or higher levels (4x or more when auditioning) gave far clearer audio quality than the default lower settings.
For my current track I had spent a bit longer than usual honing the reverb plugin settings and fine-tuning the reverb send levels. After this I was really happy with the result… it struck a nice balance of good depth/space without sounding too ‘washed out’, and seemed to translate well to several different sets of speakers and headphones. But yesterday it was a bit disappointing to have some of this clarity and balance lost when I started pushing the final mix through bus compression. Listening closely, it wasn’t so much a by-product of compression as the fact that the levels of the reverb and delay effects were stronger. When I thought about it, the reasoning was obvious… I’d squashed down the top 3-6dB of the volume range, so sounds down at -15 to -20dB (like the reverb layer) had effectively been pushed up by a similar amount.
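The dB arithmetic here is worth making explicit. A tiny static model (a brickwall limiter plus makeup gain, with made-up example levels… real compressors are time-varying, so this is only the broad-strokes picture):

```python
def post_limit_level(level_db, threshold_db, makeup_db):
    """Static model of a brickwall limiter plus makeup gain: anything
    above threshold_db is clamped to it, then everything is raised by
    makeup_db."""
    return min(level_db, threshold_db) + makeup_db
```

So with, say, a -6dB threshold and 6dB of makeup, a peak at -0.3dB ends up back near full scale, while a reverb layer sitting at -18dB (untouched by the limiting, but lifted by the makeup) comes out at -12dB… 6dB closer to the main sounds than it was before. Hence the logic of turning the reverb sends down before rendering for bus compression.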
I usually do final bus compression in a separate Reaper project to the mixing, using just the final stereo mixdown as a source track (my aging PC can’t handle multiple reverb plugins and CPU hungry bus compression at the same time). So I went back to the mix project and rendered another version of the stereo mix with reverbs and main delays turned down around 1.5dB. Running this new version through the same compression chain resulted in a much clearer mix… it sounded a lot more like the former original stereo mixdown… just louder (which is exactly what I was trying to achieve).
Anyway, in hindsight I’m a bit surprised it’s taken me this long to figure out this technique (the basic point of compression after all is to reduce dynamic range), but I’m going to experiment a bit more, and hopefully end up with a lot cleaner, clearer final mix than for past tracks.
Another way to potentially prevent the issue is to ‘mix into’ a compressor or limiter during writing/sequencing/mixing. Historically this was a somewhat unorthodox technique, but it seems to have gained popularity in the last few years (I seem to have read a lot of articles recently where people discuss working this way). The idea is to put a limiter/compressor on the master bus right from the early stages of writing (with generic/default settings close to what you’d usually use for final bus compression). This way you’re always evaluating level balance with the compression already ‘baked in’. I don’t usually work this way because I like to keep a clear separation between the mixing and final ‘mastering’ stages… but based on yesterday’s experience I can definitely see the merits, so I may try it on a future track.
I had an interesting experience over the last couple of weeks, with a mixing problem that should have been obvious and easy to fix, but because I was too focused on details, I missed the bigger picture and let the problem persist for way longer than it should have.
I’m still in the finishing off stage of a track which has ended up becoming the most drawn out and time consuming piece I’ve worked on so far. I just looked back to previous posts and realised I said I was on the ‘home straight’ with it more than 2 months ago.
Part of the reason this track took longer than others is that it was the first where I’d used an acoustic instrument for one of the main themes… an acoustic piano riff (from NI’s ‘New York Grand’). As with the acoustic percussion samples I’ve discussed in a previous post, any recorded acoustic instrument is inherently going to have a much greater dynamic range than a synthetic sound. And to fit this into the generally very narrow dynamics of club music, considerable but careful application of compression is required.
The piano riff I came up with had, I thought, a nice dynamic… getting thicker in texture and a bit louder/stronger towards the end of the riff, which I felt gave it a greater feeling of tension. Although a fair amount of compression would be required to make the riff fit well in the mix, I was keen to preserve as much of that dynamic as possible. When mixing, though, I was so focused on preserving the dynamic I’d liked in the soloed part that I became too cautious in applying compression, and ended up pushing the piano part way too high in the mix (in order to get it to stand out properly). Added to this was the mistake of not following my own advice and regularly checking back against reference tracks… so when I finally did do a side-by-side comparison with my usual reference material, I found I’d created a kind of ‘inverted smile’ in terms of frequency spread… with the piano and mid-range way too dominant, and not nearly enough bassline or cymbals.
Once I figured out my mistake, it was pretty easily corrected with a simple application of Waves’ Renaissance Axx compressor (after having spent at least a week going in the wrong direction)… sure, I had to sacrifice some of the nice dynamic I had originally wanted to highlight, but looking back, I think that original desire was misguided. The track I’m writing is in a minimal-techno style… where narrow dynamics and very loud overall track levels are commonplace… expecting to keep a main acoustic instrument part fairly dynamic while achieving a competitive level in the overall track was a bit unrealistic.
So, three important lessons learned for going forward…
Audition parts in the context of the mix. Things that sound good on a soloed part may no longer sound good, or may even be completely lost, in the context of the whole mix. I was too swayed by working towards a soloed piano sound which I thought sounded good… it would have been better to audition it in the context of the mix right from the start.
Be realistic about how much dynamic range you can achieve in styles which are innately highly compressed.
Listen to and compare to your reference tracks regularly!