Noise Reducing Percussion Samples

A quick tech ‘how to’ post today, about noise reduction in live-recorded samples.  As I’ve mentioned previously, I use a lot of live-recorded sounds in my tracks, especially live-recorded percussive sounds.  Sometimes these sounds can be recorded quietly in the studio, but other times I capture them ‘on location’, and hence have to work with background noise.  On other occasions the background noise is unavoidably entwined with the sound source.  This was the case today when I recorded some tom sounds from the Volca Beats speaker.  The direct sound from this Volca is fairly usable, but the speaker is really hissy, and hence I ended up with a lot of hiss in the sample…

The raw sample

In the past I tried to remove this in one of two ways…

  1. Use a high shelf filter to reduce the hiss
  2. Use a gate to fade out the tail of the sample

…but neither of these was ideal… the filter option removes high-frequency content from the whole sample, including the attack (which can significantly alter the sound).  The gate avoids that problem, but forces a trade-off between cutting the low-frequency content of the tail (using a short release time) and leaving audible hiss in the tail (using a longer release time).

But, using automation, you can combine the above approaches and get a much better result than either in isolation.  The trick is to use a high shelf filter, but automate its gain/level control so that it’s very quickly attenuated just after the attack of the sound has finished.  The screens below demonstrate the setup in Reaper.  First, import the sample into an empty track.  Then add a high shelf filter to the FX chain (I’m using Reaper’s built-in ‘ReaEQ’ below to keep things simple).  Then automate the gain/level control of the filter (using the ‘Track Envelopes/Automation’ button on the track control)…

noise-reducing-percussion-samples-1
Reaper track ‘Envelopes/Automation’ window

Then draw an automation curve as shown in the below screen…

noise-reducing-percussion-samples-2
Automation curve
noise-reducing-percussion-samples-3
ReaEQ settings (‘Gain’ is controlled by the automation)

Depending on the nature of the sample, you’ll want to try adjusting the 4 highlighted parameters above to get the noise-reduced version sounding right…

  • The point where the filter starts to drop
  • The time the filter takes to get to minimum gain, and the shape of the curve (above option is using the Reaper ‘fast start’ point shape)
  • The frequency and bandwidth/Q of the filter

If it’s an excessively noisy sample, a low pass filter might also work better than a high shelf.
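If you like to see techniques as code rather than screenshots, here’s a minimal numpy sketch of the same idea (this is my own illustration, not what ReaEQ does internally, and all the names and parameter values are placeholders to experiment with): the signal is split into low and high bands, and the high band gets a gain envelope that holds at unity through the attack and then drops sharply, which is equivalent to automating the shelf’s gain…

```python
import numpy as np

def duck_highs(signal, sr, cutoff_hz=4000.0, attack_end_s=0.02,
               fade_s=0.01, min_gain_db=-40.0):
    """Fade the high band down after the attack, mimicking an
    automated high-shelf gain cut."""
    x = np.asarray(signal, dtype=float)
    # One-pole low-pass isolates the low band; highs = signal - lows.
    dt = 1.0 / sr
    rc = 1.0 / (2 * np.pi * cutoff_hz)
    alpha = dt / (rc + dt)
    lows = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc += alpha * (s - acc)
        lows[i] = acc
    highs = x - lows
    # Gain envelope: unity through the attack, then a quick fade to min_gain.
    n = len(x)
    env = np.ones(n)
    start = int(attack_end_s * sr)
    fade = max(1, int(fade_s * sr))
    min_gain = 10 ** (min_gain_db / 20)
    end = min(n, start + fade)
    env[start:end] = np.linspace(1.0, min_gain, end - start)
    env[end:] = min_gain
    return lows + highs * env
```

The `attack_end_s`, `fade_s`, and `cutoff_hz` arguments correspond directly to the automation point, curve length, and filter frequency discussed below.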

In this case, the same sample with the above settings turned out like this…

The ‘noise-reduced’ sample

…that’s a considerable amount of noise reduction, yet it maintains all the attack and general timbre of the sound.

Breaking Musical Rules 1

In my recent post questioning ‘Does Knowing Musical Theory Help Production?‘, I said I’d give a few examples of where I found a musical structure that was outside the rules of traditional Western music theory but sounded good nonetheless… so here’s the first example…

My most recent track on SoundCloud, ‘Cantana 1‘, has a bassline whose pitch rolls around a lot through portamento… but it’s centred around an A note… hence you could say the track is in the key of A.  But the main synth ‘stab’ and vocal pad sounds are based around a B flat minor chord.  That’s a semitone away from the key of the track, and about as far detached as you can get from ‘correct’ structure and harmony according to the rules of classical Western music theory I learnt from the AMEB.  With this semitone interval, the track sounds like this (as per SoundCloud)…

…if I were to pitch the stab and pad sounds down a semitone to match the key of the bassline, it would sound like this…

Interesting, huh?  It’s subjective, but although the second clip does sound more ‘correct’ in terms of harmony, the odd interval in the original version gives it a darker, more unresolved sound… and to me, ultimately makes for a better track.
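For the curious, the arithmetic behind that semitone relationship (and behind pitching a part down to match) is simple equal-temperament maths… the note names and register below are just illustrative, not taken from the actual track…

```python
# The equal-temperament semitone ratio: 12 of these multiply out to an octave.
SEMITONE = 2 ** (1 / 12)           # ~1.0595

def shift_semitones(freq_hz, n):
    """Frequency after shifting n semitones (negative = down)."""
    return freq_hz * SEMITONE ** n

A2 = 110.0                         # an A in a typical bass register
B_FLAT_2 = shift_semitones(A2, 1)  # ~116.54 Hz, the clashing root a semitone up
```

Pitching the stab and pad down to match the bassline’s key is just `shift_semitones(freq, -1)`, i.e. dividing every frequency by roughly 1.0595.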

Looking back, I’m a bit surprised I discovered using a bassline and chord separated by a semitone at all.  When I’m putting together the various layers of a track, I’m usually implicitly aware of what key the track is in, and that leads me towards preconceived ideas of what harmonies will work and what won’t (the kind of ‘burned-in constraints’ I mentioned in the previous post).  Given that traditional theory would say that a tonic and tonic + 1 semitone interval would not work, I’m surprised I even experimented with that combination in the first place.  I can only guess I had adopted a kind of ‘hit random chords’ approach to finding new parts, and just happened to stumble on this semitone part that worked well.

Anyway, the takeaway is to try and keep an open mind when you’re coming up with new parts and ideas.  Use any knowledge of music theory you have to help expedite the process, but don’t get caught up in letting that knowledge restrict your ability to discover things.

I’ll post more examples soon.

The ‘Rule of 3s’ for Incidentals

A month or so ago, I read a MusicRadar article entitled ‘Robin Schulz’s top 10 tips for producers‘.  I hadn’t heard of Robin until that point, but the advice he was giving really resonated with me… the stuff he covered was generally also the stuff I tend to think of as key techniques for producing successfully.  I checked out one of his tracks on YouTube too… ‘Prayer In C‘.  The track has an incidental build almost right at the start (0:05)… but surprisingly for commercial music, the texture of this build is quite thin and sparse… consisting mainly of a lone white noise sweep, and it comes in a bit predictably.  It’s similar to the use of this sound you find in more amateur, ‘bedroom’-type productions.  I’m not at all trying to be critical of the production (I wouldn’t have a leg to stand on, as obviously Robin is at least 1000x more famous than me!)… but it’s interesting to observe, because it does stand out somewhat from most highly polished and produced tracks from big names.

It got me thinking about the things that distinguish ‘bedroom’-sounding productions from those of big names on big labels… and one of the major differences from my perspective is the use and depth of incidental sounds.  My general impression is that highly ‘professional’ sounding tracks tend to have multiple layers of complexly woven and sculpted incidental sounds… the kind of thing that adds a subtle but critical sheen of additional depth and detail to a track.  A really good example that comes to mind is Sasha’s ‘Invol2ver’ album.  The interesting thing about these types of incidentals is that you don’t usually explicitly hear them when listening to a track… but if they’re removed, suddenly something major is missing and the track sounds much less polished and professional.

Along these lines, for all the tracks I worked on during 2016, I adopted an approach to incidental sounds which I’ve since come to refer to as ‘the rule of 3s’.  That is… for any major build or transition point in a track, I try to have at least 3 separate layers of incidental sounds happening at the same time.  The reason for this… having just 1 or 2 layers of incidentals at such points ends up being too obvious… the listener can distinguish each of the layers and the build becomes somewhat predictable.  But for me, 3 layers is the sweet spot where suddenly the layers of incidentals, along with whatever instrument sounds are in the track itself, combine to make it difficult for the listener to be conscious of all the layers individually… the sound becomes harder to predict and hence more interesting.

So based on this thinking, I try to make sure I use at least 3 layers of incidental sound at any major build or transition in a track.  You have to temper that according to the style as well… progressive-type tracks tend to do well with more layers of incidentals than harder, more minimal styles… but I think 3 layers is a good baseline to follow.  As a typical default, I would have those 3 layers consist of…

  • A longer white noise swell-type sound
  • A shorter swell (e.g. reversed cymbal)
  • Some type of percussion, possibly through a delay effect

…and make sure that each layer has individual panning to control the side-to-side depth, as well as EQ automation to control the front-to-back.
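As a toy sketch of those three default layers, the numpy code below synthesises crude stand-ins (real tracks would use proper samples; the function name, envelope shapes, and pan positions are all just illustrative) and mixes them with individual constant-power panning…

```python
import numpy as np

def build_incidentals(sr=44100, length_s=2.0, seed=0):
    """Sketch of the 'rule of 3s': three incidental layers,
    each with its own pan position, mixed to stereo."""
    rng = np.random.default_rng(seed)
    n = int(sr * length_s)
    t = np.linspace(0, 1, n)

    # Layer 1: long white-noise swell (slow rising envelope).
    swell = rng.standard_normal(n) * t ** 2

    # Layer 2: shorter swell, roughly a reversed-cymbal shape
    # (decaying noise burst played backwards, landing at the transition).
    short = np.zeros(n)
    burst = rng.standard_normal(n // 4) * np.exp(-np.linspace(0, 5, n // 4))
    short[-len(burst):] = burst[::-1]

    # Layer 3: sparse percussive ticks through a simple feedback delay.
    perc = np.zeros(n)
    perc[:: sr // 4] = 1.0                    # a tick every quarter second
    delay, fb = int(0.15 * sr), 0.5
    for i in range(delay, n):
        perc[i] += fb * perc[i - delay]

    # Constant-power panning per layer (0 = hard left, 1 = hard right).
    mix = np.zeros((n, 2))
    for sig, pan in [(swell, 0.3), (short, 0.7), (perc, 0.5)]:
        theta = pan * np.pi / 2
        mix[:, 0] += sig * np.cos(theta)
        mix[:, 1] += sig * np.sin(theta)
    return mix
```

The per-layer pan values are the ‘side-to-side depth’ mentioned above; EQ automation for front-to-back depth is left out to keep the sketch short.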

As an example, the clip below contains the soloed incidental parts from the build starting at 2:43 in ‘Summer Wave’…

This actually contains about 5 layers on the build/swell side (3 swell-type sounds plus 2 percussive), and 2 or 3 crash cymbal-type sounds layered together… that’s leaning towards being a bit excessive, but it also gives the track a lot of depth, and that more ‘professional’ sound I mentioned earlier (and given the more progressive style, it lends itself to greater depth of incidental sounds).

If you’re a producer striving to make your tracks sound more professional or polished, I’d highly recommend you look at your use of incidental sounds… and if you’re only using a couple of layers, consider thickening the texture and applying the ‘rule of 3s’.


(Disclaimer: I acknowledge that these days the term ‘bedroom productions’ has no correlation with being amateur or unprofessional… as many famous commercial productions are indeed conceived and realized in a bedroom!)

Does Knowing Musical Theory Help Production?

I watch a lot of the Fact TV ‘Against the Clock’ series.  It’s a nice way to see how other producers do things, and sometimes pick up new ideas that can help your own approach and work.  One interesting observation from these videos is that there’s fair variation in the formal practical and theoretical training that producers possess… i.e. you see some guys who seem to go for the ‘lots of random notes until something sounds OK’ approach on an Ableton Push-type device, and on the other hand, guys who get behind a keyboard and start dropping improvised parts like a jazz session musician.  I’m not making an elitist-type judgement here either… there’s not necessarily a correlation between the quality of the track ultimately produced and the instrumental skills of the producer.  But it’s something that got me thinking, and something I was aware of in my previous year of full-time music production.

I learnt instruments and formal music theory from a fairly early age… first through the AMEB and then through school and high school.  While I’ve been very grateful for that knowledge and how it can often help and expedite my music production, like other elements I’ve written about, it can sometimes be a double-edged sword.  On the plus side, the benefits I see are that…

  • When trying to come up with new ideas for tracks and parts, an understanding of scales and their relationships can help you to more quickly come to potential parts that fit nicely with whatever you’ve already got.  I think without that understanding, you’d have to cycle through things a lot more randomly (like just trying every key in an octave until something sounds good).
  • I think it’s easier and quicker to translate ideas you hear in your head into a tangible sound, project, score, etc…

But at the same time, there are lots of ideas that don’t fit into the formal bounds of music theory and can still sound interesting and/or good… and I feel the problem is that sometimes, having those theoretical constraints ‘burned in’ to your thinking can stop you from accessing and finding these “don’t fit” ideas.

There’ve been several times over the last year where I surprised myself by finding a sound, interval, or harmony that was a bit outside the boundaries of Western music theory, but sounded good nonetheless in the context of the track I was working on.

I’ll try and go into detail on a couple of these over the coming weeks.

Pitching Percussion

As I’ve written about many times on this blog, I learnt a stack of stuff during 2016 regarding production techniques.  One that really stood out for me was the importance of correctly pitching percussive sounds.

For whatever reason, prior to 2016, I can’t consciously remember explicitly re-pitching sounds like hi-hats, snares, claps, etc… which says to me that I either didn’t do it much, or didn’t see it as particularly important.  I guess from a theoretical point of view, my thinking was along the lines of “they’re not tonal, harmonic sounds, so there’s no point or need to tune them”.  Of course, now that I understand much better what constitutes a sound, I know percussive sounds are no different from what we consider ‘instrument’ sounds… at the end of the day, they both break down to a collection of sine waves… it’s just that the sine waves in a percussive sound modulate much more quickly, and are often at more ‘dissonant’ intervals than those in instrument sounds.  For lower-pitched, more ‘droney’ instruments like bass drums and toms, it’s pretty obvious that changing the pitch can have a significant effect (since the tail of these sounds is usually dominated by a single sine-ish tone), but I was surprised how much of an effect this can have on snares and cymbals.

Rather than bang on too much about sound theory, maybe it’s best to illustrate with an example.  Below is a raw clip of some of the percussion elements, and the bassline of my track ‘Dystopia‘…

… and here is the same clip again, but with the pitch of all percussion set to the default (i.e. as it was in the original samples)…

Hear the difference?  The pitched version sounds much more cohesive, and has a better groove… mostly due to the hi-hat and clap/snare sounds being more in tune with the bassline and bass drum.  There’s also a high-pitched ‘woody’ sound played on 16ths (a sample of a chopstick being dropped onto a pile of chopsticks)… it tends to stand out and sound incongruous in the unpitched version.  Overall, the unpitched version just seems to ‘lag’ somewhat, and definitely doesn’t have the same integrity and groove as the pitched version.

The actual differences in pitch are usually quite slight.  I use Kontakt for all of this percussion, and these samples would be shifted by at most 2 semitones.  But I find even a change of 0.3 or 0.4 of a semitone, on a key percussion element like a hi-hat or snare, can have a profound effect on the overall sound and groove of a track.
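The sampler-style repitching a tool like Kontakt applies when you transpose a sample can be sketched as a simple resampling operation (this naive version is my own illustration, and note it changes the sample’s length along with its pitch)…

```python
import numpy as np

def pitch_shift_resample(sample, semitones):
    """Naive sampler-style pitch shift: play the sample back at a
    different speed, changing both pitch and duration."""
    ratio = 2.0 ** (semitones / 12.0)   # +1 semitone ≈ 5.95% faster playback
    n_out = int(round(len(sample) / ratio))
    # Read through the sample at 'ratio' speed via linear interpolation.
    idx = np.arange(n_out) * ratio
    return np.interp(idx, np.arange(len(sample)), sample)
```

A 0.4-semitone shift works out to a playback ratio of 2^(0.4/12) ≈ 1.023… barely a 2% speed change, which fits with how subtle these adjustments are in practice.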

It’s important to keep this in mind when auditioning percussion samples, too.  I tend to cycle through sometimes hundreds of snare and cymbal sounds, and when doing this you have to remember that a particular sound might seem slightly off or wrong when directly auditioned, but could be transformed by a slight pitch shift.

Anyway, the importance of pitching percussion has been an interesting revelation for me, and in terms of potential effort vs benefit (i.e. it requires little effort for potentially a lot of benefit), it’s a technique you should definitely utilize.

Mix Issues – Prevention Rather than Cure

Thanks to the rapid development of DAWs and plug-ins over the last 5-10 years, as producers we have close to unlimited flexibility in terms of audio processing.  Even my very old (7+ years) music PC is capable of running tens to hundreds of simultaneous plugins in a track’s project.  Added to this, the internal digital routing in a DAW, and the ever-increasing quality of plugins, mean that chains of tens of plugins are not only a reality but often the norm when putting together a track.

But with this flexibility can come a complacency about ‘fixing problems later’ with plugins, rather than dealing with them at the source.  I’ve read numerous interviews with pro producers who emphasise the importance of getting the sound right in the first instance… particularly with things like tracking… finding a good sound through good mic selection and placement, rather than fixing it with EQ in the mix.  Yet it can be easy to forget or ignore this advice, given how simple it is to throw extra plugins into an effect chain.

While writing ‘Dystopia‘, I ran into this kind of situation… a problem which could have been fixed with additional tweaking or extra layers of compression… but which actually had a simpler, and probably ultimately better-sounding, solution at the source.

The track has the following background percussion pattern in various sections…

Within an 8-beat phrase, the first percussion ‘hit’ occurs on the third 16th beat, with a quieter, lower-pitched ‘grace note’ a 16th before that.  The below screenshot shows the MIDI sequence for the pattern, with the grace notes highlighted…

mix-issues-prevention-rather-than-cure-1

At the point of final mixdown, while applying bus compression, I noticed occasional waveform spikes at the points of these grace notes… the highlighted peaks on the below waveform show an example…

mix-issues-prevention-rather-than-cure-2

These spikes were not only quite strong (almost hitting 0dB), but occurred on a rhythmically odd (syncopated) beat of the bar… i.e. the second 8th beat of the bar… at the same point as the offbeat hi-hat sound.  When I tried to apply compression, the strength and syncopation of these spikes caused the same type of uneven, pumping compression I mentioned in my second bus compression article.  The problem could have been cured at the final mix stage by applying a limiter or a fast-acting compressor at the start of the effect chain.  But instead, I went back to the MIDI sequencing and took a look at the part itself.  Considering the note at the second 8th beat was just a grace note, and that it occurred on the same beat as a rhythmically far more important part (i.e. the offbeat hi-hat), its MIDI velocity seemed quite high (around 81).  Hence, I tried simply reducing the velocity of the grace note to about 70, as per the below screenshot…
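The fix itself is trivial to express in code.  Here’s a sketch over hypothetical note events… the function name, step positions, and velocities below are illustrative, not the actual ‘Dystopia’ sequence…

```python
def tame_grace_notes(notes, grace_step, max_velocity=70):
    """Cap the velocity of grace notes on a given 16th-step, fixing
    the peaks at the source instead of limiting/compressing later."""
    return [(step, min(vel, max_velocity)) if step == grace_step else (step, vel)
            for step, vel in notes]

# (16th-step, velocity) pairs: a hot grace note followed by the main hit
pattern = [(1, 81), (2, 110)]
fixed = tame_grace_notes(pattern, grace_step=1)   # → [(1, 70), (2, 110)]
```
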

mix-issues-prevention-rather-than-cure-3

…and this simple change benefited the mix in 3 ways…

  • It left more room for the offbeat hi-hat, and hence made the hi-hat clearer.
  • It wasn’t in any way detrimental to the in-context sound of the percussion part (actually, I think it sounded better after the change).
  • It had the effect of removing those waveform peaks, and hence let the compressor work more smoothly and musically (see the ‘after’ waveform below)…

mix-issues-prevention-rather-than-cure-4

Ultimately, a simple change of MIDI velocity fixed the problem, and was far easier to implement than extra layers of limiting and compression would have been (it also avoided the additional side-effects that limiting and compression could have introduced).

Clips of the ‘before’ and ‘after’ full mix are below…

Before

After

The interesting take-home from this experience was to always think a bit ‘out of the box’ with regard to mix problems… to consider whether there’s a simple preventative measure that could avoid or correct the problem in the first instance.  In 99% of cases, as the pro producers advise, such prevention is probably going to be easier and more effective than the equivalent cure.

Getting Comfortable With Your Environment 2


I arrived back in Tokyo last week, and had my first day back writing today, after about a two-week break over the new year.  As I wrote about over the last few posts, for whatever reason, I wasn’t 100% settled working in Sydney this time, and although I came up with a couple of good ideas, I didn’t progress them as far as I would have liked.  It’s a bit strange, because it was my second period working in Sydney in 2016, and the first one was actually quite fruitful and productive.

However, getting back home and working in the place I’ve become accustomed to over the last year, it became clearer why I wasn’t so productive in Sydney this time… it broke down to 2 basic things… sound and comfort…

Sound, because I realised that I’ve really grown to know and trust the sound of my monitors and studio room in Tokyo.  After a year of working in here every day, I just know how the sound will translate to the final mix, and after having mixed a number of tracks that I’ve been happy with, it boils down to confidence, and the resulting speed with which you can make tonal changes and mix decisions.  I just didn’t have the same confidence in Sydney… I knew there were a lot of parts I couldn’t judge properly, and I either kept changing them back and forth, or knew I would have to fix them when I got home… this led to everything taking longer, and a reduced ability to commit to a part and move on to the next stage.  The room sound was probably a big contributor to this too…  I blogged before about the uneven bass response in Sydney, and as well, I noticed on returning, as soon as I first walked into my apartment, just how much lower the ambient noise is here… likely it’s a lot to do with the construction (i.e. my apartment here is solid concrete on the walls, floor and roof, compared to drywall and wooden floors in Sydney).  OK, admittedly domestic construction materials are not the most interesting thing in the world to blog about, but they are important from a producer’s perspective, as they make a huge difference to the room acoustics, and hence how well you can hear what you’re working on.

In retrospect, the other big factor in my lack of progress was comfort.  Sitting at my usual desk and comparing, I realised that in Sydney…

  • Screens were too far away and too high… felt like they were ‘looking down on me’ as I tried to work
  • Not enough leg room under the desk
  • The chair wasn’t as comfortable

…granted, these are small (somewhat ‘precious’) things in isolation, but together they made a big difference to my level of comfort, and hence, I think, my propensity to be creative.  It was just nice today to slip back into familiar and comfortable surroundings, and in the couple of hours I worked today, I did as much as I would have in a whole day last month.

It’s fairly obvious that a good monitoring environment is crucial to your ability to mix and produce well (as I’ve now re-proven to myself), but more so, I’ve learnt a lot about the importance of subtle physical comforts in a space, and how they can really help or hinder your creativity.