Tuesday 13 December 2022

THE FORGOTTEN ART OF MUSCLE MEMORY IN AUDIO ENGINEERING - PART I





Muscle Memory? What even is that? According to Wikipedia — “Muscle memory is a form of procedural memory that involves consolidating a specific motor task into memory through repetition, which has been used synonymously with motor learning. When a movement is repeated over time, the brain creates a long-term muscle memory for that task, eventually allowing it to be performed with little to no conscious effort. This process decreases the need for attention and creates maximum efficiency within the motor and memory systems….” (https://en.wikipedia.org/wiki/Muscle_memory)

For musicians: every time you pick up your instrument to play major and minor chords, do you even have to think consciously about which finger goes where? It comes subconsciously. You can concentrate on the song you are playing and not worry about hitting the right chord at the right time.

Now, imagine if the positions of the notes on your instrument were changed randomly. For example, instead of standard guitar tuning E A D G B E, someone changes the tuning slightly to F A D G C E. Or imagine swapping the position of every A and E on the piano. You would have to relearn your finger positions for every chord and scale. You get used to that, only for the tuning to be changed again next week. And another tuning the week after that. Will you ever be able to master the instrument or concentrate on a song? Every time you picked up the instrument, it would mean retraining your muscle memory. There is a reason the capo exists.

For non-musicians, think of driving a car. You had to consciously learn the brake, clutch and accelerator, until controlling the accelerator and brake with the right foot, and the clutch with the left, became muscle memory. Imagine the chaos if the accelerator were moved to the left pedal, the clutch to the middle and the brake to the right!

The hypothetical idea of changing notes on your instrument randomly is ridiculous because the tuning has been standardised for centuries and it makes complete theoretical and musical sense to play it the way it is. Same for driving.

Coming to audio engineering: in the analog days, the layout of knobs and faders on the mixer was pretty standardised. If you looked at channel 1 on the mixer, you had gain on top, then the high-pass filter below it, then the 3- or 4-band EQ section, then the aux sends and finally the pan knob. At the very bottom was the channel fader. This layout was repeated for as many channels as there were on the mixer — 16, 24, 32 or 48.

Whichever brand of mixer you chose, the most it might add was a low-pass filter, a 'Q' control on the EQ bands, and more aux sends. The most you had to move your fingers was an inch or so up or down. By the time you were familiar with running an analog console, your muscle memory was trained to make your hand reach automatically for certain places on the console based on the parameter you wanted to change. After a certain point, you didn't even have to think about where the low-frequency knob was, or where the HPF knob was. It was like driving a car, or playing an instrument. You were only consciously thinking of the music being played, how you could make it sound better, and how to fit the different instruments in and around each other. The technical part of moving knobs and faders happened completely subconsciously with the help of your muscle memory.

I remember working on larger consoles like the Soundcraft MH2, so big that if you had a lot of channels to mix, you actually had to move physically a foot or more to the right to reach the next section of channels. Even then, the same layout was followed no matter which side of the board you went to. Kind of like tuning an instrument one or two whole steps down.

With the advent of digital, analog has slowly faded away, and whether in the studio or in live sound, digital tools and mixers have taken over in almost every genre of audio. Without delving into the debatable topic of analog vs digital, I just want to draw attention to the workflow that digital has standardised (or failed to standardise) since taking over from analog.

When you are mixing in a DAW (any DAW), you are constantly scrolling up, down, left and right to find the channel where you want to change something. If you want to change the EQ, you have to open another window — which itself keeps opening at a different position on the screen every time you open a different plugin — and take a second to first find the knob you are looking for and then use your mouse to move it. It might not sound like a big deal, as it ultimately takes no more than a second or two to find what you are looking for, but every time you hunt for a knob or fader, you are unknowingly taking your attention away from listening to the music. This creates a micro-level technical barrier between the mind and the muscle for a couple of seconds each time. All these seconds, spent on every knob or fader you reach for while mixing a song, which could easily be a thousand times, add up.

Have you ever felt lost because, by the time you opened the plugin, you had forgotten what you were trying to do? That would be absolutely unacceptable while playing an instrument on stage or driving a car in traffic. You are not performing at the level you could be, whether you realise it or not. Your mind is caught between knob-searching mode and analytical-listening mode, making you less efficient.

In a live show scenario, it is even more of a barrier, since you are always running against time. On average, you get around 45 minutes to mix an artist with a full band, sometimes more, but on most occasions less. Whether you want to cut a frequency feeding back in the middle of the show, or push the mids of the guitars exactly when the guitarist starts soloing, it has to be a reflexive act. Especially if you want to take your artist's static mix to a dynamic performance, pushing and pulling the audience in every section of the song. To do that, you have to rely completely on muscle memory and not waste those couple of seconds trying to recollect where the mid-frequency knob on the board is.

So what is the solution to this oft-overlooked issue? We will look at possible solutions and workarounds in Part II of this article.

Monday 6 December 2021

It's high time the audience became the priority at pub gigs


In 1969, humans landed on the moon for the first time. It is difficult today to grasp the sheer amount and size of computing needed back then, when computers the size of a car, costing millions of dollars, were required to monitor the technical data for that kind of mission.

Today, your average Rs 10,000 smartphone is millions of times more powerful than any of the computers from that time.

This is just to put into perspective the exponential human technological progress over the years and how much computing power we as consumers hold in our hands.

Yet, when it comes to live sound, we are still stuck in the '60s-'70s era of sound reinforcement, when the big speaker companies of today were trying to work out how to deliver uniform audio to the majority of the audience at a venue. Since then, there have been different speaker technologies and variations of the same designs, and ultimately speaker designers have succeeded in creating speakers that can cater to any kind of venue and deliver a pleasurable listening experience to the audience.

When it comes to sound reinforcement of performing bands (commonly some form of drums and/or percussion, guitar, bass, keyboard and vocals, among other instruments) in small pubs and other indoor venues with an audience capacity of not more than 500 people, the auditory experience is unpleasant and uncomfortable for most of the gig, with only small stretches when it sounds acceptable. It is either unbearably loud, or too soft to hear the lyrics, or too muddy to comprehend what the bassist or the guitarist or the keyboard player is trying to add to the song. And before you conclude that this is just one person's opinion (mine, in this case), ask anyone who has been to a pub gig or any gig inside a small space, and they will tell you the same. Heck, if you yourself have been in that position, you should be able to relate.

As an avid music listener, I have been going to such gigs since the early 2000s, and initially I thought that is how a band is supposed to sound live. But as my listening tastes and habits evolved over the years, I started to look for answers as to why a band playing live, even in a small space, can't sound like a bigger, more intimate version of its recorded music. I started pursuing audio as a hobby in the late 2000s, and since 2010 I have been a full-time audio engineer, touring with artists who play live, working for a sound rental company and also doing studio work. So this is not a layman's opinion or rant. It is an attempt to break down the components that contribute to the overall sound when a band or an artist is playing live, and to offer possible solutions to the issue we are discussing—how small indoor venue gigs can be made to sound much better than what we currently provide as artists, and consume as audiences.

Most pubs are designed with the decor and ambience in mind—glasses, bare walls, reflective ceilings—the perfect ingredients to make sure the sound coming out of anything in there is reflected a thousand times across all the surfaces and ultimately turned into a mess. Ideally, pubs would be built with some sort of acoustic treatment blended intelligently into the decor to absorb the sound. But that is a long shot, and in all practicality, I don't see it happening anytime in the near future. After all, the whole confusion with audio is that it is invisible and open to very different interpretations by each individual. So I don't see money being spent on fixing something that is invisible to start with.

In that case, we have to come to the second line of defence—what we are bringing into the venue. This is where the band or the artist has complete control.

Firstly, acoustic drums. I know musicians are not going to like this, but unless you hear it for yourself, you won't consider it. Pubs are not places where we can or should be playing acoustic drum kits. The natural sound of an acoustic drum is way too loud for such a small space, especially the cymbals. I could get into technical detail about which offending frequencies are at play when an acoustic kit takes over every other instrument in the mix and becomes uncomfortable at 100 dB-plus in small venues, but that would be a topic on its own.

Most of the drum sound that you hear in a pub comes directly from the kit. And that sound is so loud that it completely drowns out vocals, guitars, keyboards and other mid-frequency instruments. If you add even softer instruments like tabla, sarangi, violin or cello, you can see why they are almost impossible to amplify in such venues. To make everyone else audible over the unmic-ed drum sound, they have to be turned up even louder than the drum kit, and already you see where this is going. In 90% of venues, the speaker system is not adequate for the vocal levels to be pushed up and above the natural sound of the drum kit and cymbals, and even when it is, the vocal is barely audible and always on the verge of feedback. And we cannot blame the venues for not being able to accommodate the volume of a drum kit. You don't use a chainsaw to cut vegetables inside a kitchen.

The solution to this is to use plexiglass drum shields, which reduce the sound coming out of the kit by 20-30% (even more in the case of better-quality shields). For even more control over the sound at the source, use an electronic drum kit.

I understand it can be a mental challenge to accept not playing an acoustic drum kit, or to play behind a glass shield feeling separated from the band. Even among band mates, an electronic drum kit may not feel glamorous enough, but ultimately you, as an artist, have to ask the question: do I care enough about my audience to compromise a little in order to give them a much better show in terms of audio? Today's electronic drum kits are so good that even the smallest playing nuances are captured pretty decently. If you don't like the onboard sound, try using a drum VSTi, something like Superior Drummer or Steven Slate Drums (which also has a free version), on a laptop, and triggering the sounds through it. Any modern laptop with a sound card that costs less than your smartphone is good enough for the job. The samples in modern drum software are good enough to handle the most demanding drum touches, and if you think you are going to lose the ghost notes of your acoustic snare… well, they are not audible in any case (and I think most of you already know this). Unless it is a medium-sized auditorium (or open air), where the sound from the drum kit and cymbals can diffuse into the extra space around, playing an acoustic kit is an uncomfortable experience for the audience, especially in pubs. Unless you fix this, no other change is going to make as big a difference in how good your band sounds in a pub.
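If you go the laptop-VSTi route, it helps to sanity-check the kit-to-laptop MIDI chain before the gig. Here is a minimal sketch, assuming Python with the mido library (and the python-rtmidi backend) installed; the port name "TD-17" is hypothetical, so substitute whatever your kit reports. It simply listens to the kit's MIDI output and prints every hit, confirming that the pads are reaching the laptop before you route them into the drum software.

```python
# Minimal e-kit connectivity test: print every pad hit arriving over MIDI.
# Assumes `pip install mido python-rtmidi`. "TD-17" is a hypothetical
# port name; pick yours from the list printed below.
import mido

print(mido.get_input_names())  # the e-kit should appear in this list

with mido.open_input("TD-17") as port:
    for msg in port:
        # Drum pads send note_on messages; velocity is how hard you hit.
        if msg.type == "note_on" and msg.velocity > 0:
            print(f"pad note={msg.note}, velocity={msg.velocity}")
```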

So try using V-drums, or at least some form of plexiglass shield (if giving up acoustic drums is too difficult), for your next few pub/small-venue shows. Try implementing it even if you are the gig organiser, and hear the difference for yourself. If you don't own one as a drummer: beg, borrow, rent. If you are concerned about the subpar cosmetic presence a v-kit has on stage compared to an acoustic kit, try asking around for a Roland VAD series kit, which looks just like a real drum kit. If the demand keeps coming, I am sure the drum vendors in the city will buy one to rent out, just like other top-level kits.


Roland VAD series electronic drums

Once the loud drum kit issue is sorted, the next loudest instruments in the room are the guitar and bass amps. They have to be cranked up loud to be heard above the drum kit, but with V-drums the stage volume goes down considerably, so you might not have to crank them up as much.


From a technical point of view, if drummers are able to give up their beloved acoustic kits, guitar and bass players should be considerate enough to give up their amps, go direct to the PA, and monitor through wedges (I understand in-ears are not feasible budget-wise for pub gigs, and a lot of musicians are not comfortable with them). If you drop the stage volume to just wedges—instead of a super loud acoustic drum kit, a loud guitar amp and a loud bass amp—it should add huge clarity to the overall PA sound, and ultimately your audience will get a much better-sounding gig.


Giving up guitar and bass amps is, again, a mental challenge more than anything else for musicians who have had them at their backs for years. The feeling of the amp pushing air from behind adds to the performance of a lot of guitarists who are used to playing that way. But it is a minor adjustment you can get used to if you really care about providing a better overall sound for your audience. Any digital guitar or bass modelling device is realistic enough for any genre and any kind of playing. Even if you cannot afford a new hardware modeller, any modern laptop with a basic sound card (which every musician owns now), a software amp modeller, a cheap MIDI foot controller and a bit of programming should give you a great guitar/bass tone that is absolutely clean of any kind of stage bleed. You can, of course, use your beloved analog pedals without an amp too, if you have some kind of power amp and cab modelling in your signal chain.


If you are sceptical about being able to hear yourself properly with the full band without an amp, remember: if the acoustic drum kit is replaced by an electronic one, the stage volume drops by more than 70%. A plexiglass shield, on the other hand, can only manage 20-30%. So the guitar/bass on the wedge will not be drowned out by the drum kit nearly as much. My suggestion: try it and see.


And at the end of all this, the vocalist is able to hear themselves through the monitors without having to strain their voice. The audio person will have much more headroom to make the vocals stand out for the audience, with much more control over feedback.


But the biggest challenge in all this is a complete mental rethinking: as a drummer, guitarist or bassist, giving up the image of on-stage 'Rock n Roll'—the acoustic drum kit and amps we have seen on stage for decades—in exchange for a better listening experience for the audience. It doesn't matter if you are a band from the East or the West; physics is the same for everyone, and sound behaves the same way regardless of which pub in the world is hosting the gig or how big an act you are. Yes, rock music is supposed to be loud, but loud does not mean blasting the audience's ears off with harsh cymbals. Loud does not mean pushing your amps so hard, in the name of rock and roll, that the vocalist cannot get the words across. Loud is when the music and performance of the band are amplified so that everything is as clear as it can be, and yet the band takes on a slightly larger-than-life image on stage with the help of audio and visuals. You might argue that pub gigs have been run this way for a long time now. Yes, they have, but more often than not they have sounded harsh, abrasive and uncomfortable for most of the audience, and even for the band members on stage. Now, if you are okay with how it has been done for ages and do not think it is an issue, or maybe you think the audience has gotten used to that kind of experience, then by all means keep doing it the way it has always been done.


But as an audience, you should be demanding a better auditory experience, because the fact is you deserve better when you are paying out of your own pockets to support your favourite acts live at a pub. And since pubs are the only places that give artists a platform to perform live on a regular, often weekly, basis, if we musicians can all just think beyond our egos and make small compromises for a better show for our own audience, it is a win for both sides.


Just to make it clear, none of the radical changes I have suggested apply to open-air gigs or even medium-sized auditoriums. Remember, horses for courses.


Ultimately, any solution to any problem is only possible when there is acceptance that the issue exists, and then a desire to step out of the comfort zone to make the change. Otherwise humanity would never have progressed from cavemen to where we are today. The only reason we humans have become the most dominant species is our unique ability to cooperate to solve problems, and compared to the major world issues we are facing, audio for live gigs is a fairly easy one to solve.


I hope we are all on the same page on this because, you know, we are not discussing climate change here ;)


Thursday 27 August 2020

Why live shows don't sound the way you expect on most occasions (Part 1)

If you have been to any live performance of any kind as an audience member, be it music, talk or theatre, I am sure that, more often than not, you have felt the sound was either too loud or too soft rather than at a comfortable volume. You might also have felt that the vocals or some other instrument lacked clarity, or that, even at the right volume, it did not match the auditory image in your head.

Now, however much we would like to believe that every problem and its solution is black and white, life really doesn't work that way. And when we are talking about something that is invisible in itself (I mean sound, in case you were wondering), the shade chart between black and white becomes even more diffused. When something is invisible, the confusion when referring to it becomes acute. You might be sitting in the front rows at a Zakir Hussain concert, feeling a little uncomfortable with the overall volume whenever Ustadji smacks the skin of the tabla daya with his powerful strokes, while the person sitting right beside you feels that the high-intensity volume is exactly what makes the concert exciting. Or you might be unfortunate enough to find yourself behind the last barricade at some concert, some 120 feet from the stage, feeling you can't hear the vocals clearly, while just beside you an elderly gentleman feels this is absolutely the perfect volume, very close to how he listens to his radio at home. Now consider a small venue with an audience of 500. If you ask each of them how it should sound, you will get data that is pretty much useless for coming to a logical conclusion about how to run the audio of the show. So how do we deal with this conundrum? How do we come up with a stable solution when the problem itself is a moving target? If the audience were asked to agree on a solution amongst themselves before the show starts, I am sure it would look something like our honourable ministers trying to decide on a matter in Parliament.

But that is exactly the responsibility the sound engineer (or whoever is handling the sound) has to take for a live performance — deciding, on behalf of a room of 500 people or a field of 50,000, how it should sound, so everyone who paid for a ticket goes home happy. Needless to say, more often than not, he is criticised by a certain percentage of the audience after every show, because, remember, there is no common target for how it should sound that the whole audience will agree on.

There are two facets to this problem. One is the overall volume — how loud or soft the whole performance is. The other is the tonal balance — how blunt or sharp the tabla sounds, or how clearly you can hear the vocals, and so on. Tonal balance is an even harder target, since tonality is like colour: two people might not like the same shade of green even if they both like green. So what feels like the right amount of treble on the tabla to you might be too much for the person right next to you. It is, again, a subjective choice, and there is no right or wrong about the colour you prefer. Someone in the audience will always be disappointed, but hey, you live in a world where global inequality is very real, so this concept should not be hard to grasp.

About the clarity of the vocal, or any other instrument for that matter, you could argue: why can't the volume be increased and made more audible, like you do on your earphones? You are partly right; it is just a matter of pushing the fader (the volume control for the vocal or other instruments) and making it clearer/louder. But if it were that easy, don't you think the audio person in charge would have done it, instead of inviting displeased comments from the audience? How much volume you can push for the clarity of vocals (or any mic-ed instrument) depends on many factors, like how adequate the sound system is for that venue, how the speakers have been placed and angled with respect to the mic, and how loud the voice itself is. Imagine running a 1-tonne AC on your residential meter. It will run fine even on the hottest of summers as long as the room is adequately sized. But if you try to cool a room double the size of what that 1 tonne can cover, it will struggle. Conversely, if you add another AC for the other room on top of this one, you run the risk of frying your electric supply; you need to pay for a meter with a higher load capacity. Yet we wouldn't blame the AC manufacturer or the installer in this scenario. The AC here is analogous to the sound system, the electric supply to the strength of the voice being sung into the mic, and the room size to the size of the venue.

Let’s talk about the other facet of the problem — the overall volume distribution. Things get a bit technical from here on, so it might be more applicable to people involved in organising events where sound reinforcement is necessary. The general rule of thumb for volume distribution is to have a sound system that can produce a non-distorted, full-bandwidth level of 110 dB SPL with a variance of ±3 dB between the front row and the last row of the audience. Read that a couple of times if needed; it can be a little difficult to grasp, but I will try to simplify it. What this means is: if you are standing in the middle of a venue of 50 feet by 30 feet and play a song to check, the sound system should be able to play it cleanly at 110 dB SPL in the middle of the venue. If you move to the first row with the song volume unchanged, the level from the speakers should be no louder than 113 dB, and at the last row no less than 107 dB. You can then be pretty confident that during the performance the audience at the front won't complain it's too loud, and the ones at the back won't complain it's too soft, while the people in the middle wonder what the other two-thirds are whining about. It is a way of minimising the huge difference between the shades of black and white. Equality is never possible in audio transmission; it is always about choosing the best compromise.
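To see why that ±3 dB window is hard to hit with a conventional speaker on a stand, here is a minimal sketch of the free-field inverse-square law, under which SPL falls by 20·log10 of the distance ratio, i.e. 6 dB per doubling of distance. The distances and the 110 dB calibration point are illustrative assumptions, not measurements from any real venue.

```python
# Free-field point source: SPL falls 20*log10(d2/d1) dB with distance.
# All numbers below are illustrative assumptions.
import math

def spl_at(distance_ft, ref_spl=110.0, ref_distance_ft=25.0):
    """Level at a listener, given 110 dB SPL calibrated mid-venue (25 ft)."""
    return ref_spl - 20 * math.log10(distance_ft / ref_distance_ft)

# One point-source speaker at the stage lip of a venue 50 ft deep:
for spot, d in [("front row", 5), ("middle", 25), ("last row", 50)]:
    print(f"{spot:>9}: {spl_at(d):5.1f} dB SPL")
# front row: 124.0, middle: 110.0, last row: 104.0 -- a 20 dB spread,
# nowhere near the +/-3 dB target.
```

That 20 dB spread is exactly what flying speakers high and angling them attacks: it shrinks the ratio between the nearest and farthest listening distances, pulling the variance toward the ±3 dB window.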

Also, I mentioned the terms non-distorted and full-bandwidth, which means we cannot use speakers that can attain the 110 dB number but cannot reproduce sound with those qualities. An example of such a speaker is in the image below.

A typical horn-only speaker that can easily reach high SPLs but severely lacks low distortion and full bandwidth, though it's good enough for plain speech transmission

There are already solutions researched and developed by reputable speaker manufacturers like JBL, RCF, D&B, L-Acoustics, etc., who have gone to the extent of designing simulation software that gives you the exact number of speakers you need, and where to place them, to get that kind of volume (or more, if required) with those qualities. So there needs to be no guesswork involved. Once you plot the venue in the simulation software, it shows you exactly what level every audience member in the venue is receiving. If you want more, or less, it's just a matter of changing a few parameters inside the software and checking the angles, positions and numbers of speakers to reach your desired output. Tweak till you are happy, with no need to spend hours at the venue itself trying to figure all this out. These programs also predict the tonality of the speakers in use (minus the acoustics of the venue), but getting a consistent tonality across every seat in the venue is a huge subject on its own, commonly known as sound system design and engineering, which takes years to learn and master. We will not be getting into that chapter here.

A line array speaker system by JBL. Notice the 8 identical speaker boxes
that make up the array. Pic credit: desch-audio.de

But as with everything else, this kind of convenience comes at an added cost. The speakers in use (above) are purpose-designed for more even distribution and are called line array systems. They need to be flown from some kind of rigging (also an added cost), not put on top of tables like traditional speakers, which are (mostly) called point source speakers. Not to mention that, after doing all the simulation in the software, you may discover that your client (or you yourself) does not have the budget, or the number of speakers, that the software asks you to use for that application. So even though the solution to this problem exists on paper, in practice, more often than not, it does not, mostly due to budget constraints, unless you are organising a Coldplay or A.R. Rahman concert or something of that scale.

The more common kind of speaker you see around, known as a
point source speaker

With that constraint in mind, I would like to recommend solutions that are easier to implement if there is no budget for line arrays, or even for the adequate number of boxes needed in a line array. In no way is this a replacement for a proper line array system, designed accurately per the simulation software for that particular venue by an experienced system engineer. Nevertheless, the show must go on, line array or not. So instead of complaining about what we cannot do without one, let's figure out how to utilise the boxes at hand. We will look into a few things we can try and follow, and things we should avoid, in the second part of this article, which will be up soon.


Saturday 22 August 2020

Don’t record your live show audio like this

Recording a 2-track stereo mix of a live show is a very simple task on its own. You take a 2-channel sound card and a laptop, take two outputs from the mixer and record them into any DAW of your choice. Or you might even use a portable recorder from Zoom or Tascam and skip the sound card and laptop altogether. Some mixers, like the Behringer X32, also let you record directly onto a USB drive. Whichever method you follow, the process of recording the audio is fairly simple. But there is only one glitch in this matrix.

 

Pic credit: Nenad Stojkovic


90% of the time, the 2-track audio being recorded is assigned 'post-insert' and 'post-fader' to the master out via a 'matrix' that includes all the EQ done to the PA system to correct room anomalies or even cut out potential feedback frequencies. All this EQ done for the venue is meant for the audience present there, so they hear a good-sounding concert, possibly without feedback too, but it is definitely not meant to be part of the recorded audio.

In layman's terms: all the processing done to make that brand of speakers sound good in that room is also getting recorded into your audio. That is like serving the tea leaves along with the tea after you have taken the trouble to carefully strain it. I am sure if you were served tea like that in a cafe, you would not accept it. Then why in the case of audio?

Because you are not aware, well, until now at least, of how this tea should be served. If you are using the recorded audio just as a reference to hear how you performed, so you can identify your mistakes and improve next time, it is still okay to record the 2-track this way. But if you plan to release it publicly, then you should try and avoid this anomaly altogether. How big a difference does it make, you might ask? Have a look below at one instance of the PA EQ for an outdoor venue, done by an FOH engineer (the person who mixes the audio for the audience).

Ignore the hazy pic. Was clicked surreptitiously ;)

As you can see, the engineer felt he needed to do a lot of EQ-ing to make the speakers sound good that day at that venue. I am not questioning whether this EQ on the PA was the correct decision or not. What I am asking you to consider, as a musician or an engineer (or even a listener), is: would you like this EQ to also get recorded into your audio? The audio you are recording straight from the mixer goes directly from the instruments, via cables and the mixer, into your sound card or some other recording device (Tascam, USB, or even a video camera feed). The speaker and room-correction EQ should have no part in that audio. Why then would you want to deteriorate it with an EQ that is meant as a fix for a different problem?

You might even say that the recorded material sounded pretty fine when you heard it later, and if you can't hear the difference, why bother? Remember how, when YouTube first came out and you watched videos in 360p (which was all it offered at the time), they looked fine? But now, when the internet is slow, you still wait a moment to buffer 720p or 1080p before you watch. The same goes for the audio you recorded. Without the full-bandwidth file to compare against, the degraded audio might sound fine on its own. Now, if you are okay with not hearing the full bandwidth and resolution of your own music, that is a perfectly valid decision. As long as you make a conscious choice, there is no right or wrong here either.

I would like to mention here that tweaking the EQ-ed audio after it has been recorded is possible to some extent, to make it sound better in mixing or mastering or whatever shaman magic you want to call it. But then again, wouldn't you rather tweak audio that has better bandwidth and resolution to start with?

Also, if you are recording the audio to check whether your tones sound fine, audio closer to the source is what you need, so you get a truer representation of what is coming out of your instrument.

So then, what is the solution to this problem? Just make sure you record the audio 'pre-fader' and 'pre-insert' of your master bus.

Technically speaking, the 2-track audio for recording is usually routed via two matrix outs on the mixer. But most of the time it is routed in a post-fader configuration, which automatically includes all the EQ and any other processing done on the master bus for that venue. If you are a musician or video personnel, or someone who is not comfortable with all this technical mumbo jumbo, just ask the sound person handling the console to route the master audio 'pre-fader' and 'pre-insert', and you should be good to go. Most digital mixers in use today, whether high-end or low-end, have this option. Using group outs should also be fine, as long as they themselves don't have some kind of processing involved.
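If the routing jargon feels abstract, here is a toy sketch of the signal flow, purely conceptual: the names master_eq and fader are mine, not any console's actual controls. It shows why a post-fader, post-insert tap bakes the venue EQ and the fader rides into the recording, while a pre-fader, pre-insert tap captures the mix before either.

```python
# Toy model of the 2-track tap point (conceptual, not a console API).
def master_eq(signal):
    return signal + " + PA-correction EQ"   # insert on the master bus

def fader(signal):
    return signal + " + fader rides"        # level moves during the show

mix = "summed channel mix"
pre_tap = mix                        # 'pre-fader, pre-insert': record this
post_tap = fader(master_eq(mix))     # what most live 2-tracks capture

print("pre :", pre_tap)
print("post:", post_tap)
```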

Now, keep in mind that recording the output this way still includes all the processing done on the individual channels, like kick, snare, bass, guitars, vocals, etc. To get absolutely unprocessed audio, exactly as sent from your instrument to the mixer, a multitrack recording is the only way. But that requires a different setup and usually a higher cost in terms of the specific mixer, recording laptop and hard drive involved. A few not-so-expensive mixers like the X32/M32 allow inexpensive multitrack recording simply via a USB cable, but audio dropouts occur on it more often than not. A serious multitrack recording, with redundancy so you don't lose a second of your one-take live performance, comes at an added cost. But a one-step, better-quality stereo recording comes for free if you just follow the simple routing solution.




Tuesday 18 August 2020

DIY Bass Tones For 3 Genres


Bass guitar tones have always been a subject of confusion and conflict in both studio and live sessions. You are not alone if the perfectly fine bass tone you achieve at home or at practice always feels different on stage or somewhere else. Or maybe you know the kind of tone you want but have never been able to dial it in 100% to your satisfaction. Or you can dial in your tone when playing by yourself, but as soon as the band or track comes in, it gets muddy and lost.

Dialling in the right amount of low end needs a room with the "right" acoustics. If your room has a flat low end, with no modal problems, no standing waves, no phase smearing in the time domain, and the right amount of frequency decay, and all of this shows up when it is measured with the right tests, then you can create your bass tone there and be 100% sure that your bass guitar's low end is exactly what you are hearing. And no, neither you, nor the person with the best hearing in the world, can say that a room sounds fine just because your ears say so. The human ear is a fantastic piece of engineering (by God, if you so believe), but many of the above acoustic problems cannot be pinpointed with precision without proper measurement. Room acoustics is pure physics; there is no arbitration or subjectivity here, unlike in art, where one person's good is another person's bad.
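For the curious, measurement software such as Room EQ Wizard typically plays a logarithmic sine sweep through your speakers and records it back with a measurement mic, from which the room's frequency and decay behaviour can be derived. Here is a minimal sketch of generating such a sweep file, assuming Python with numpy and scipy; the 20 Hz to 20 kHz range, ten-second length and fade times are arbitrary choices.

```python
# Generate a 20 Hz - 20 kHz logarithmic sweep as a 16-bit WAV test signal.
import numpy as np
from scipy.io import wavfile
from scipy.signal import chirp

sr, dur = 48000, 10.0
t = np.linspace(0, dur, int(sr * dur), endpoint=False)
sweep = chirp(t, f0=20, t1=dur, f1=20000, method="logarithmic")

# Short fades at both ends avoid clicks from the speakers.
fade = np.minimum(1.0, np.minimum(t / 0.1, (dur - t) / 0.1))
wavfile.write("sweep_20_20k.wav", sr, (0.5 * sweep * fade * 32767).astype(np.int16))
```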

Now, designing a room with the "right" acoustics might fit into one line of text on paper, but in reality it takes a room of certain dimensions (to start with), and then quite some time, effort and money, for it to be considered as having the "right" acoustics. Most bedrooms are too small to fall into this category.

I would like to mention here that a lot of you (and I) have DIY acoustic panels made for our home studios, and they definitely help the room sound better. If you don't have any and are considering getting them made, do not hesitate. Acoustic treatment, however small or large in quantity, always helps you make better decisions in audio. But a flat low end that adds up in both the frequency and time domains is something that generally only professional mastering houses have, amongst a few other places.

 

The point I am trying to make is that dialling in the right low end is a universal problem, not yours alone. Headphones do help, but even different headphones have different low-frequency curves. Getting the low end right in less-than-ideal situations can be a topic for another post. But hey, there is more to bass than the low end. So, instead of trying to dial in the perfect low end, let's explore dialling in three kinds of bass tones for three different genres of music, all from DI bass tracks, which I am sure is how most musicians record their bass at home. A few of you might have a preamp pedal of some kind, like a Sansamp or a Darkglass, so I will use that here to show its versatility.

All 3 tracks featured in the video below are from artists that I have worked with. 

One is a pop-rock bass track by Abhishek Nona Bhattacharya.

Another is a folk track played by Shamik Chatterjee. 

And, lastly, a metal bass tone played by Pradyumna Laskar.

Links for the full songs have been included in the video description, if you are interested.

The main intent of this video is to speak to bass players rather than audio engineers. So I will keep it simple and demonstrate how, using just one bass preamp and a compressor, you can get a bass tone that sounds more interesting in the context of a song than the plain DI track from your guitar. There is nothing wrong with using the DI track as your tone, though, and whether you like the colour from the preamp is entirely an artistic choice. You can always get into complicated parallel chains where you blend both the DI and the preamp tone to your taste. But for this demonstration we will just use a preamp to dial in the main sound. I will be using a free digital simulation of the Sansamp Bass Driver, which most bassists seem to own in its hardware pedal format, so it is easier for you to try hands-on if you own one. The plugin version is called B.O.D. by TSE; you can Google it to find the download. The compressor in use is also a free plugin, by Klanghelm.

 
 

So, hopefully the above video demonstrates how just a couple of processors, hardware or software, can be used to create vastly different tones suiting the sound you want. It is always best to start with one or two units, so you learn to use them to their full extent. You don't need a lot. Hence, I suggest not getting into multi-fx processors in the beginning, as the plethora of options, and even the presets, can just confuse you more.

Once you are confident of exactly what you need to create your sound, a multi-fx processor is always easier to dial in.

Remember to spend more time capturing better performances than chasing the perfect low end or perfect compression. And of course, put on brand new strings when you record ;)


Saturday 15 August 2020

DIY home recording is great but don't ignore the details

 

If you could somehow time-travel to 1965, to Abbey Road Studios (then known as EMI Studios) while The Beatles were recording the album Help!, and tell them that 50 years on you would be able to do everything they were doing in your bedroom, there is a high chance they would scoff at you and ignore your presence altogether, as if you were a naive studio intern.

Cut to 2020: audio technology has developed in leaps and bounds over the past decades, especially with the advent of digital and the incorporation of computers into every sphere of music-making and publishing. Today you really can record a full-length album of any genre you might imagine (including classical) with just a computer, an audio interface of some kind, a pair of decent speakers or headphones, and the software and instruments your particular needs demand. This whole setup can be had for a tenth of what it would have cost, at minimum, to record music 20 years back. If you can afford a mid-range smartphone, you can afford this setup. All of it can live in your bedroom, or a spare room you have converted for the purpose and lovingly named 'XYZ' studio. You no longer have to carry your heavy instrument to another studio, pay hourly rent (or use your friend's setup for free), be commanded by the recording engineer (even when you don't want it), practice your parts thoroughly before going in to keep the bill under budget, and, of course, pay extra for everyone's lunch.

At home, you have none of the above to think about. You can sit down any time you want, connect your instrument or start programming your ideas straight into the recording software (called a DAW, short for Digital Audio Workstation) and work for as long as you like without worrying about studio bills. You can send your parts to your bandmates, who can add theirs in their own home setups, and the process can continue till you are happy with the final product. Rinse and repeat. Sounds easy and fun, right? It is. But as with anything and everything in life, all this DIY audio work also comes with limitations.

Firstly, I am not against everyone having access to technology and the ability to explore their creativity affordably and accessibly. It is truly another milestone of the modern world that this is available to almost every musician living today.

But with great accessibility comes greater responsibility, or rather greater attention to detail. There are basic technicalities involved in operating any equipment, however easy or complicated it might be. A musician does not have to know every function their DAW has to offer. But they do have to know how to dial in a proper gain structure on their sound card, when to use a mic input, an instrument input or a line input, whether to record to a stereo track or a mono one, how to do proper crossfades with overlapping takes, how to edit out anything unmusical, how to export the tracks in the session before sending them for mixing, and so on. And I am not even going to go into how easy it is to be lazy and record an "okay" take when you know you could have done much better with someone else there to push you.
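As one small example of what checking your gain structure can look like after the fact, here is a minimal sketch, assuming Python with numpy and scipy and an integer-format WAV export; the file name is hypothetical. It reports the peak and RMS level of a take in dBFS: a peak sitting comfortably below 0 dBFS (a common habit is tracking with peaks somewhere around -18 to -10 dBFS) leaves healthy headroom, while a peak pinned at 0 dBFS means the take has clipped.

```python
# Report peak and RMS of an exported take in dBFS.
# Assumes an integer-format WAV; "my_take.wav" is a hypothetical name.
import numpy as np
from scipy.io import wavfile

sr, data = wavfile.read("my_take.wav")
x = data.astype(np.float64) / np.iinfo(data.dtype).max  # normalise to +/-1.0

peak_db = 20 * np.log10(np.max(np.abs(x)))
rms_db = 20 * np.log10(np.sqrt(np.mean(x ** 2)))
print(f"peak {peak_db:6.1f} dBFS | RMS {rms_db:6.1f} dBFS")
```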

These are very simple things that most musicians recording at home seem to ignore, wanting only to get their ideas down and be done with it. There are numerous free videos on YouTube that teach these simple technicalities, and once you learn them, they become second nature. It is like driving: when you start learning, you have to keep track of when to press the clutch and when to change gear, all while keeping a lookout for the random pedestrian, and eventually it all happens without thinking. A lot of musicians feel demotivated at the thought of getting into the technical nitty-gritty when they sit down to record, but hey, if you want to DIY and save money, that does not mean you can skip the technicalities or leave them for someone else. You just have to, well, do it yourself.

Digital audio also leaves open the possibility of doing a lot more at the mixing stage, so a lot of musicians prefer to leave decisions open while recording. But instead of waiting till mixing to give references of other tracks whose snare tone you want, why not start with the reference while recording? Why not see how close you can get to your favourite bass or guitar tone before hitting the record button? Today, most popular guitar and bass amps have a digital simulation of some kind, which you can use to get close to your favourite tone from another artist. You just have to do a bit of research about what gear is being used, and that is easier today than getting a no-refusal yellow cab.

Most musicians have some kind of finished idea of their song in their head; it is almost as if they can hear how the song sounds even before recording it. If you use that as a blueprint while recording and while getting the tones for your drums, bass, guitars, synths, vocals or any other instrument you play, your song will sound much closer to the version running inside your head, instead of waiting till the mixing stage to imprint it, whoever might be mixing, be it you, your bandmate or a professional mix engineer. Yes, it takes a lot of practice, back-and-forth and critical listening to match the sound in your head to what you are actually hearing from your speakers, but that's the 'price' you pay when you DIY to save money.

In the early days, when musicians recorded on analog consoles and tape, there was very little that could be done at the mixing stage, considering every analog EQ, compressor or effect cost a lot of money. So they had to get most of the sound imparted while recording. By the time mixing started, the song already sounded close to what they had envisioned, and mixing couldn't make or break it. The song had already made it from the artist's mind onto the recording.

If you keep recording your song expecting that certain things will be fixed during mixing, more often than not you will be disappointed. So it's best to research what you want even before you record, or to consult a professional on how you could achieve that sound.


Finally, the most important part of any audio work is the monitoring, which includes the speakers you are listening through and your room. What you are hearing is not exactly what is being recorded. Unless you constantly reference on decent headphones and against reference tracks, you are bound to be misled by your speakers and the room. It is a limitation of home setups, where meaningful acoustic treatment and proper speaker placement are rarely possible. But then again, DIY has its limitations.

I am not trying to discourage any musician from doing DIY home recordings, but for the sake of their own music they should be aware of the problems that come with it. Only when you accept the problems can you take the necessary steps to solve them. With the plethora of information available right now, finding ways to tackle these problems is also very accessible. Agreed, some of them come with a cost. But if you go in with the mindset that "I will take whatever comes", you will slowly lose the love and motivation for getting better at your craft. Making and recording music that sounds like the music that inspired you to make it comes at a price.