Thursday 27 August 2020

Why live shows often don't sound the way you expect (Part 1)

If you have ever been in the audience at a live performance of any kind, be it music, talk or theater, I am sure, more often than not, you have felt that the sound was either too loud or too soft rather than at a comfortable volume. You might also have felt that the vocals or some other instrument lacked clarity, or that even at the right volume, it did not match the auditory image in your head.

Now, however much we would like to believe that every problem and its solution is black and white, life really doesn’t work that way. And when we are talking about something that is invisible in itself (I mean sound, in case you were wondering), the shade chart between black and white becomes even more diffused. When something is invisible, the confusion when referring to it becomes acute. You might be sitting in the front rows at a Zakir Hussain concert, feeling a little uncomfortable with the overall volume whenever Ustadji smacks the skin of the tabla daya with his powerful strokes, while the person sitting right beside you feels that the high-intensity volume is exactly what makes the concert exciting. Or, you might be unfortunate enough to find yourself behind the last barricade at some concert, some 120 feet from the stage, feeling you can’t hear the vocals clearly, while just beside you an elderly gentleman feels this is absolutely the perfect volume, very close to how he listens to his radio at home.

Now consider a small venue with an audience of 500. If you ask each of them how they would prefer it to sound, you will get data that is pretty much useless for coming to a logical conclusion about how to run the audio of the show. So then, how do we deal with this conundrum? How do we come up with a stable solution when the problem itself is a moving target? If the audience were asked to come up with a solution amongst themselves before the show starts, I am sure it would look something akin to our honourable ministers trying to decide on a matter in Parliament.

But that is exactly the responsibility the sound engineer (or whoever is handling the sound) has to take for a live performance: deciding on behalf of a room of 500 people or a field of 50,000 how it should sound, so that everyone who paid for a ticket goes home happy. Needless to say, more often than not, he is criticised by some percentage of the audience after every show because, remember, there is no common target for how it should sound that the whole audience will agree on.

There are two facets to this problem. One is the overall volume: how loud or soft the whole performance is. The other is the tonal balance: how blunt or sharp the tabla sounds, or how clearly you can hear the vocals, and so on. Tonal balance is an even harder target, since tonality is like colour, and two people might not like the same shade of green even if they both like green. So what you feel is the right amount of treble in the tabla sound might be too much for the person right next to you. It is, again, a subjective choice, and there is no right or wrong about the colour you prefer. Someone in the audience will always be disappointed, but hey, you live in a world where global inequality is very real, so this concept should not be hard for you to grasp.

As for the clarity of the vocals, or any other instrument for that matter, you could ask why the volume can’t simply be increased to make it more audible, like you do on your earphones. You are partly right: it is just a matter of pushing the fader (the volume control for the vocal or other instruments) to make it clearer and louder. But if it were that easy, don’t you think this problem wouldn’t be so common, and the audio person in charge would have done it instead of inviting displeased comments from the audience? How much volume you can push for the clarity of the vocals (or any mic-ed instrument) depends on a lot of factors, like how adequate the sound system is for that venue, how the speakers have been placed and angled with respect to the mics, and how loud the voice itself is. Imagine running a 1-tonne AC on your residential meter. It is going to run fine even in the hottest of summers as long as the room is adequately sized. But if you try to cool a room double the size of what that 1 tonne can cover, it will struggle. Conversely, if you add another AC for the other room on top of this one, you run the risk of frying your electric supply; you need to pay for a meter of higher load capacity. But we wouldn’t go and blame the AC manufacturer or the person who installed the AC in this scenario. The AC here is analogous to the sound system, the electric supply to the strength of the voice being sung into the mic, and the room size to the size of the venue.

Let’s talk about the other facet of the problem: the overall volume distribution. Things might get a bit technical from here onwards, so this part might be more applicable to people involved in organising events where sound reinforcement is necessary. The general rule of thumb for volume distribution is to have a sound system that can produce a non-distorted, full-bandwidth level of 110 dB SPL with a variance of ±3 dB between the front row and the last row of the audience. Read that a couple of times if needed; it can be a little difficult to grasp, but I will try and simplify it. What this means is: if you are standing in the middle of a venue of 50 feet by 30 feet and play a song to check, the sound system should be able to play it cleanly at 110 dB SPL in the middle of the venue. If you move to the first row with the song volume unchanged, the level from the speakers should not be louder than 113 dB, and at the last row it should not be less than 107 dB. You can then be pretty confident that during the performance, the audience at the front won’t complain that it’s too loud and the ones at the back that it’s too soft, while the people in the middle cannot understand what the other two-thirds are whining about. It is a way of minimising the huge difference between the shades of black and white. Equality is never possible in audio transmission; it is always about choosing the best compromise.
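
To get an intuition for why this spec is hard to hit, consider the free-field behaviour of a single conventional speaker, whose level falls by about 6 dB every time you double your distance from it. Below is a minimal sketch of that inverse-square falloff; the distances are hypothetical numbers I picked for illustration, not measurements from any real system.

```python
import math

def spl_at_distance(spl_ref, d_ref, d):
    """Free-field point source: level drops ~6 dB per doubling of distance."""
    return spl_ref - 20 * math.log10(d / d_ref)

# Hypothetical room: system calibrated to 110 dB SPL at the middle (25 ft out)
front = spl_at_distance(110, 25, 10)  # ~118 dB, well above the 113 dB limit
mid   = spl_at_distance(110, 25, 25)  # 110 dB by definition
back  = spl_at_distance(110, 25, 50)  # ~104 dB, below the 107 dB limit

print(f"front {front:.1f} dB, mid {mid:.1f} dB, back {back:.1f} dB")
```

A single speaker that satisfies the middle of the room misses the ±3 dB window at both ends, which is exactly why the count, placement and angling of speakers matter so much.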

Also, I mentioned the terms non-distorted and full-bandwidth, which means we cannot use speakers that can hit the 110 dB number but cannot reproduce sound with those qualities. An example of such a speaker is shown in the image below.

A typical horn-only speaker that can easily reach high SPL but severely lacks low distortion and full bandwidth; it is good enough only for the transmission of speech

There are already solutions researched and developed by reputable speaker manufacturers like JBL, RCF, D&B, L-Acoustics, etc., who have all gone to the extent of designing simulation software that gives you the exact number of speakers you need and where to place them to get that kind of volume, or more if required, with those qualities. So there needs to be no guesswork involved. Once you plot the venue in the simulation software, it shows you exactly what level every audience member in the venue is receiving. If you want more, or less, it’s just a matter of changing a few parameters inside the software and checking the angles, positions and numbers of speakers to reach your desired output. Tweak till you are happy, with no need to spend hours at the venue itself trying to figure all this out. These programs also predict the tonality of the speakers in use (minus the acoustics of the venue), but getting a consistent tonality across every seat in the venue is a huge subject on its own, commonly known as sound system design and engineering, which takes years to learn and master. We will not be getting into that chapter here.

A line array speaker system by JBL. Notice the 8 identical speaker boxes
that make up the array. Pic credit: desch-audio.de

But as with everything else, this kind of convenience comes at an added cost. The speakers in use (above) are purpose-designed for more even distribution and are called line array systems. They need to be flown from a rigging structure of some kind (also an added cost) rather than put on top of tables like traditional speakers, which are (mostly) called point source speakers. Not to mention, after doing all the simulation in the software, you may figure out that your client (or you yourself) does not have the budget or the number of speakers that the software is asking you to use for that application. So, even though the solution to this problem exists on paper, in practice, more often than not, it does not. Mostly due to budget constraints, unless you are organising a Coldplay or A.R. Rahman concert or something of that scale.
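
The reason line arrays distribute level more evenly comes down to geometry: in its near field, a tall line of coupled boxes behaves roughly like a line source, losing about 3 dB per doubling of distance instead of the 6 dB of a point source. Here is a rough comparison under those idealised assumptions (real arrays eventually transition to point-source behaviour in the far field):

```python
import math

def point_source_drop(d_ref, d):
    """Point source: ~6 dB loss per doubling of distance."""
    return 20 * math.log10(d / d_ref)

def line_source_drop(d_ref, d):
    """Idealised line source (near field): ~3 dB loss per doubling."""
    return 10 * math.log10(d / d_ref)

for d in (10, 20, 40, 80):  # hypothetical distances in feet
    print(f"{d:>3} ft: point -{point_source_drop(10, d):4.1f} dB, "
          f"line -{line_source_drop(10, d):4.1f} dB")
# At 80 ft the point source has lost ~18 dB; the line source only ~9 dB.
```

That flatter falloff is what lets a well-designed array keep the front and back rows inside the ±3 dB window discussed earlier.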

The more common type of speaker you see around, known as a point source speaker

With that constraint in mind, I would like to recommend solutions that are easier to implement when there is no budget for line arrays, or even for the adequate number of boxes needed in a line array. They are in no way a replacement for a proper line array system, designed accurately per the simulation software for that particular venue by an experienced system engineer. Nevertheless, the show must go on, line array or not. So, instead of complaining about what we cannot do without one, let’s figure out how we can utilise the boxes at hand. We will look into a few things we can try and follow, and things we should avoid, in the second part of this article, which will be up soon.


Saturday 22 August 2020

Don’t record your live show audio like this

Recording a 2-track stereo mix of a live show is a very simple task on its own. You take a 2-channel sound card and a laptop, take two outputs from the mixer and record them into any DAW of your choice. Or you might use a portable recorder from Zoom or Tascam and skip the sound card and laptop altogether. Some mixers, like the Behringer X32, even give you the option of recording directly onto a USB drive. Whichever method you follow, the process of recording the audio is fairly simple. But there is one glitch in this matrix.

 

Pic credit: Nenad Stojkovic


90% of the time, the 2-track audio being recorded is assigned 'post-insert' and 'post-fader' to the master out, via a 'matrix' that includes all the eq done to the PA system to correct room anomalies or even to cut out potential feedback frequencies. All this eq done for the venue is meant for the audience present there, so they get to hear a good-sounding concert, possibly without feedback too, but it is definitely not meant to be part of the recorded audio.

In layman's terms, all the processing done to make that brand of speakers sound good in that room is also getting recorded into your audio. That is like serving the tea leaves along with the tea after you have taken the trouble to carefully strain it. I am sure if you were served tea like that in a cafe, you would not accept it. Then why in the case of audio?

Because you are not aware, well, until now at least, of how this tea should be served. If you are using the recorded audio just as a reference to hear how you performed, so you can identify your mistakes and improve next time, it is still okay to record the 2-track this way. But if you plan to release it publicly, then you should try and avoid this anomaly altogether. How big a difference does it make, you might ask? Have a look below at one instance of the PA eq for an outdoor venue, done by an FOH engineer (the person who mixes the audio for the audience).

Ignore the hazy pic. Was clicked surreptitiously ;)

As you can see, the engineer felt he needed to do a lot of eq-ing to make the speakers sound good that day at that venue. Now, I am not questioning whether this eq on the PA was a correct decision or not. What I am asking you to consider, as a musician or an engineer (or even a listener), is: would you like this eq to also get recorded into your audio? The audio that you are recording straight from the mixer goes straight from the instruments, via cables and the mixer, into your sound card or some other recording device (Tascam, USB or even a video camera feed). So the speaker or room-correction eq should have no business being in that part of the audio. Why then would you want to degrade it with an eq that is meant as a fix for a different problem?

You might even say that the recorded material sounded pretty fine to you when you heard it later, and if you can't hear the problem, then why bother? Remember how, when YouTube first came out and you started watching videos in 360p (which was all it offered at the time), it looked fine? But now, when the internet is slow, you still wait a moment for it to buffer in 720p or 1080p before you watch. The same goes for the audio you recorded. Without the full-bandwidth file to compare to, the degraded audio might sound fine on its own. Now, if you are okay with not listening to the full bandwidth and resolution of your own music, that is a perfectly valid decision. As long as you make a conscious choice, there is no right or wrong here either.

I would like to mention here that it is possible, to some extent, to tweak the eq-ed audio after it has been recorded to make it sound better, be it in mixing or mastering or whatever shaman magic you want to call it. But then again, would you rather not start the tweaking with audio that has better bandwidth and resolution?

Also, if you are recording the audio to check whether your tones sound fine, audio captured closer to the source is what you need, so you get a truer representation of what is coming out of your instrument.

So then, what is the solution to this problem? Just make sure you record the audio 'pre-fader' and 'pre-inserts' to your master track. 

Technically speaking, the 2-track audio for recording is routed via 2 matrix outs on the mixer. But most of the time it is routed in a post-fader configuration, which automatically includes all the eq and any other processing you have done on the master bus for that venue. If you are a musician or a video person, or someone who is not comfortable with all this technical mumbo jumbo, just ask the sound person handling the console to route the master audio 'pre-fader' and 'pre-inserts', and you should be good to go. Most digital mixers in use today, whether high end or low end, have this option. Using group outs should also be fine, as long as they themselves don't have some kind of processing involved.
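
If the routing jargon is hard to visualise, here is a toy model of the signal flow, just to show why the pre-insert/pre-fader tap escapes the master-bus eq. Everything here is made up for illustration (the 'eq' is a crude low-pass stand-in); it is not how any real console is implemented.

```python
import numpy as np

def master_bus_eq(signal):
    """Stand-in for room-correction eq on the master bus (a crude high cut)."""
    kernel = np.ones(64) / 64               # moving average ~ low-pass filter
    return np.convolve(signal, kernel, mode="same")

def mixer(channels, master_fader=0.8):
    summed = np.sum(channels, axis=0)       # channel processing already applied
    pre_tap = summed                        # 'pre-insert, pre-fader': record this
    post_tap = master_bus_eq(summed) * master_fader  # what the PA receives
    return pre_tap, post_tap

# Two hypothetical channel signals standing in for vocals and bass
t = np.linspace(0, 1, 48000, endpoint=False)
vocals = 0.3 * np.sin(2 * np.pi * 440 * t)
bass = 0.3 * np.sin(2 * np.pi * 80 * t)

record_this, send_to_pa = mixer([vocals, bass])
# The pre tap is untouched; the post tap has the toy eq and the master
# fader's level change permanently baked in.
```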

Now, keep in mind, recording the output this way still includes all the processing done on the individual channels: kick, snare, bass, guitars, vocals, etc. To get absolutely unprocessed audio, exactly as sent from your instrument to the mixer, a multitrack recording is the only way, but that requires a different setup and usually comes at a higher cost in terms of the specific mixer, recording laptop and hard drive involved. A few not-so-expensive mixers like the X32/M32 allow inexpensive multitrack recording simply via a USB cable, but audio dropouts occur on it more often than not. A serious multitrack recording with redundancy, so you don't lose a second of your one-take live performance, comes at an added cost. But a one-step, better-quality stereo recording comes for free if you just follow the simple routing solution.




Tuesday 18 August 2020

DIY Bass Tones For 3 Genres


Bass guitar tones have always been a subject of confusion and conflict in both studio and live sessions. You are not alone if the perfectly fine bass tone you achieve at home or at practice always feels different on stage or somewhere else. Or it could be that you know the kind of tone you want but could never dial it in 100% to your satisfaction. Or maybe you can dial in your tone when playing by yourself, but as soon as the band or the track comes in, it gets muddy and lost.

Dialling in the right amount of low end needs a room that has the “right” acoustics. If your room has a flat low end, free of modal problems, standing waves and phase smearing in the time domain, with the right amount of frequency decay, and all of this shows up when it is measured with the proper tests, then you can create your bass tone there and be 100% sure that your bass guitar’s low end is exactly what you are hearing. And no, neither you, nor the person with the best hearing in the world, can say that your room sounds fine just because your ears say so. The human ear is a fantastic piece of engineering (by God, if you so believe), but many of the above acoustic problems cannot be pinpointed precisely without proper measurement. Room acoustics is pure physics; there is no arbitration or subjectivity as in art, where one person’s good is another person’s bad.

Now, designing a room with the “right” acoustics might fit into one line of text on paper but, in reality, it takes a room of certain dimensions (to start with) and then quite some time, effort and money for it to be considered as having the “right” acoustics. Most bedrooms are too small to even fall into this category.
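
To see why small rooms struggle, you can estimate where the axial room modes land. This is a minimal sketch, assuming rigid parallel walls and a speed of sound of 343 m/s; the bedroom dimensions are hypothetical:

```python
C = 343.0  # speed of sound in air, m/s

def axial_modes(length_m, count=4):
    """First few axial mode frequencies along one room dimension."""
    return [n * C / (2 * length_m) for n in range(1, count + 1)]

# A hypothetical 3.0 m x 3.6 m bedroom
for dim in (3.0, 3.6):
    freqs = ", ".join(f"{f:.0f} Hz" for f in axial_modes(dim))
    print(f"{dim} m wall: {freqs}")
# 3.0 m wall: 57 Hz, 114 Hz, 172 Hz, 229 Hz
# 3.6 m wall: 48 Hz, 95 Hz, 143 Hz, 191 Hz
```

Those few, widely spaced resonances sit right in bass-guitar territory, so some notes boom and others disappear depending on where you and the speakers are in the room.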

I would like to mention here that a lot of you (and I) have DIY acoustic panels made for our home studios, and they are definitely helping the rooms sound better. If you don’t have any and are considering getting them done, do not hesitate. Acoustic treatment, however small or large in amount, always helps you make better decisions in audio. But having a flat low end that adds up in both the frequency and time domains is something that generally only professional mastering houses have, amongst other places.

 

The point I am trying to make is that dialling in the right low end is a universal problem, not yours alone. Headphones do help, but even different headphones have different low-frequency curves. Getting the low end right in less-than-ideal situations can be a topic for another post. But hey, there is more to bass than the low end. So, instead of trying to dial in the perfect low end, let’s explore dialling in 3 kinds of bass tones for 3 different genres of music, all from DI bass tracks, which I am sure is how most musicians are recording their bass at home. A few of you might have a preamp pedal of some kind, like a Sansamp or a Darkglass, so I will use one for this purpose to show its versatility.

All 3 tracks featured in the video below are from artists that I have worked with. 

One is a pop-rock bass track by Abhishek Nona Bhattacharya.

Another is a folk track played by Shamik Chatterjee. 

And, lastly, a metal bass tone played by Pradyumna Laskar.

Links for the full songs have been included in the video description, if you are interested.

The main intent of this video is to address bass players rather than audio engineers. So I will keep it simple and demonstrate how, using just one bass preamp and a compressor, you can get a bass tone that sounds more interesting in the context of a song than the plain DI track from your guitar. There is nothing wrong with using the DI track as your tone, though, and whether you like the colour from the preamp is entirely an artistic choice. You can always get into complicated parallel chains where you blend both the DI and the preamp tone to your taste, but for this demonstration we will just be using a preamp to dial in the main sound. I will be using a free digital simulation of a Sansamp Bass Driver, which most bassists seem to own in its hardware pedal format, so it is easier for you to try hands-on if you own one. The plugin version is called B.O.D. by TSE; you can Google it to find the download. The compressor in use is also a free plugin, by Klanghelm.

[Embedded video: dialling in 3 bass tones from the DI tracks]

So, hopefully the above video demonstrates how just a couple of processors, hardware or software, can be used to create vastly different tones suiting the sound you want. It is always best to start with one or two units so you learn to use them to their full extent. You don’t need a lot. Hence, I suggest not getting into multi-fx processors in the beginning, as the plethora of options, and even the presets, can just confuse you more.

Once you are confident of exactly what you need to create your sound, a multi-fx processor is always easier to dial in.
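
If you want to poke at the idea outside a DAW, here is a crude sketch of a similar two-stage chain in code: a saturating preamp into a compressor (one common ordering). This is not how the Sansamp or the Klanghelm plugin works internally; it is just a toy illustration of the signal path, with made-up parameter values:

```python
import numpy as np

def drive(x, gain=4.0):
    """Rough preamp-style saturation: boost the signal, then soft-clip it."""
    return np.tanh(gain * x)

def compress(x, threshold_db=-18.0, ratio=4.0, attack=0.005, release=0.1, sr=48000):
    """Minimal feedforward compressor acting on a smoothed peak envelope."""
    a_att = np.exp(-1.0 / (attack * sr))
    a_rel = np.exp(-1.0 / (release * sr))
    env = 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        coeff = a_att if level > env else a_rel   # fast attack, slow release
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20 * np.log10(max(env, 1e-9))
        over = max(level_db - threshold_db, 0.0)  # dB above threshold
        gain_db = -over * (1.0 - 1.0 / ratio)     # 4:1 keeps 1/4 of the overshoot
        out[i] = s * 10 ** (gain_db / 20)
    return out

# A hypothetical DI track: a decaying 80 Hz pluck
sr = 48000
t = np.linspace(0, 1, sr, endpoint=False)
di = 0.5 * np.sin(2 * np.pi * 80 * t) * np.exp(-3 * t)

tone = compress(drive(di))  # preamp first, then the compressor evens it out
```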

Remember to always spend more time capturing better performances than chasing the perfect low end or the perfect compression. And of course, put on brand new strings when you record ;)


Saturday 15 August 2020

DIY home recording is great but don't ignore the details

 

If you could somehow time-travel to 1965, to Abbey Road Studios (then known as EMI Studios) while The Beatles were recording the album Help!, and tell them that you would be able to do all they were doing in your bedroom 50 years later, there is a high chance they would scoff at you and just ignore your presence altogether, writing you off as a naive studio intern.

Cut to 2020: audio technology has developed by leaps and bounds in the past decades, especially with the advent of digital and the incorporation of computers into every sphere of music-making and publishing. Today, you really can record a full-length album of any genre you might imagine (including classical) with just a computer, an audio interface of some kind, a pair of decent speakers or headphones, and the software and instruments your particular needs require. This whole setup can be had for a tenth of what it would have cost, at minimum, to record music 20 years back. If you can afford a mid-range smartphone, you can afford this setup. All of it can live in your bedroom, or a spare room you have converted for this specific purpose and lovingly named 'XYZ' studio. You no longer have to carry your heavy instrument to another studio, pay an hourly rent (or use your friend's setup for free), be bossed around by the recording engineer (even when you don't want it), practice your parts well before you go in to keep the bill under budget, and, of course, pay extra for everyone's lunch.

At home, you have none of the above to think of. You can sit down any time you want, connect your instrument or start programming your ideas straight into the recording software (it’s called a DAW, short for Digital Audio Workstation) and work for as long as you want without worrying about studio bills. You can send your parts to your bandmates, who can add theirs in their own home setups, and the process can continue till you are happy with the final product. Rinse and repeat. Sounds easy and fun, right? It is, but as with anything and everything in life, all this DIY audio work also comes with limitations.

Firstly, I am not against everyone having access to technology and the ability to explore their creativity affordably and accessibly. It is truly another milestone of the modern world that this is available to almost every musician living today.

But with great accessibility comes greater responsibility, or rather greater attention to detail. There are basic technicalities involved in the operation of any equipment, however easy or complicated it might be. A musician does not have to know every function their DAW has to offer. But he/she has to know how to dial in a proper gain structure on the sound card, when to connect to a mic input versus an instrument or line input, whether to record to a stereo track or a mono one, how to do proper crossfades between overlapping takes, how to edit out anything unmusical, how to export the tracks in the session before sending them for mixing, and so on. And I am not even going to go into how easy it is to be lazy and record an “okay” take when you know you could have done much better with someone else there to push you.
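
Take gain structure as one concrete example of such a technicality: check the peak and average levels of each recorded take and leave sensible headroom rather than flirting with 0 dBFS. Below is a small sketch of such a check; it assumes the third-party soundfile package is installed, "take.wav" is a hypothetical file name, and the -6/-30 dBFS thresholds are just illustrative rules of thumb, not a standard:

```python
import numpy as np
import soundfile as sf  # assumes the pysoundfile package is installed

def level_check(path):
    """Print peak and RMS levels of a take, in dBFS, with a crude verdict."""
    x, sr = sf.read(path)      # floats in [-1.0, 1.0]
    if x.ndim > 1:
        x = x.mean(axis=1)     # fold to mono for a quick check
    peak_db = 20 * np.log10(np.max(np.abs(x)) + 1e-12)
    rms_db = 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)
    print(f"peak {peak_db:.1f} dBFS, rms {rms_db:.1f} dBFS")
    if peak_db > -6:
        print("running hot: lower the input gain to leave some headroom")
    elif peak_db < -30:
        print("very quiet: raise the input gain to stay above the noise floor")

level_check("take.wav")  # hypothetical exported take
```

None of this needs a studio; it is exactly the kind of detail that takes five minutes to learn and saves a whole session.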

These are very simple things that most musicians recording at home seem to ignore, just wanting to get their ideas recorded and be done with it. There are numerous free videos on YouTube that teach these simple technicalities, and once you learn them, they become second nature. It is like driving: when you start learning, you have to consciously keep track of when to press the clutch and when to change gear, all while keeping a lookout for the random pedestrian, but soon it all happens without thinking. A lot of musicians feel demotivated at the thought of getting into the technical nitty-gritty when they sit down to record, but hey, if you want to DIY and save money, that does not mean you can skip the technicalities or leave them for someone else. You just have to, well, do it yourself.

Digital audio also leaves open the possibility of doing a lot more at the mixing stage, so a lot of musicians prefer to leave a lot of decisions open while recording. But instead of handing over references of other tracks whose snare tone you want at the mixing stage, why not start with that reference when you are recording? Why not see how close you can get to your favorite bass or guitar tone before hitting the record button? Today, almost every popular guitar and bass amp has a digital simulation of some kind, which you can use to get close to your favorite tone from some other artist. You just have to do a bit of research on what gear is being used, and that is easier today than getting a no-refusal yellow cab.

Most musicians have some kind of finished idea of their song in their head. It is almost as if they can hear how the song sounds even before recording it. If you use that as a blueprint while recording, and get the tones for your drums, bass, guitars, synths, vocals or any other instrument to match it, your song will sound much closer to the version running inside your head, instead of waiting till the mixing stage to imprint it, whoever might be mixing, be it you, your bandmate or a professional mix engineer. Yes, it takes a lot of practice, back-and-forth and critical listening to match the sound in your head to what you are actually hearing from your speakers, but that’s the ‘price’ you pay when you DIY to save money.

In the early days, when musicians recorded on analog consoles and tape, there was very little that could be done at the mixing stage, considering every analog eq, compressor or effect cost a lot of money. So they had to get most of the sound imparted while recording. By the time mixing started, the song already sounded close to what they had envisioned, and mixing couldn't make or break it. The song had already made it out of the artist's mind and onto the recording.

If you keep recording your song with the assumption that certain things will be fixed during mixing, more often than not you will be disappointed. So it’s best to research what you want before you record, or consult a professional on how you could achieve that sound.


Finally, the most important part of any audio work is the monitoring, which includes both the speakers you are listening through and your room. What you are hearing is not necessarily what is being recorded. Unless you constantly keep referencing on decent headphones and against reference tracks, you are bound to be misled by your speakers and the room. It is a limitation of home setups, where meaningful acoustic treatment and proper speaker placement are rarely possible. But then again, DIY has its limitations.

I am not trying to discourage any musician from doing DIY home recordings, but for the sake of their own music, they should try to be aware of the problems that come with it. Only when you accept the problems can you take the necessary steps to solve them. With the plethora of information available right now, finding ways to tackle these problems is also very accessible. Agreed, some of them come with a cost. But if you go in with the mindset of 'I will take whatever comes', you will slowly lose the love and motivation for getting better at your craft. Making and recording music that sounds like the music that inspired you to make it comes at a price.