I’m not sure what to call this. A couple of years ago, I started doing a podcast. After about a year, I kind of stopped.
Simultaneously, I’ve been trying to get a handle on video for many years, and I recently decided that the only way I was going to accomplish that goal was to just jump in. So this is kind of a vlog test, I guess.
I don’t have a concrete handle on the format, and things like intros and outros. So I’m going to just give it a shot and work out the details as I go.
With that out of the way, this specific video is partly a rant and partly an open question that maybe someone can explain what I’m missing.
When it comes to audio recording I’m pretty much a noob.
So today I want to talk about two lines of reasoning that I keep seeing repeated over and over regarding using your camera to record sound.
Now I want to state right out here, I don’t really have a dog in this fight, so to speak. What I mean is I don’t have a vested interest in proving or disproving the arguments.
For me, what’s more important is having a sound fact-based understanding of the processes involved so I can make informed decisions in the field.
With that said, let’s dig into the bits of advice I’m questioning.
In my searches to learn about audio and video recording, one point comes up over and over: don’t use your DSLR’s mic input to record your sound; use a separate dedicated recorder.
Most presenters go on to give a host of reasons that you should do this, but almost invariably two come up over and over.
First, the camera’s mic preamps are crap, and second, cameras can only record 16-bit files, not 24-bit ones.
Let me start with the first point.
Simply put, while I can find a lot of people saying this, I can’t find any actual testing to support the claim. I can’t say that the people making it are wrong. However, I also can’t say they’re right, because I simply can’t find any actual data.
Personally, I find the argument hard to believe.
Bear in mind, the light captured by the imaging sensor has to go through the same stages as the sound does: it’s amplified, then converted into a digital signal.
So I’m supposed to believe that Canon, Nikon, Sony, and everyone else can design analog circuits for their sensors that operate in the megahertz range, but can’t do the same thing for sound in the tens-of-kilohertz range, while Zoom, Tascam, and the other companies that make audio recorders can?
That doesn’t seem reasonable to me.
I could go on, but I don’t really see the point in doing so.
When push comes to shove, what I’d like to see is some data. If nobody can come up with something meaningful, I think we should stop telling people that their camera’s preamps suck until we can say for sure that they do.
I mean there are plenty of good arguments that can be made and supported instead.
My second point of contention is bit depth.
Put simply, bit depth controls the dynamic range that can be recorded. It’s not really any different than in still photography.
Well, the terminology used to discuss it is different, but the overarching point is the same.
Where we would use stops in photography, they use decibels in the audio world.
If you want to translate, 6 dB corresponds to 1 stop.
Where things start getting confusing, is that audio people don’t use just one decibel scale.
That said, the increment, 1 decibel, is the same size step in all the scales.
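To make that translation concrete, here’s a minimal sketch in Python using the 6-dB-per-stop rule of thumb from above (the function name is my own, not anything standard):

```python
def db_to_stops(db):
    """Convert a level difference in decibels to photographic stops,
    using the rule of thumb that 1 stop is roughly 6 dB."""
    return db / 6.0

print(db_to_stops(6))   # 1.0 stop
print(db_to_stops(96))  # 16.0 stops, i.e. a 16-bit file's range
```

Note this works the same regardless of which decibel scale you’re on, since it only converts a difference in levels, not an absolute level.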
So the argument goes something like this.
A 16-bit file has a dynamic range of 96 decibels, while a 24-bit file has a dynamic range of 144 decibels. That’s 48 dB more, or 256 times the amplitude range!
Of course, the direct implication is that this added range is really important. Otherwise, we wouldn’t be arguing that cameras are crap because they don’t support 24-bit audio.
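Those figures fall straight out of the math for linear PCM: each added bit doubles the representable amplitude range, adding about 6.02 dB. A quick sanity check (the function name is my own):

```python
import math

def dynamic_range_db(bits):
    # Theoretical dynamic range of linear PCM: 20 * log10(2**bits),
    # which works out to roughly 6.02 dB per bit.
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3 dB
print(round(dynamic_range_db(24), 1))  # 144.5 dB
# The 48 dB difference is 8 bits, i.e. 2**8 = 256x the amplitude range.
```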
The problem I have here, is the same problem I have with so many photographers talking about dynamic range and cameras. Ultimately the dynamic range in the image is dictated by the dynamic range of the scene. Having more range doesn’t help you if the scene is the limiting factor.
With that said, let’s talk about sound for a second. If we want to talk about how loud a sound is, we’re usually talking about the dBSPL scale.
Zero dBSPL is the nominal threshold for human hearing.
A quiet library is around 40 dB. A typical conversation is around 60 dB. The threshold of discomfort is 120 dB, and the threshold of pain is 130 dB.
Anything above 90 dB, though, can cause hearing damage, depending on exposure.
Let’s look at a practical example.
I know from where I’ve set my mic levels that when I’m speaking, my voice is about 30 dB louder than the noise floor. Moreover, I’m following the pretty standard recording practice of keeping my voice around -12 dBFS on my recorder, meaning I have a further 12 dB of headroom to account for.
That gives me a total range of 42 dB from the loudest sound my recorder won’t clip on, to the quietest sound that gets lost in the room’s noise floor.
A 16-bit file provides 96 dB of range. That’s more than twice the range that I can reasonably get out of my environment.
I can’t record quieter sounds. The noise floor prevents that. I could adjust gain so that there’s more headroom, but I’m recording myself talking. I don’t plan to intersperse that with screaming and shooting off fireworks.
Sure, I could try to lower the noise floor in this room. But there’s a limit to that too. A good quality TV studio will have a noise floor around 20 dB. That takes my range from 42 dB to 62 dB, still short of 96 dB.
I’m pretty certain that I’m not going to get the noise floor down to 0 dB, but even if I could, that’s still a total range of 82 dB, which is still less than 96 dB.
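The arithmetic above can be laid out as a short sketch. The numbers are the ones from this post, not universal constants; yours will differ:

```python
# Figures from the text: my voice sits about 30 dB above the room's
# noise floor, and recording around -12 dBFS leaves 12 dB of headroom.
voice_above_floor_db = 30
headroom_db = 12

used_range_db = voice_above_floor_db + headroom_db
print(used_range_db)  # 42 dB actually used, clip point to noise floor

# Compare against what a 16-bit file can hold.
bit16_range_db = 96
print(bit16_range_db - used_range_db)  # 54 dB left unused
```

Even lowering the noise floor to a studio-grade 20 dB, or an impossible 0 dB, only widens `used_range_db` to 62 or 82 dB, still inside the 16-bit envelope.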
Simply put, I don’t see 16 bits as a real limiting factor. I’m having a hard time figuring out how often I’d want to record the full range from the quietest sound discernible to the human ear to a jet plane taking off.
Then again, maybe I’m missing something. But it seems to me that not having 24-bit files isn’t a detriment per se, even if it certainly could be in some situations.