I’ve been meaning for a while to write about some feature concepts for cameras: things I’ve yet to see actually implemented, but that I think would be useful to have, why, and maybe how they could be implemented.
This time, I want to talk about an idea that’s been floating around in my head for a couple of months now, especially with Canon and Nikon announcing their latest flagship cameras with blazing fast frame rates. The idea, as called out in the title, is an adaptive continuous drive mode.
In my experience, the vast majority of the time that I’ve used continuous drive, the high frame rate wasn’t the end itself, just a means to an end; more specifically, a means of overcoming a limitation of my own. For example, when I’m shooting a bird in flight, or a whale breaching, what I almost always want is a single photo at the optimal point of action, not the whole sequence.
At the same time, I’ve found that there are real, though not especially significant, drawbacks to continuous drive modes. Namely, using them creates a lot of images, which in turn uses a lot of storage space and has an impact on the difficulty of editing them. By far the two hardest sets of images I’ve had to edit came from days where I made extensive use of continuous release at 10 FPS, either with similar subject matter or just from the sheer number of images.
Either there was a massive number of images to sort through, or the differences between images became quite small (say, a feather position or the position of the nictitating membrane) and the sorting process became much more mentally taxing.
There are also cases where you know that continuous release isn’t needed now, but will be needed at a moment’s notice. For example, suppose you’re photographing a leopard sitting in a tree, waiting for it to jump down. You can’t afford the delay of trying to change the drive mode when the action happens, yet for the most part, anything you shoot while it’s sitting still doesn’t need more than a single frame. (Of course, maybe you’re better than I am and can consistently squeeze off one frame from a 10–14 FPS camera.)
At least that’s my thinking: very rarely do I need all the FPS my camera can deliver, and when I do, those moments are often interspersed with stretches where I don’t need many FPS at all.
So suppose that instead of always running the camera at, say, 10 FPS, the camera would adjust the FPS based on scene and camera motion.
Most modern mid- to high-end DSLRs now ship with RGB metering sensors of 100,000 pixels or more. These sensors have, in many cases, enabled face detection for AF point selection.
The ability to detect faces with a 100–200K pixel sensor suggests to me that there is sufficient resolution in that sensor to estimate the overall change in the scene over time by differencing brightness/color information. Additionally, you could integrate camera motion data from onboard accelerometers to provide information about panning and camera movement that might affect the desired release rate.
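To make the differencing idea concrete, here’s a minimal sketch of what a scene-change estimate from the metering sensor might look like. This is purely illustrative — the function name, the normalization, and the idea of treating the metering sensor as a small grayscale array are all my own assumptions, not anything from actual camera firmware:

```python
import numpy as np

def scene_change_score(prev_frame, curr_frame):
    """Estimate how much the scene changed between two consecutive
    low-resolution metering-sensor readouts.

    Both frames are assumed to be 8-bit arrays of the same shape
    (e.g. a few hundred "pixels" from the RGB metering sensor).
    Returns a score in [0, 1]: 0 means identical frames, 1 means
    every pixel changed by the full 255-level range.
    """
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    return float(diff.mean() / 255.0)
```

In a real implementation you’d presumably want something less naive (per-region differencing so a small fast subject isn’t washed out by a static background, for instance), but even a global mean difference captures the basic signal: a lounging cat produces scores near zero, a jump produces a spike.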
I’m going to wrap this thought up with a quick bit on how I see this idea being used.
Suppose you’re photographing a leopard or lion lounging up in a tree, and you know that eventually it’s going to jump down, which is the kind of action shot you’re looking for. You’re in a stationary Land Rover with a beanbag support for the camera, which is a pretty typical setup. Of course, the cat’s not going to yell over and tell you, “Hey, I’m going to jump down in a couple of minutes.” And while it’s up there lounging, there will be plenty of moments you’ll want to shoot too.
In this situation as it currently exists, I’d set my camera to continuous high, probably at the max frame rate too, and just deal with the fact that every time I hit the shutter release I get 3 images instead of 1. With adaptive continuous drive as I envision it, I could set the camera to a minimum of 3 FPS, a maximum of 14 FPS, and a higher sensitivity to both camera and subject motion. Since I’m shooting from a relatively stable platform (a beanbag on the Land Rover), most shots won’t have much, if any, camera or subject motion, and I’ll be able to cleanly get single frames. When the cat jumps down, the camera motion, as well as the radical frame-to-frame changes, would cause the camera to jump to full speed, and I’d get a nice burst when I needed it without having to muck with settings.
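The min/max/sensitivity settings described above could be wired together as a simple mapping from an activity score (combining scene change and camera motion, however those are measured) to a frame rate. Again, this is a hypothetical sketch with made-up names, not a real camera API:

```python
def adaptive_fps(activity, fps_min=3.0, fps_max=14.0, sensitivity=1.0):
    """Map an activity score in [0, 1] to a continuous-drive frame rate.

    activity    -- combined scene-change / camera-motion estimate
    fps_min     -- floor: the rate used when nothing is moving
    fps_max     -- ceiling: the camera's maximum burst rate
    sensitivity -- values > 1 ramp the rate up more aggressively
                   for small amounts of motion
    """
    # Scale by sensitivity, then clamp to [0, 1] so the result
    # always stays within the user's configured FPS range.
    a = min(max(activity * sensitivity, 0.0), 1.0)
    return fps_min + a * (fps_max - fps_min)
```

With the leopard scenario: near-zero activity while the cat lounges yields the 3 FPS floor (effectively single frames per shutter press), while the jump pushes the score toward 1 and the rate toward the full 14 FPS. A linear map is the simplest choice; a real implementation might want hysteresis so the rate doesn’t chatter between frames.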
I’m going to wrap this up here, as I really just wanted to broach the idea more than anything. At present I’m considering doing some more technical work on this, at least on the aspects I can simulate and test, so I’ll probably revisit it in the future. In the meantime, I’d love to hear what fellow photographers think about the idea.