I’m just going to throw this out there: in photographic conversations, focal length doesn’t mean what most people think it does. Moreover, it almost never means what optical engineers mean when they say focal length. I’d even go as far as to say the real meaning of focal length is one of the dirty secrets of photography.
What is focal length?
The optical definition of focal length is a measure of how strongly an optical system bends light. The shorter the focal length, the more strongly the lens bends light; the longer the focal length, the less sharply the light is bent. That’s it, full stop.
If you look at the diagram to the right, that’s all there is. There’s no camera, no frame, and no subject. You can substitute any numbers you want for the focal lengths; it doesn’t matter.
Focal length is a property of the lens and only the lens. The only way to change it is to change the lens somehow. A zoom lens does this by rearranging elements within the lens, thus changing the lens and the focal length. Likewise, a teleconverter can be thought of as creating a new lens (the original lens plus the teleconverter) whose focal length is the original’s multiplied by the teleconverter’s power.
Placing a “crop” sensor behind the lens, however, does not change the lens and therefore the focal length stays the same.
The trouble with focal length is that it doesn’t tell us anything useful about the image the lens will produce.
As an exercise, ask yourself this: what is a 50mm lens?
If you said it’s a normal lens, one that produces an image similar to what you normally see, you’d be both right and wrong. Put that lens on a 4/3rds camera and you have a telephoto lens; put it on a medium format camera and you have a wide-angle lens. The focal length is the same, but the angle of view is different, and so is how the lens is used.
This is why I say focal length, in the optical sense, is largely a meaningless number for photographers. We don’t think in terms of how much a lens bends light; we look at a scene and want to fill the frame we have with some composition. To put a technical word to this, what we’re interested in is the field of view.
Field of view is certainly related to focal length, but it’s also related to the size of the frame that’s capturing the image. If the field of view changes because we change frame sizes, the natural response isn’t to let our composition change, but to change the focal length or move the camera to restore the composition to the way we wanted it.
This is the dirty secret: when photographers talk about focal lengths, they’re really talking about field of view, not how much the lens bends light.
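That relationship can be made concrete: for a rectilinear lens focused at infinity, the angle of view is 2·arctan(d / 2f), where d is the frame dimension and f the focal length. A quick sketch of the 50mm example above; the frame widths here are nominal values I’m assuming for each format:

```python
import math

def angle_of_view(frame_width_mm, focal_length_mm):
    """Horizontal angle of view in degrees for a rectilinear lens
    focused at infinity: 2 * arctan(width / (2 * f))."""
    return 2 * math.degrees(math.atan(frame_width_mm / (2 * focal_length_mm)))

# The same 50mm lens on three frame sizes (nominal widths):
formats = [("Four Thirds", 17.3), ("35mm full frame", 36.0), ("645 medium format", 56.0)]
for name, width in formats:
    print(f"50mm on {name}: {angle_of_view(width, 50):.1f} degrees")
```

The focal length never changes, but the angle of view roughly triples from the smallest frame to the largest, which is exactly why the same lens plays three different roles.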
Just about every camera company has added image stabilization to their line of cameras or lenses. The implementations vary, but the objective is the same: make images sharper when taken at lower shutter speeds by minimizing the effects of camera movement.
The question is: is a faster lens better or worse than a slower lens with some form of stabilization?
Motion: The enemy of sharpness
There are two kinds of motion that can blur images: camera motion and subject motion. Camera motion is the little bit of rocking and shaking the photographer makes when standing still, the vibration induced by the mirror flipping up, or any number of small movements and vibrations that happen to the camera while a photo is being made. While the movement may be imperceptibly small to us, to the camera and lens these movements are actually quite large.
Subject motion, on the other hand, is just that: your subject moving around on you. You could also lump in the relative motion between subject and photographer when the photographer is moving. While not exactly subject motion, its effects are the same.
Of all the motions that can affect the sharpness of an image, the image stabilizer can only affect one of them: camera shake. Specifically, it can only reduce vibrations in the plane of the sensor (i.e. up-down and left-right) and in some cases tilt. Moreover, the stabilization system can only correct relatively small motions.
In short, the stabilizer can reduce some camera shake and that’s it.
Aperture: More Light, More Speed
The aperture, on the other hand, controls how much light can reach the sensor. In this case, we’re looking at maximum apertures, since they let the most light in. More light, in the same lighting conditions, means exposing the sensor for less time (i.e. increasing the shutter speed).
The practical result of shorter exposures is less motion of any kind.
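To put rough numbers on it: each full stop of aperture doubles the light gathered and halves the required exposure time. A small sketch, using lens choices of my own for illustration (not from any specific comparison):

```python
import math

def stops_between(f_fast, f_slow):
    """Full stops between two f-numbers; each stop is a factor of
    sqrt(2) in f-number and a factor of 2 in light gathered."""
    return 2 * math.log2(f_slow / f_fast)

def new_shutter_time(base_time_s, stops):
    """Exposure time after opening up by `stops` stops (each stop halves it)."""
    return base_time_s / (2 ** stops)

# Hypothetical comparison: an f/1.4 prime vs. an f/4 stabilized zoom.
stops = stops_between(1.4, 4.0)   # ~3 stops
print(f"{stops:.1f} stops faster")
print(f"1/15 s becomes roughly 1/{round(15 * 2 ** stops)} s")
```

Three stops turns 1/15 s into roughly 1/125 s, which is the difference between a moving subject smearing across the frame and being frozen.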
Looking towards the Future
Ultimately, the question might become a moot point.
Digital camera sensors and noise reduction algorithms are improving every year. Today’s ISO 12800 is as good as yesterday’s ISO 1600. Further, as the technology is refined, the number of lenses that can have stabilization systems is increasing; and that’s not even considering the cameras with in-body stabilizers that stabilize all lenses. It’s certainly possible that in a few years, the only choice will be between a fast stabilized-lens and a slow stabilized-lens.
In Short: Which way to go
Image stabilization is not always a substitute for faster shutter speeds, especially when the subject is moving. While an image stabilizer will help improve sharpness of static objects at all shutter speeds, at least when hand holding, it won’t stop a moving subject at all.
The million-dollar question is: do you shoot moving subjects in low light or not?
If you do, you’ll probably be better served with a faster lens, even if it’s not stabilized.
It’s fall, Photokina 2010 is coming up, and just a week ago Nikon announced their fall new products. Today, it’s Canon’s turn.
EOS 60D: High Low-End, not Low High-End
First up is the EOS 60D, impressive as a consumer-class camera, but maybe not so much as a replacement for the EOS 50D. What’s gone is the aluminum/magnesium body, the 6.3 FPS frame rate, AF micro adjustments, and CompactFlash cards.
Out with the old in with the new. New to the EOS 60D, among other things, is:
A 63-zone color sensitive meter
+/- 3 Stops of exposure compensation
A combination multi-controller and rear dial ripped right from the PowerShot G11
An articulating high resolution screen and wireless flash control
In camera image resizing
Creative image filters (soft focus, grainy B&W, Toy Camera effect, and a tilt-shift effect)
Clearly Canon is putting more and more focus on video in their new SLRs, though the lack of video information on the Canon USA product page is a bit of an odd oversight. The EOS 60D has full manual video control, including manual gain control of the audio. It also has in-camera editing functionality, which I guess is handy given what appears to be the target market for the EOS 60D.
Canon announced today that they’ve developed a 120MP (that’s a 13,280 x 9,184 pixel image) APS-H format sensor with a laundry list of features. The sensor can be completely read out in about 10ms, resulting in a 9.5 FPS frame rate, and it can do full HD video from the whole sensor (I assume) or one of several 1/16th-area sections. Fortunately, or unfortunately as it may be, this is just a prototype and not something Canon has any immediate plans for.
The full press release can be read on Canon’s website. What follows are my thoughts on this sensor and what it could mean for photography down the road.
Small Pixels, Big Picture
From the scant details in the press release, we can make a few estimates about the sensor. For starters, the sensor packs 120 million pixels, resulting in a 13,280 x 9,184 image, in a 29.2mm x 20.2mm area. Simple math tells us that the pixels are approximately 2.2 microns across.
2.2um pixels alone aren’t something new; many of the better-performing current-generation point-and-shoots have pixels that size, including Canon’s PowerShot G11 and S90. What is unprecedented, however, is the move to make a sensor with that pixel pitch that big.
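For the curious, the arithmetic behind that pitch estimate is simple enough to sketch, using only the figures from the press release:

```python
# Estimating pixel pitch from the figures in Canon's press release.
sensor_width_mm = 29.2
sensor_height_mm = 20.2
pixels_wide = 13_280
pixels_tall = 9_184

pitch_um = sensor_width_mm / pixels_wide * 1000   # mm -> microns
megapixels = pixels_wide * pixels_tall / 1e6

print(f"Pixel pitch: {pitch_um:.2f} um")    # ~2.2 um
print(f"Resolution:  {megapixels:.0f} MP")  # ~122 MP from the stated grid
```

The grid dimensions actually work out to a touch under 122MP; “120MP” is presumably Canon rounding to a friendlier number.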
Visualizing an image is as fundamental to photography as the cameras, lenses, and film. Modern technology hasn’t diminished the need for your eye the way it has so many other aspects of photography.
Seeing an image, however, is not a simple task. It can be aided in some ways (the old standard of learning with a 50mm prime, for example), but that only takes you so far.
There’s more to images than just what fits in the frame, though. Of particular relevance in this case is the way light is reflected from surfaces.
That brings me to, of all things, sunglasses.
I wear sunglasses almost religiously. Fortunately, or unfortunately as it may be, the sunglasses I wear are polarized. As a result, the way I normally perceive the world is fundamentally different from how my camera does.
That difference was driven home, again, while I was photographing a scene at the Magic Kingdom. What to me was readily visible and obvious wasn’t to my camera (at least initially), and it took a quick review of the images to realize my folly (chimping isn’t always a bad thing). Worse, I find that this is becoming a more frequent experience for me. While I was attaching my polarizer, I found myself wondering if photographers should make it a point not to wear polarized sunglasses.
I don’t know, maybe that’s a good path to follow. Using only neutral density sunglasses certainly would keep your normal perception in line with that of your camera. However, polarized sunglasses can be useful tools themselves. The reason for putting polarizing foils in sunglasses is the same as for using a polarizing filter on a camera lens: cut glare, intensify color, and increase contrast.
The distance scale has long been a useful tool for photographers, since it quickly and easily conveys a good deal of information about depth of field and focusing. Though it could be argued that autofocus has diminished the need for a distance scale, it continues to grace most mid- and higher-end lenses; in its current form, though, I have no idea why.
The manual focus distance scale is a thing of beauty. It conveys a ton of information about the state of focus and depth of field on the lens.
For example, from the distance scale shown one can immediately see that this lens is focused at approximately 12 feet. One can quickly estimate the depth of field at any full stop aperture. Finally, if one were to shoot at f/22, this lens would be focused at the hyperfocal distance, maximizing depth of field. Oh yes, and the red dot near the center mark, that’s the IR focus point.
That’s quite a lot of info that can be gleaned from some simple markings on a lens.
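The depth-of-field math the scale encodes is straightforward to sketch. This uses the standard thin-lens approximations and a 0.03mm circle of confusion (the usual 35mm-format assumption); the 50mm-at-f/22 example is my own illustrative guess, chosen because it lands near the 12-foot figure above:

```python
def hyperfocal_mm(f_mm, aperture, coc_mm=0.03):
    """Hyperfocal distance H = f^2 / (N * c) + f: focus here and
    everything from H/2 to infinity is acceptably sharp."""
    return f_mm**2 / (aperture * coc_mm) + f_mm

def dof_limits_mm(f_mm, aperture, subject_mm, coc_mm=0.03):
    """Near and far limits of acceptable sharpness for a subject distance."""
    h = hyperfocal_mm(f_mm, aperture, coc_mm)
    near = h * subject_mm / (h + (subject_mm - f_mm))
    far = h * subject_mm / (h - (subject_mm - f_mm)) if subject_mm < h else float("inf")
    return near, far

# A 50mm lens at f/22 on a 35mm frame:
h = hyperfocal_mm(50, 22)
print(f"Hyperfocal distance: {h / 304.8:.1f} ft")  # ~12.6 ft
print(f"Sharp from {h / 2 / 304.8:.1f} ft to infinity")
```

The depth-of-field scale on the lens barrel is just this computation pre-solved and engraved, one pair of tick marks per aperture.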
I’m not a huge fan of HDR images; most of the time they look obviously overprocessed, though when they’re well executed and the processing is understated they come off very nicely. I think the trick to a good HDR is to use the larger capture range simply as a mechanism to get enough data to put together a lower-noise image with slightly better shadow detail. Perhaps I’m just a traditionalist, but I think it’s best to think of HDRs as split ND filters without being forced to have a straight line for the split.