Points in Focus Photography

In my article about re-evaluating my position on crop cameras, I mentioned that a 20MP crop sensor was diffraction limited pretty much right out of the gate with the f/5.6 lenses that make a crop camera setup appealing to me. When I said that, I knew full well that one of the biggest advantages of digital photography is that software can overcome many optical problems, including diffraction. Not only that, I also knew that Canon has been advertising since version 3.11 that their Digital Photo Professional (DPP) software, the stuff that comes with their cameras, can in fact do exactly that.
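To put a rough number on that claim, here's a quick back-of-the-envelope sketch (my own figures, using nominal sensor dimensions, not anything from Canon) comparing the diffraction spot size at f/5.6 against the pixel pitch of a 20MP APS-C sensor:

```python
# Back-of-the-envelope sketch: compare the diffraction (Airy disk) spot size
# at f/5.6 with the pixel pitch of a 20 MP APS-C sensor. Figures are nominal.
import math

wavelength_um = 0.55      # green light, roughly the middle of the visible band
f_number = 5.6

# Airy disk diameter: d = 2.44 * wavelength * N
airy_disk_um = 2.44 * wavelength_um * f_number            # ~7.5 um

sensor_w_mm, sensor_h_mm = 22.3, 14.9                     # nominal Canon APS-C size
pixels = 20e6
px_wide = math.sqrt(pixels * sensor_w_mm / sensor_h_mm)   # ~5470 px across
pixel_pitch_um = sensor_w_mm * 1000 / px_wide             # ~4.1 um

print(f"Airy disk at f/{f_number}: {airy_disk_um:.1f} um")
print(f"Pixel pitch:              {pixel_pitch_um:.1f} um")
# The diffraction spot already spans roughly two pixels at f/5.6, which is why
# I called the sensor diffraction limited right out of the gate.
```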

The big question: How good is diffraction correction?

Canon first introduced diffraction correction in DPP version 3.11 as part of the Digital Lens Optimizer (DLO) module. With it came support for 29 lenses and 25 cameras going back as far as the 30D. In the current version of DPP, 4.3.31, the supported lenses have expanded to over 100 EF, EF-S, and EF-M models.

Smartly, Canon's early efforts were focused on L and EF-S lenses, as those are the two places where diffraction correction makes the most sense: L lenses because professionals demand the highest levels of image quality, and EF-S because the high pixel densities of crop cameras and the generally slow maximum apertures of EF-S lenses mean that diffraction becomes a problem sooner.

The trick, or trouble, is how to test it effectively. Diffraction correction is part of the DLO process, and DLO can either be enabled or not. The DPP UI doesn't provide a mechanism to enable just diffraction correction and leave the image otherwise unprocessed.

I should also point out that my objective here wasn't to produce a definitive set of data; consider this a preliminary test. I wanted a quick way to see whether there was any real utility in using DPP to do at least the lens correction processing on images before working on them in Lightroom as I normally would.

The methodology I settled on was to compare two images: one made at a wide aperture, where the diffraction spot would be much smaller than a pixel, and a second made at a narrow aperture, where diffraction would be visible. Of course, this is imperfect, as stopping down also tends to increase sharpness until the diffraction limit is hit.

For this test I used my EOS 5D Mark III and my EF 100–400mm f/4.5–5.6L IS II USM at 100 mm. Everything was shot from a tripod at ISO 100, 1/125, with a flash providing the illumination. I made a series of images at whole stops from f/4.5 to f/32, then processed them in DPP with only DLO enabled. All the rest of the settings were either unchecked or had their values set to 0.

The resulting images were exported as 16-bit TIFFs in the Wide Gamut RGB color space and imported into Lightroom, where I matched brightness and applied my normal sharpening and noise reduction.

As far as matching brightness goes, it was necessary to push the f/22 image up by a stop [?] to match the brightness of the f/4.5 image. This does mean that there will be more noise in the f/22 image. However, much as it's necessary to match volume levels when testing audio gear (louder is often perceived as better), I felt it was necessary to match the brightness in these images to ensure that the difference in brightness wouldn't be perceived as a difference in image quality.

As a secondary test I also processed the raw images in Lightroom alone, using my standard processing settings. (click the images to enlarge to 100% magnification)

Read the rest of the story »

Sumatran Tigers at Zoo Miami

Last weekend I spent pretty much the bulk of Saturday down at Zoo Miami photographing their 3 Sumatran tigers: Berani, their 8-year-old adult male; Leeloo, their nearly 5-year-old adult female; and Satu, their 6-month-old male cub. In the process of doing so I made a number of observations about tigers, the zoo, and even my own thoughts on photography.

Zoo Miami's three tigers are from the Sumatran subspecies (Panthera tigris sumatrae). Sumatran tigers are the last of the Sunda Islands subgroup; the Bali and Javan subspecies are already extinct. Being island tigers, they've adapted to their habitats in many ways. For example, they're the smallest of all the extant tiger subspecies. Their stripe pattern is also denser than that of other tiger subspecies.

On the other hand, what isn't so interesting about them is that they swim, which is what one Conservation Teen Scientist was telling people.

Zoo Miami runs a community service/volunteering program for high school kids called Conservation Teen Scientists. Conservation Teen Scientists act as docents of sorts, providing supplementary information to guests that isn’t covered by the signage. At least that’s what I assume they’re supposed to do as there’s little reason for them to stand around the enclosures otherwise.

Surprise

Read the rest of the story »

Back in 2009 I argued that the choice of sensor format should be made based on the value it provides us as photographers, and not simply by its size. In the intervening years, I've slowly suffered from sensor size creep, going from APS-C, to APS-H, and finally to full frame. While each of these steps was driven by my photography, even I have to admit there's still a little voice nagging at me: "why would I ever want to go back to a smaller sensor?"

In a fantastical world where money, size, and weight are never considerations, the sensor size question is trivially answerable. Physics says that a bigger sensor will always produce better images simply because it collects more light. All else being equal, a 600 mm f/4 on a full frame sensor is going to produce better images than a 400 mm f/4 on a 1.5x APS-C sensor.
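To put a rough number on that light-gathering difference, here's a quick sketch with idealized figures: both setups frame roughly the same scene (400 mm on a 1.5x crop is about a 600 mm equivalent field of view), so the comparison comes down to entrance pupil area.

```python
# Idealized illustration: with matching fields of view and exposure times, the
# light gathered scales with the area of the entrance pupil (focal length / N).
full_frame_pupil_mm = 600 / 4   # 150 mm entrance pupil
crop_pupil_mm = 400 / 4         # 100 mm entrance pupil

light_ratio = (full_frame_pupil_mm / crop_pupil_mm) ** 2
print(f"Full frame gathers roughly {light_ratio:.2f}x the light")  # ~2.25x
```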

However, in the real world, size, weight, and cost are some of the most important factors.

When I was preparing for my trip to Alaska last year, I knew I was going to be shooting at long distances, pretty much everywhere. That’s just the reality of being on a big ship, combined with the realities of having to make the best of whatever other opportunities presented themselves along the way. In gearing up for this, I seriously started thinking about crop sensor bodies again.

Like most things in photography, everything is a balancing act. Shutter speed versus aperture versus ISO; motion blur versus depth of field versus noise. Or in this case, frame rate, resolution, and reach.

Going with something like the 7D Mark II would have given my 100–400 the reach of a 160–640, and the AF system would have let me stick a 1.4x teleconverter on there and still retain an AF point. And it would have done all of this while shooting action at 10 FPS.

As an aside, there's always someone who loves to point out that focal lengths don't change when you change the size of the sensor. They are of course right, but as I've argued before, focal lengths as used by photographers are a proxy for angle of view, and angle of view is defined by the frame size.
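For the curious, the relationship is easy to sketch out. The snippet below uses nominal sensor widths (actual dimensions vary slightly by model) to show that 400 mm on a 1.6x crop body covers about the same horizontal angle of view as 640 mm on full frame:

```python
# Angle of view sketch: AOV = 2 * atan(sensor_dimension / (2 * focal_length)).
# Sensor widths are nominal figures; actual sizes vary slightly by model.
import math

def horizontal_aov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal angle of view in degrees for a given sensor width and focal length."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(horizontal_aov_deg(22.4, 400))   # ~3.2 degrees: 400 mm on a 1.6x crop body
print(horizontal_aov_deg(36.0, 640))   # ~3.2 degrees: 640 mm on full frame
```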

Read the rest of the story »

Adventures in Printing: 5 Years on and Still Lost

I like prints, preferably big ones. Moreover, I get a huge amount of satisfaction from seeing one of my photos slide out of even my mid-range printer as a concrete expression of my vision.

That said, while I like the results, I definitely have a love-hate relationship with the process. Moreover, I've certainly made some concessions when it comes to printing that, while maybe not ideal, are what they are. I'm certainly not convinced that every photographer with some semblance of standards has to run out and buy a $2000 Epson Stylus Pro and print everything on $20 per sq. ft. fine art papers. I know I don't; my printer is a lowly Canon Pixma Pro 9000 Mark II that I got for a song, and I print mostly on Canon's mid- and high-end papers[1].

In the 3 and a half years that I've been printing my own stuff, I feel almost as clueless about the whole thing as I was when I started. That said, I have learned some things, and this post details some of them.

Documentation

I have yet to find a printer manufacturer that writes what I consider good documentation.

Consider the difference between this:

Depth of Field Preview
The aperture opening (diaphragm) changes only at the moment when the picture is taken. Otherwise, the aperture remains fully open. Therefore, when you look at the scene through the viewfinder or on the LCD monitor, the depth of field will look narrow.
Press the depth-of-field preview button to stop down the lens to the current aperture setting and check the depth of field (range of acceptable focus).

As compared to this:

Select the Media Type setting that matches the paper you loaded.

The first excerpt (from a Canon camera manual) provides some inkling about what happens when the depth-of-field preview button is pressed and why you'd want to use it.

On the other hand, telling me to select the media type that matches the paper I've loaded is obvious to the point of uselessness.

What I need to know to make an informed decision is what the setting does, not just a restatement of its title. Does it adjust the platen height? Does it change the ink volume that's used? If so, what do the various settings actually translate to?

I’m not singling out Epson here either. Canon’s printer manuals aren’t any different.

Read the rest of the story »

Lightroom 8 Years Later: A Critical Look at Virtual Copies

In the first couple of articles in this series I've looked at some of the broader user interface points about Lightroom. Now I want to start getting into some more technical aspects, starting with virtual copies.

As I’ve said over and over, one of the biggest features of Lightroom is that it doesn’t directly manipulate the pixel values of the images in the catalog. Every image you see in the interface is made up of two “parts”. First, there’s the original image file, be it raw, jpeg, tiff, psd, or what not, on the disk. Second is the set of instructions that tell Lightroom how to process and display an image; these are stored in the catalog.

One of the advantages of having these two separate parts to an image (the original file and the recipe) is that it enables a very space-efficient way to handle multiple alternative versions of an image. You don't need to store completely separate files on disk; you just need the bookkeeping in the database for two images that point to the single file on disk.
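Conceptually, the bookkeeping looks something like the sketch below. To be clear, this is a hypothetical illustration of the idea, not Lightroom's actual catalog schema:

```python
# Hypothetical illustration of the virtual copy idea. This is NOT Lightroom's
# real catalog schema, just the shape of the bookkeeping involved.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DevelopRecipe:
    exposure: float = 0.0
    white_balance: str = "As Shot"

@dataclass
class CatalogImage:
    source_path: str                    # the one real file on disk
    recipe: DevelopRecipe = field(default_factory=DevelopRecipe)
    copy_name: Optional[str] = None     # set only for virtual copies

master = CatalogImage("2016/03/IMG_1234.CR2")
virtual = CatalogImage(
    "2016/03/IMG_1234.CR2",
    DevelopRecipe(exposure=0.5, white_balance="Daylight"),
    copy_name="Copy 1",
)
# Two catalog entries, two develop recipes, still only one raw file on disk.
```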

The space savings from virtual copies is nothing to sneeze at, either. A develop recipe can be as little as a few KB. Even big develop recipes (and I'll be going into much more depth about the storage of Lightroom's develop settings in a future article) will almost always be under a few hundred KB. Compare that to the 3–6 MB needed for a 10 MP JPEG, never mind a 50 MP raw at 60–70 MB, or the even larger file sizes needed for fully converted RGB TIFFs or PSDs.

Make no mistake, Adobe unquestionably got the overall concept of virtual copies right. However, there is one major detail that just drives me up a wall.

Read the rest of the story »

Lightroom 8 Years Later: Core Technology

In the first couple of posts in this series I've talked about Lightroom's UI, and I'll probably get back to that in the future, but I also want to look at some of the technology in and around Lightroom. I want to do this in part so I can talk about a couple of core technical issues in more detail in future articles as well.

There are three core technologies that Adobe has leveraged in Lightroom that make it what it is. Surprisingly, some of these are open source products, one of which is absolutely a critical core part of Lightroom. These core technologies are Adobe's proprietary Camera RAW engine, the Lua programming language, and the open source database engine SQLite.
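That last point is easy to verify for yourself: a Lightroom catalog (.lrcat) file is just a SQLite database that any SQLite client can open. Here's a minimal sketch; the file name is a placeholder, and you should run it against a copy of your catalog with Lightroom closed:

```python
# Minimal sketch: open a copy of a Lightroom catalog as a SQLite database and
# list its tables. "my-catalog.lrcat" is a placeholder for your own file; work
# on a copy, with Lightroom closed.
import sqlite3

conn = sqlite3.connect("my-catalog.lrcat")
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
).fetchall()
for (name,) in tables:
    print(name)
conn.close()
```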

Adobe Camera RAW

It should be well known that Lightroom leverages Adobe's Camera RAW technology to do all the heavy lifting. And, really, why shouldn't it? Camera RAW is the same rendering engine that Photoshop, and really all Adobe programs that support reading raw files, use to convert a raw file into a usable bitmap.

As far as raw engines go, Camera RAW isn't horrible. It's been a while since I've seen a good comparison between Camera RAW and the competition. However, the last major overhaul, process version 2012, put Camera RAW on par with Phase One's Capture One in terms of the ability to distinguish and render fine details, and pretty close to its peers in terms of noise reduction.

That said, ACR does lag behind more specialized tools in many respects. Dedicated noise reduction software, like Noise Ninja or Neat Image, can typically do much better noise reduction than ACR can.

Read the rest of the story »

I've long been meaning to post about some concepts I've been mulling over for camera features: things that I've yet to see actually implemented, but that I think would be useful to have, why, and maybe a look at how they could be implemented.

This time, I want to talk about an idea that's been floating around in my head for a couple of months now, especially with Canon and Nikon announcing their latest flagship cameras with blazing fast frame rates. The idea, as called out in the title, is an adaptive continuous drive mode.

In my experience, the vast majority of the time I've used continuous drive, the high frame rate wasn't the end itself, it was just a means to an end. More specifically, a means to overcoming a limitation of my own. For example, when I'm shooting a bird in flight, or a whale breaching, what I almost always want is a single photo at the optimal point of action, not the whole sequence.

At the same time, I've found that there are real, though not really significant, drawbacks to continuous drive modes. Namely, using them creates a lot of images. That in turn uses a lot of storage space, and has an impact on the difficulty of editing them. By far the two hardest sets of images I've had to edit were from days that made extensive use of continuous release at 10 FPS, either with similar subject matter or just from the sheer number of images.

Either there was just a massive number of images to sort through, or the differences between images became quite small (say, a feather position or the position of the nictitating membrane) and the sorting process became much more mentally taxing.

There are also the cases where you know that continuous release isn't needed now, but will be needed at the spur of the moment. For example, if you're photographing a leopard sitting in a tree waiting for it to jump down, you can't afford the delay of trying to change the drive mode when the action happens, yet for the most part anything you shoot while it's sitting still doesn't need more than a single frame. (Of course, maybe you're better than I am, and can consistently squeeze off one frame from a 10–14 FPS camera.)

At least that's my thinking: very rarely do I need all of the FPS my camera can deliver, and when I do, it's often interspersed with stretches of not needing very many FPS at all.

So suppose that instead of always having to run the camera at, say, 10 FPS, the camera adjusted its frame rate based on scene and camera motion.
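In rough terms, the control loop I'm imagining looks something like the sketch below. The motion metric, thresholds, and frame rates are all made up for illustration; a real implementation would live in the camera's firmware and use the AF and metering systems to estimate subject and camera motion.

```python
# Conceptual sketch of an adaptive continuous drive mode. The motion estimate,
# range limits, and example values are invented purely for illustration.
MIN_FPS, MAX_FPS = 2.0, 10.0

def choose_frame_rate(subject_motion: float, camera_motion: float) -> float:
    """Map an estimated motion level (0 = static, 1 = fast action) to a frame rate."""
    motion = max(subject_motion, camera_motion)
    motion = min(max(motion, 0.0), 1.0)          # clamp to [0, 1]
    return MIN_FPS + (MAX_FPS - MIN_FPS) * motion

# Leopard sitting still in a tree: burst stays near the minimum rate.
print(choose_frame_rate(0.05, 0.0))   # ~2.4 FPS
# The instant it jumps, the motion estimate spikes and the rate ramps up.
print(choose_frame_rate(0.9, 0.1))    # ~9.2 FPS
```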

Read the rest of the story »

Lightroom 8 Years Later – Asset Management and the Library Module

The Library module is easily the core of Lightroom's asset management functionality, and as a side effect of that, very much the core of Lightroom as a whole. In this part of Lightroom 8 Years Later, I'm going to look at the Library module in more detail.

The big picture aspect of the Library module is that it's a thumbnail and image viewer, with supporting aspects for importing, exporting (though that's not limited to the Library alone), and managing the images that are in your collection. Over Lightroom's history, the Library module has remained mostly unchanged. There certainly have been some refinements and improvements, but the organization and features haven't radically changed.

Back in part two I wrote about Lightroom's user interface, specifically the panelized organization and structure. The Library module, like all Lightroom modules, uses that system at least as well as it reasonably can.

Read the rest of the story »

An Argument for more Open Camera Firmware

So today I want to talk about the software that runs our cameras. More specifically than that, I want to talk about why I think that the camera companies should stop sitting on that software and either make it more open and accessible or better yet, make the source code for it available to interested photographers and developers to extend and mess with.

I would argue that there are a whole slew of reasons why this would be a good thing for both us photographers and the camera companies; probably more than I can cover in this post while keeping its length reasonable. But that's not going to stop me from trying.

Security

Let me start with the big scary fear-mongering headline-grabbing keyword: security.

No wait, stop laughing. I’m serious.

Cameras are increasingly becoming connected, network-enabled devices. And with that comes a whole myriad of security issues that none of the camera makers have really had to consider or think about in the past.

While cameras can stand alone without the connectivity, and hopefully that will remain the case for a long time to come, that connectivity is increasingly becoming a selling point; even for pro level cameras.

Read the rest of the story »

Thoughts on the Nikon D5 (Image: Nikon USA)

So far this year has been a great year for new gear from Canon and Nikon. I’ve written about the two Canon announcements, but I’ve been awfully quiet on the Nikon ones, the D5 and D500.

Since the D3, Nikon has been turning out absolutely solid pro cameras that offer amazing image quality, functionality, and features. The D5 carries on that tradition in virtually all respects. The chassis is the same solid all-magnesium weather-sealed construction, and the sensor is almost certainly going to be par for Nikon's course in terms of excellent dynamic range and noise performance. In fact, so far as I can tell, the D5 builds on the D4s in virtually every respect.

That said, I don’t write these things to just sing praises of cameras — even the ones that I really like. I write these things because there’s almost always stuff that’s been done really well and stuff that can be done a lot better.

UI Improvements

Nikon has made two major improvements to the UI that I really think are good moves.

First, they’ve added an ISO button to the top plate right behind the shutter release. In my opinion, they should have followed Canon’s lead and done this soon after the conversion to digital, but better late than never.

Second, they've added two more user-programmable buttons to the camera (one more to the front and one to the rear). I like user customizability in a camera. Sure, I may have my camera set up differently enough that someone else can't just pick it up and shoot with it, but I don't really care, so long as it makes me more efficient and productive.

The only real shortcoming of Nikon's function buttons is that they aren't duplicated across the bottom of the lens mount, like Canon has been doing with the 1DXes. I would have loved to see Nikon again follow Canon's lead here and duplicate the front function buttons along the bottom of the lens mount so they were equally usable from the portrait and landscape grips.

Read the rest of the story »
