A little over a year ago I wrote about replacing my old Eneloops with Amazon Basics High Capacity NiMH batteries. In that article I said I wanted to look at their performance a year later after they’ve been used a bit to see how they were holding up.
Here we are a little over a year later, and I’ve run my batteries through the test cycle on my LaCrosse charger again to determine their capacities.
Unfortunately, in the last year, I haven’t used my flashes nearly as much as I have in the past, and as a result I’ve probably only run the batteries through 2 or 3 charge cycles each. Admittedly I was hoping to have 10 or more cycles on the batteries by now, but that’s just not how things have worked out.
Self-discharge stability has been, subjectively, pretty good, though I don’t have hard numbers. I have 4 sets of 4 batteries for my flashes, and they rotate every month or two with normal use. As a result, replacement batteries will likely have 2 to 8 months of shelf time before they get used. Generally speaking, I don’t see much loss of capacity, in terms of number of flashes or recycle rates, over that range. There may be some, but it’s not something that’s been especially noticeable.
In my initial testing a year ago, one battery rated slightly worse than the 2500 mAh typical advertised performance (battery 1b, rated 2470 mAh). Most of the rest of the batteries performed in excess of 2500 mAh; some even exceeded 2600 mAh.
In this round of testing, there was mostly a minimal loss of capacity across the board. All but one of the batteries that tested in excess of 2600 mAh in the first round have dropped under that level in this round of testing, though in all cases they remain above the 2500 mAh typical rating.
The one battery that performed worse than 2500 mAh in the first test (1b) performed identically in this round.
The table below summarizes the results as they are a year later.
Ultimately, my goals with NiMH cells are twofold. First, I want the better recycle times they give my flashes. Second, I want to cut my costs, and to a lesser extent my waste, as much as reasonably possible. The break-even point with these batteries is around 5 charge cycles. I haven’t quite reached that, but the batteries are still meeting their advertised typical capacities, and are well above the minimum capacities, so I expect to get a lot more cycles out of them.
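The break-even math is simple enough to sketch. The prices below are purely hypothetical placeholders (my actual costs differ); the point is just that break-even is the ratio of the rechargeable’s price to the disposable’s price:

```python
import math

def break_even_cycles(nimh_cost_per_cell, alkaline_cost_per_cell):
    """Charge cycles needed before a NiMH cell has paid for itself
    versus buying a fresh disposable cell for each use."""
    return math.ceil(nimh_cost_per_cell / alkaline_cost_per_cell)

# Hypothetical prices: ~$2.50 per NiMH cell, ~$0.50 per alkaline
print(break_even_cycles(2.50, 0.50))  # 5
```

With numbers in that ballpark, 2 or 3 cycles per year means break-even lands around the two-year mark.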
When Nikon announced the D5 and D500, one of my biggest concerns was that marketing had run amok and started writing ISO checks the engineers simply couldn’t cash. Then a friend of mine got a D500 and I got to sit down with him to dial in the noise reduction and sharpening settings in Lightroom to get as much out of the camera as he could. Part of that process — which I’ll be detailing in an upcoming article — is to generate a set of test images at all the ISOs on the camera.
The results, in my opinion at least, are every bit what I was afraid of: marketing-driven inflation.
Let me be clear about one thing though, it’s not all doom and gloom. Noise performance below ISO 102,400 is very good for a DX camera. I would even dare to go so far as to say it’s some of the best DX image quality I’ve seen to date.
If anything, I think it speaks to just how much Nikon has done with their sensor tech in the last 4 years. Four years ago, the D7100 was topping out at ISO 6400 with only 2 stops of expanded ISO to 25,600. Now the D500 hits ISO 51,200 natively.
It’s also worth pointing out that this isn’t exactly apples to apples, nor is it supposed to be. I’m not trying to persuade you to switch brands or formats here. I’m not switching, and neither is my Nikon-shooting friend with the D500. My interest here is firstly to get a feel for the camera in a context relative to what I’m used to, and secondly to talk about the high expanded ISO performance.
At ISOs below, say, 6400, I find the D500 a quite compelling camera. From 6400 to 102,400 (H1), it hangs with my 5D mark III more or less; though at this level I really feel we’re talking about images that are more important to have than to have look good, and the quality is certainly acceptable.
I’ve written a couple of articles in the past about ideas that I think would be really nice to see camera makers adopt and integrate into their cameras. This time I want to talk about dovetails, both as a tripod mounting system and as a way to make a more rigid and potentially weatherproof two-part camera.
Most experienced photographers are probably familiar with dovetail mounts through the Arca Swiss derived tripod plates used by a whole slew of manufacturers: Arca Swiss, Really Right Stuff, Acratech, Markins, and now even a number of the budget Chinese tripod companies like Benro and Induro.
Mechanically, dovetails make really solid joints. The angled surfaces draw the two parts together when the joint is tightened, which further helps make a tight joint. Plus, dovetails are generally forgiving of small machining imprecision. If the two surfaces aren’t at perfect angles, the force is applied at a point along the surface instead of across the whole surface, but the joint is still secure.
There are two points here for dovetails, both related to integrating them into cameras, but for two separate objectives.
The Arca Swiss 1.5" (38 mm) Double Dovetail de facto "Standard"
There’s a whole heck of a lot of Not Invented Here (NIH) in the camera industry, and as a direct result of that there’s a whole lot of incompatible but similar looking stuff out there. Manfrotto’s RC2 quick release system uses a dovetail that looks superficially similar to an Arca Swiss plate but is wider and, as a result, entirely incompatible. The same can be said for Gitzo’s own proprietary quick release system: it looks like an Arca Swiss plate but isn’t; it’s appreciably wider too.
In my article about re-evaluating my position on crop cameras, I mentioned that a 20 MP crop sensor was diffraction limited pretty much right out of the gate with the f/5.6 lenses that make a crop camera setup appealing to me. When I said that, I knew full well that one of the biggest advantages of digital photography is that software can overcome many optical problems, including diffraction. Not only that, but I also knew full well that Canon has been advertising that their Digital Photo Professional (DPP) software — the stuff that comes with their cameras — has been able to do exactly that since version 3.11.
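To put a rough number on that “diffraction limited right out of the gate” claim, the standard rule of thumb for the Airy disk’s first-minimum diameter is 2.44 λN. A quick sketch, assuming green light at 550 nm and a Canon-style 20 MP APS-C sensor (about 22.3 mm wide, roughly 5472 pixels across):

```python
def airy_disk_um(f_number, wavelength_nm=550):
    """Approximate first-minimum Airy disk diameter: 2.44 * lambda * N."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

# Assumed 20 MP APS-C geometry: ~22.3 mm wide, ~5472 pixels across
pixel_pitch_um = 22.3 * 1000 / 5472

print(round(pixel_pitch_um, 1))     # ~4.1 um pixels
print(round(airy_disk_um(5.6), 1))  # ~7.5 um spot at f/5.6
```

At f/5.6 the diffraction spot is already nearly twice the pixel pitch, which is exactly why software correction is so interesting on dense crop sensors.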
The big question: How good is diffraction correction?
Canon first introduced diffraction correction in DPP version 3.11, as part of the Digital Lens Optimizer (DLO) module. With it came support for 29 lenses and 25 cameras going back as far as the 30D. As of the current version of DPP, 4.3.31, the supported lenses have expanded to over 100 EF, EF-S, and EF-M models.
Smartly, Canon’s early efforts were focused on L and EF-S lenses, as those are the two places where diffraction correction makes the most sense: L lenses because professionals demand the highest levels of image quality, and EF-S because the high pixel densities of crop cameras and the generally slow max apertures of EF-S lenses mean that diffraction becomes a problem sooner.
The trick, or trouble, is how to effectively test it. Diffraction correction is part of the DLO process, and DLO can either be enabled or not. The DPP UI doesn’t provide a mechanism to enable just diffraction correction and leave the image otherwise unprocessed.
I should also point out that my objective here wasn’t to produce a definitive set of data; consider this a preliminary test. I wanted a quick way to see whether there was any real utility in using DPP to do at least lens correction processing on images before working on them in Lightroom as I normally would.
The methodology I settled on was to compare across two images. This way I could make an image at a wide aperture, where the diffraction spot would be much smaller than a pixel, and a second image at a narrow aperture, where diffraction would be visible. Of course, this is imperfect, as stopping down also tends to increase sharpness until the diffraction limit is hit.
For this test I used my EOS 5D mark III and my EF 100–400mm f/4.5–5.6L IS II USM at 100 mm. Everything was shot from a tripod at ISO 100, 1/125, with a flash providing the illumination. I made a series of images at whole stops from f/4.5 to f/32, then processed them in DPP with only DLO enabled. All the rest of the settings were either unchecked or had their values set to 0.
The resulting images were exported as 16-bit TIFFs in the WideGamut RGB color space and imported into Lightroom where I matched brightnesses and applied my normal sharpening and noise reduction.
As far as matching brightnesses go, it was necessary to push the f/22 image up by a stop [?] to match the brightness of the f/4.5 image. This does mean there will be more noise in the f/22 image. However, much as it’s necessary to match volumes when testing audio gear, since louder is often perceived as better, I felt it was necessary to match the brightness of these images to ensure that a difference in brightness wouldn’t be perceived as a difference in image quality.
As a secondary test I also processed the raw images in Lightroom alone, using my standard processing settings. (click the images to enlarge to 100% magnification)
Last weekend I spent pretty much the bulk of Saturday down at Zoo Miami photographing their 3 Sumatran tigers: Berani, their 8 year old adult male; Leeloo, their nearly 5 year old adult female; and Satu, their 6 month old male cub. In the process of doing so I made a number of observations about tigers, the zoo, and even my own thoughts on photography.
Zoo Miami’s three tigers are from the Sumatran subspecies (Panthera tigris sumatrae). Sumatran tigers are the last of the Sunda Islands subgroup — the Bali and Javan subspecies are already extinct. Being island tigers, they’ve adapted to their habitats in many ways. For example, they’re the smallest of all the extant tiger subspecies. Their stripe pattern is also denser than other tiger subspecies.
On the other hand, what isn’t so interesting about them is that they swim, which is what one Conservation Teen Scientist was telling people.
Zoo Miami runs a community service/volunteering program for high school kids called Conservation Teen Scientists. Conservation Teen Scientists act as docents of sorts, providing supplementary information to guests that isn’t covered by the signage. At least that’s what I assume they’re supposed to do as there’s little reason for them to stand around the enclosures otherwise.
Back in 2009 I argued that the choice of sensor format should be made based on the value it provides us as photographers and not simply by its size. In the intervening years, I’ve suffered slowly from sensor size creep, going from APS-C, to APS-H, and finally to full frame. While each of these steps was driven by my photography, even I have to admit there’s still a little voice nagging at me: “why would I ever want to go back to a smaller sensor?”
In a fantastical world where money, size, and weight are never considerations, the sensor size question is trivially answerable. Physics says that a bigger sensor will always produce better images simply because it collects more light. All else being equal, a 600 mm f/4 on a full frame sensor is going to produce better images than a 400 mm f/4 on a 1.5x APS-C sensor.
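The “collects more light” point can be made concrete. For a given f-number, the entrance pupil diameter is the focal length divided by the f-number, and light gathered scales with pupil area. A quick sketch of the comparison above:

```python
def entrance_pupil_mm(focal_length_mm, f_number):
    """Entrance pupil diameter = focal length / f-number."""
    return focal_length_mm / f_number

ff_pupil = entrance_pupil_mm(600, 4)    # 600 mm f/4 on full frame: 150 mm
apsc_pupil = entrance_pupil_mm(400, 4)  # same framing on 1.5x APS-C: 100 mm

# Light gathered scales with pupil area, i.e. diameter squared
print(round((ff_pupil / apsc_pupil) ** 2, 2))  # 2.25
```

The longer full frame lens gathers 2.25 times the light for the same field of view, which is the physics behind the bigger-is-better argument.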
However, in the real world, size, weight, and cost are some of the most important factors.
When I was preparing for my trip to Alaska last year, I knew I was going to be shooting at long distances, pretty much everywhere. That’s just the reality of being on a big ship, combined with the realities of having to make the best of whatever other opportunities presented themselves along the way. In gearing up for this, I seriously started thinking about crop sensor bodies again.
Like most things in photography, everything is a balancing act. Shutter speed versus aperture versus ISO; motion blur versus depth of field versus noise. Or in this case, frame rate, resolution, and reach.
Going with something like the 7D mark II would have given my 100–400 the reach of a 160–640, and the AF system would allow me to stick a 1.4x teleconverter on there and still retain an AF point. And it does all of this while shooting action at 10 FPS.
I like prints, preferably big ones. Moreover, I get a huge amount of satisfaction from seeing one of my photos slide out of even my mid-range printer as a concrete expression of my vision.
That said, while I like the results, I definitely have a love-hate relationship with the process. Moreover, I’ve certainly made some concessions when it comes to printing that, while maybe not ideal, are what they are. I’m certainly not convinced that every photographer with some semblance of standards has to run out and buy a $2000 Epson Stylus Pro and print everything on $20 per sq. ft. fine art papers. I know I don’t; my printer is a lowly Canon Pixma Pro 9000 mark II that I got for a song, and I print mostly on Canon’s mid- and high-end papers.
In the 3 and a half years that I’ve been printing my own stuff, I feel almost as clueless about the whole thing as I was when I started. That said, I have learned some things, and this post is detailing some of them.
I have yet to find a printer manufacturer that writes what I consider good documentation.
Consider the difference between this:
Depth of Field Preview
The aperture opening (diaphragm) changes only at the moment when the picture is taken. Otherwise, the aperture remains fully open. Therefore, when you look at the scene through the viewfinder or on the LCD monitor, the depth of field will look narrow.
Press the depth-of-field preview button to stop down the lens to the current aperture setting and check the depth of field (range of acceptable focus).
As compared to this:
Select the Media Type setting that matches the paper you loaded.
The first excerpt (from a Canon camera manual) provides some inkling about what happens when you press the depth of field preview button and why you’d use it.
On the other hand, telling me to select the media type that matches the paper I’ve loaded is obvious to the point of uselessness.
What I need to know to make an informed decision is what the setting does, not just a restatement of its title. Does it adjust the platen height? Does it change the ink volume that’s used? If so, what do the various settings actually translate to?
I’m not singling out Epson here either. Canon’s printer manuals aren’t any different.
In the first couple of articles in this series I’ve looked at some of the more broad user interface points about Lightroom. Now I want to start getting into some more technical aspects, starting with virtual copies.
As I’ve said over and over, one of the biggest features of Lightroom is that it doesn’t directly manipulate the pixel values of the images in the catalog. Every image you see in the interface is made up of two “parts”. First, there’s the original image file, be it raw, JPEG, TIFF, PSD, or whatnot, on the disk. Second is the set of instructions that tell Lightroom how to process and display the image; these are stored in the catalog.
One of the advantages of having these two separate parts to an image (the raw file and the recipe) is that it enables a very space efficient way to address multiple alternative versions of an image. You don’t need to store completely separate files on disk, you just need the bookkeeping in the database for two images that point to the single file on disk.
The space savings of virtual copies are nothing to sneeze at. The develop recipe can be as little as a few KB. Even big develop recipes — and I’ll be going into much more depth about the storage of Lightroom’s develop settings in a future article — will almost always be under a few hundred KB. Compare that to the 3–6 MB needed for a 10 MP JPEG, never mind a 50 MP raw at 60–70 MB, or the even larger file sizes needed for fully converted RGB TIFFs or PSDs.
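The bookkeeping idea is easy to illustrate with a toy database. To be clear, this is my own simplified sketch, not Lightroom’s actual catalog schema; it just shows how two image entries, each with its own develop recipe, can share a single file on disk:

```python
import sqlite3

# Toy schema (hypothetical, NOT Lightroom's real one): many "image" rows
# can reference one physical file, each carrying its own develop recipe.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE files  (id INTEGER PRIMARY KEY, path TEXT);
CREATE TABLE images (id INTEGER PRIMARY KEY,
                     file_id INTEGER REFERENCES files(id),
                     develop_settings TEXT);
""")
db.execute("INSERT INTO files (id, path) VALUES (1, 'IMG_0001.CR2')")
# Master plus one virtual copy: two recipes, one raw file on disk
db.execute("INSERT INTO images (file_id, develop_settings) VALUES (1, 'color recipe')")
db.execute("INSERT INTO images (file_id, develop_settings) VALUES (1, 'B&W recipe')")

count = db.execute("SELECT COUNT(*) FROM images WHERE file_id = 1").fetchone()[0]
print(count)  # 2 versions of the image, backed by a single file
```

The cost of the second version is a few rows of metadata rather than a second multi-megabyte file, which is where the space efficiency comes from.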
Make no mistake, Adobe unquestionably got the overall concept of virtual copies right. However, there is one major detail that just drives me up a wall.
In the first couple posts of this series I’ve talked about Lightroom’s UI, and I’m going to probably get back to that in the future, but I also want to look at some of the technology in and around Lightroom. I also want to do this in part so I can talk about a couple of core technical issues in more detail in future articles as well.
There are three core technologies that Adobe has leveraged in Lightroom that make it what it is. Surprisingly, some of these are open source products, one of which is absolutely a critical core part of Lightroom. These core technologies are Adobe’s proprietary Camera RAW engine, the Lua programming language, and the open source database engine SQLite.
Adobe Camera RAW
It should be well known that Lightroom leverages Adobe’s Camera RAW technology to do all the heavy lifting. And, really, why shouldn’t they? Camera RAW is the same rendering engine that Photoshop, and really every Adobe program that supports reading raw files, uses to convert a raw file into a usable bitmap.
As far as raw engines go, Camera RAW isn’t horrible. It’s been a while since I’ve seen a good comparison between Camera RAW and the competition. However, the last major overhaul, process version 2012, put Camera RAW on par with Phase One’s Capture One in terms of the ability to distinguish and render fine details, and pretty close to its peers in terms of noise reduction.
That said, ACR does lag behind more specialized tools in many respects. Dedicated noise reduction software, like Noise Ninja or Neat Image, can typically do much better noise reduction than ACR can.
I’ve long been thinking about posting on some feature concepts for cameras: things that I’ve yet to see actually implemented but that I think would be useful to have, why that is, and maybe how they could be implemented.
This time, I want to talk about an idea that’s been floating around in my head for a couple of months now, especially with Canon and Nikon announcing their latest flagship cameras with blazing fast frame rates. The idea, as called out in the title, is an adaptive continuous drive mode.
In my experience, the vast majority of times that I’ve used continuous drive, the high frame rate wasn’t the end, it was just a means to an end. More specifically, a means to overcoming a limitation of my own. For example, when I’m shooting a bird in flight, or a whale breaching, what I almost always want is a single photo at the optimal point of action, not the whole sequence.
At the same time, I’ve found that there are real, though not really significant, drawbacks to continuous drive modes. Namely, using them creates a lot of images. That in turn uses a lot of storage space and has an impact on the difficulty of editing them. By far the two hardest sets of images I’ve had to edit were from days that made extensive use of continuous release at 10 FPS, either with similar subject matter or just from the sheer number of images.
Either there were just a massive amount of images to sort through, or the differences between images become quite small (say a feather position or the position of the nictitating membrane) and the sorting process becomes much more mentally taxing.
There are also the cases where you know that continuous release isn’t needed now, but will be needed at the spur of the moment. For example, if you’re photographing a leopard sitting in a tree waiting for it to jump down. You can’t afford the delay of trying to change the drive mode when the action happens, yet for the most part anything you shoot while it’s sitting still doesn’t need more than a single frame. (Of course, maybe you’re better than I am, and can consistently squeeze off one frame from a 10–14 FPS camera.)
At least that’s my thinking, very rarely do I need all of the FPS my camera can deliver, and when I do, often it’s interspersed between not needing very many FPS at all.
So suppose that, instead of always running at, say, 10 FPS, the camera adjusted the frame rate based on the scene and camera motion.
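As a minimal sketch of that idea, here is one hypothetical way the scaling could work. Everything here is my own invention, not any camera maker’s API: I’m assuming the camera can derive some normalized motion score from its metering sensor or AF tracking and simply interpolate the drive rate between a floor and its maximum:

```python
def adaptive_fps(motion_score, min_fps=2.0, max_fps=10.0):
    """Scale the continuous drive rate with detected motion.

    motion_score: 0.0 (static scene) to 1.0 (fast action), assumed to
    come from the metering sensor or AF tracking (hypothetical input).
    """
    motion_score = max(0.0, min(1.0, motion_score))  # clamp to [0, 1]
    return min_fps + (max_fps - min_fps) * motion_score

print(adaptive_fps(0.0))  # 2.0 FPS while the leopard sits still
print(adaptive_fps(1.0))  # 10.0 FPS when it jumps
```

The appeal is that you’d leave the camera in this mode all day: static scenes produce a trickle of frames that are easy to edit, while the camera is already at full speed the instant the action starts.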