Points in Focus Photography

Since Sony’s acquisition of Minolta, they’ve been throwing cameras at the market to see what sticks. This year is, in a lot of ways, no real exception. It’s safe to say that of all the camera companies, Sony is clearly the most willing to be adventurous with their products. A full frame fixed lens compact, who would have thought?


Sony tried the sub-$2000 entry-level camera routine some years ago in an attempt to garner some market share; it didn’t work then. Some would suggest it was premature; I would suggest the problem was that it was a Sony, and as camera companies go, Sony didn’t, and still doesn’t, really have what matters: glass. Which brings us to the end of 2012, with Canon and Nikon both releasing $2100 entry-level full frame cameras.

I’ve grouped the 6D and D600 together because I’m not going to go into any great detail with respect to their ergonomics. In a lot of ways they are very much clones of existing cameras with minor changes. The D600 is largely a D7000 with a bigger sensor. Likewise, the 6D is in many ways a 60D without the flash and with a bigger sensor. Both clearly look like solid entry-level bodies for a person who wants a full frame camera but doesn’t want to spend $3000-3500 on it. They also look like they’d make pretty solid backup bodies for a 5D-3 or D800 user who wants to keep the frame size but not spend quite as much.


Though it’s certainly not the first thing that comes to mind when thinking about the rocky road to video, the post-production workflow is something that’s become an annoying thorn in my side. I think part of this is organizational in nature and part is related to the way the software manages things.

I think, for me at least, there are two major factors in my discomfort with my video workflow. The first stems from the fundamentally different relationship between video clips, projects, and my portfolio compared to that of stills and my portfolio; a side effect of this is a fundamentally different way of organizing things. The second is that most of the video post-production software operates on a per-project level.

Okay, let me step back a second. I’ve come to video from still photography, where tools like Adobe’s Photoshop Lightroom have provided a really simple, yet incredibly powerful, workflow from camera to client. Everything I shoot goes into one big Lightroom catalog; from there things can be grouped dynamically or statically based on my needs. To me at least, this makes sense regardless of whether the images are part of a specific project or are just random images. Either way, they’re a part of my portfolio as a whole first, and of the individual project second.

The most obvious way I see this, and admittedly it’s only a minor annoyance, is that every time I go to work on a new project, say a lens review, I have to create a folder for it, then create entirely new projects in Prelude, Premiere Pro, and potentially After Effects, and then go about my way. Admittedly, I could just have one big lens test project and make everything separate but unconnected sequences, but that just seems off as well.
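
If nothing else, the repetitive folder setup is the part that’s easy to automate. Here’s a rough sketch of a script that scaffolds a new project’s folders; the folder names are hypothetical and purely illustrative of a layout like mine, not anything Adobe’s tools require:

```python
import os

# Hypothetical subfolders that each new review project ends up needing.
SUBFOLDERS = ["footage", "audio", "prelude", "premiere", "after-effects", "exports"]

def scaffold_project(root, project_name):
    """Create the empty folder layout for a new video project."""
    project_dir = os.path.join(root, project_name)
    for sub in SUBFOLDERS:
        os.makedirs(os.path.join(project_dir, sub), exist_ok=True)
    return project_dir

# e.g. scaffold_project("D:/video", "lens-review-24-70")
```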


When it comes to getting my head around video, I’m seriously reminded of Monty Python’s Spanish Inquisition sketch. Every time I think I’ve sorted something out or made progress, I find yet another thing, or three, that leave me feeling totally lost again.

I come to the world of video as a still photographer. In fact, it wasn’t until early this year, when Canon announced the 5D mark 3 that I even really considered shooting or working with video in any serious way.

When it comes to still photography, I feel as though I at least have a handle on what I’m doing. I may not always be as successful at what I set out to do as I’d like, and there are certainly many areas where I have a lot of learning to do; but at least when it comes to being behind the camera, on the computer, or composing a scene, I don’t feel like a fish out of water.

My limited experience with video so far is that virtually none of what I thought I had a handle on applies. In many ways, about the only thing I find video and still photography have in common is that they both use cameras and they both capture light; outside of that, it’s a very different can of worms.

To make matters worse, not only is it different, there’s a whole lot more to deal with, as video has a number of extra dimensions that simply aren’t an issue or consideration for the still photographer. If taking a photo is a 6-variable problem (depth of field, shutter speed, exposure, composition, perspective/focal length, and lighting), video adds another dozen or so variables on top of that. Things that are simply not a consideration for stills, like sound, or that are a consideration in only one way, like camera motion, become first-order considerations, and not always in obvious ways.

I’m not necessarily looking to become a Hollywood cinematographer, or to start producing my own indie shorts. In fact, I haven’t really developed a real direction for where I want to go with video beyond using it as a complement to stills where video tells the story better or more concisely than stills or text do. At a minimum, to me this means having at least a functional understanding of how to light, shoot, edit, and process video content. Moreover, not having the budget for a production company means being able to do it all myself.

This brings me to what I’m trying to do with this series.

I’m sure I’m not the only photographer who’s decided to try their hand at video, and I’m not at all convinced I’m going to be able to make a successful go of this. Nevertheless, I thought it would be worth sharing some of the hurdles, thoughts, and solutions I’ve come across along my rocky road to video.

Similar to crepuscular rays, but different. Anticrepuscular rays are similar to crepuscular rays (shafts of light scattered by atmospheric dust, separated by bands of shadow where the light is blocked by a cloud or other irregular object), but due to perspective they appear to converge at the antisolar point, the point opposite the sun. These were caused by the sun setting behind a mid-sized storm cloud, and while not nearly as dramatic as they can be, it was the first time I’ve ever managed to see, let alone capture, them.

Well, here we go again, revisiting my previous post’s AF drive speed tests with a different camera, this time an EOS 40D.

Like the last time, timing was approximated by filming the drive sequence at NTSC 60 FPS and counting the frames from the first frame before motion occurred until the first frame after the motion stopped.
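
For reference, converting the counted frames into the times in the table below is simple arithmetic. A minimal sketch of the conversion (the frame numbers in the example are made up, not actual measurements):

```python
NTSC_FPS = 59.94  # "NTSC 60 FPS" is nominally 59.94 frames per second

def slew_time(frame_before_motion, frame_after_stop, fps=NTSC_FPS):
    """Seconds between the last frame before motion starts and the
    first frame after motion has stopped."""
    return (frame_after_stop - frame_before_motion) / fps

# Hypothetical example: motion begins after frame 12 and has stopped by
# frame 26, which works out to roughly 0.23 s of slew.
print(round(slew_time(12, 26), 2))
```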

All times in seconds.

Lens                                Dark: Inf→Macro Slew   Dark: Total Time   Light: Inf→Macro Slew   Light: Total Time
Canon EF-S 10-22mm f/3.5-4.5 USM    0.23                   0.90               0.23                    0.48
Canon EF 16-35mm f/2.8L II USM      0.27                   0.94               0.26                    0.54
Canon EF 24-70mm f/2.8L USM         0.81                   2.14               0.41                    0.85
Canon EF 24-105mm f/4L IS USM       0.34                   0.98               0.34                    0.68
Canon EF 70-200mm f/4L IS USM       1.05                   2.51               0.38                    0.77

Like last time, I haven’t really had the time to dig into the results and chew on what exactly is going on. That said, there were some very interesting points that became painfully obvious very quickly with respect to the AF systems.

In my initial tests with the 1D3, the light used for the AF light tests was a 100W incandescent lamp behind a Smith Victor DP10 diffuser. Only with the 16-35/2.8 was it necessary to provide a second diffusion layer (a piece of vellum was used) to stop the camera from momentarily pausing while hunting. With the 40D it was necessary to double diffuse the front light for the entirety of the tests. Moreover, a sheet of vellum was insufficient to stop the mid-hunt pauses, and it was necessary to use a sheet of opaque white acrylic.

I’m not entirely sure yet what this is telling me about the AF sensors, as it’s certainly not a product of the lenses. My only current working theory is that the 40D is momentarily seeing some sort of pattern in the out-of-focus image that causes it to pause to re-evaluate the situation and then move on.

For the moment, I’m not really willing to start drawing any conclusions from the data, and there are quite a few more tests I’d like to try. However, I didn’t want to sit on the data and possibly lose it either. So there it is, part 2 of my rather limited look at lens focus slew performance during autofocus “hunting” operations.

A bit of back story here, since this is going to be more of a blog post and not a technical article. When Imaging Resource previewed the Canon EOS-M, they complained that the AF speed (using the hybrid phase and contrast detect sensor) was slow, comparable, as they put it, to the EF 40mm f/2.8 STM they were using on the Rebel T4i they were testing. They gave some numbers, however incomplete, saying the STM lenses took 1.2-1.7 seconds to focus and lock on the T4i/EOS-M, while other systems were doing contrast-based focusing in a quarter of that time.

Of course that got me wondering how fast the 40mm f/2.8 STM actually is at focusing. The story from Canon has been that the STM lenses are optimized for smooth, and silent, focusing when shooting video. Though the camera is quite important, part of the focus speed comes down to the lens, not just the camera, which raises the question: just how fast are the STM lenses compared to other Canon lenses?

Since I don’t have a 40mm STM to test with, I’ve started with the following bit from SLR Gear.

In practice the lens is indeed much quieter to focus than previous lenses, and is still very quick to focus, taking about one second to go from close focus to infinity. – SLR Gear

Now SLR Gear isn’t completely transparent on the conditions they tested that under, so there’s a lot left up to assumption. In coming up with my numbers, I elected to simply test the hunting speed from infinity to close focus, and solely the lens drive, not the complete focus operation. That is, the time from when the lens starts moving to when the lens stops moving is all that I’m measuring.

Secondly, since behavior will change with light (the sensor takes longer to integrate sufficient signal in the dark), I’ve provided two times. The dark slew time is taken with a lens cap on the lens and represents a situation where the camera is operating below the bottom extreme of its AF capabilities.

For the light number, a diffused 100W tungsten lamp was placed within the minimum focus distance (double diffused in the case of the 16-35, since the focus distance and DoF were such that the camera would lock on even with the light almost touching the lens).

The test camera was an EOS 1D mark 3 using the central AF point only. Timing was taken by counting frames from 720p60 video and averaging over 3 focus runs. While 60 FPS video doesn’t result in the best timing accuracy, it is good enough to get down to about 17 ms resolution, so these numbers are good to within about ±0.02 s.
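
To put that in concrete terms, a single frame at 60 FPS is roughly 17 ms, and each reported time is the mean of three counted runs. A quick sketch (the frame counts are made-up examples, not the measured data):

```python
FPS = 59.94  # 720p60 video is nominally 59.94 frames per second

# One frame bounds the timing resolution: roughly 16.7 ms.
print(f"one frame = {1000 / FPS:.1f} ms")

def average_slew(frame_counts, fps=FPS):
    """Average slew time in seconds over several counted focus runs."""
    return sum(count / fps for count in frame_counts) / len(frame_counts)

# Hypothetical counts from three runs of the same lens: about 0.27 s average.
print(round(average_slew([16, 17, 16]), 2))
```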

Lens                              1D-3 Slew, Light (s)   1D-3 Slew, Dark (s)
Canon EF 16-35mm f/2.8L II USM    0.27                   0.27
Canon EF 24-70mm f/2.8L USM       0.41                   0.81
Canon EF 24-105mm f/4L IS USM     0.33                   0.34
Canon EF 70-200mm f/4L IS USM     0.72                   0.98

One thing that I did find somewhat interesting: the 16-35 and 24-105 didn’t slow down appreciably under dark conditions; instead, they paused longer at the midpoint (macro in the case of my tests) before returning to the start position.

Of course, what this means with respect to SLR Gear’s 1-second comment on the 40mm f/2.8 STM, well, your guess is as good as mine. If the lens was capped, 1 second isn’t terribly fast, but it’s not all that bad either. On the other hand, if it was under brighter conditions, a 1-second seek time doesn’t compare particularly well against the lenses I have handy to run these tests on when they’re used on a pro body.

In the meantime, I think I’m going to repeat the tests on a mid-tier body (40D) and an entry-level body (400D) and see how much of a difference shows up compared to the 1D-3 I used for this round of tests. I’m also working to try to come up with a more useful “real world” lock-on test, but I’m not exactly sure how that’s going to work yet.

I’ve been really scratching my head over this one. For the past couple of weeks (it seems; it could have been longer), I’ve been seeing my images on pure white backgrounds come out with very light gray backgrounds. I had thought it was an issue with ImageMagick, the software that re-samples things I post here. It turns out it’s actually Lightroom 4.1 that’s doing it (at least the 64-bit Windows version).

To reproduce, take an image with a pure white (i.e. 255 255 255) background and export it (it doesn’t matter what color space), once with a watermark and once without. Then open the two in Photoshop and measure the whites with the eyedropper. Of course, if you have a calibrated display, the difference should be obvious right away if they’re on a pure white background.
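
If you’d rather not eyeball it in Photoshop, the same check can be done programmatically. This is just a sketch using Pillow, and the file names are hypothetical:

```python
from PIL import Image  # Pillow

def sample_background(path, xy=(5, 5)):
    """Read an RGB pixel from a spot that should be pure white background."""
    with Image.open(path) as img:
        return img.convert("RGB").getpixel(xy)

# Hypothetical export file names; point xy at a known background area.
print(sample_background("export-no-watermark.jpg"))    # expect (255, 255, 255)
print(sample_background("export-with-watermark.jpg"))  # comes out slightly gray
```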

The following two images illustrate the difference, including a Photoshop sample of the whites. Develop shows the color as being 100% 100% 100%, and both were exported using the same preset; the only difference is that the no-watermark version has the watermark turned off. For that matter, the watermark is plain text generated inside of LR, not an imported file (though that does the same thing).



A somewhat uninspiring 4th of July compared to last year’s, at least in terms of having something other than fireworks in the images. That said, unlike last year’s location, I managed to get 2 shows (the large fireworks are from the Fort Lauderdale Yacht Club’s presentation, and the small fireworks are the City of Fort Lauderdale’s presentation on the beach) for the price of one.

As an aside, the Fort Lauderdale Yacht Club’s finale was photographically disappointing again this year, if not worse than last. Lots of bangs and bright flashes, no really impressive high flying chrysanthemums and color. Oh well, maybe next year.

Over the last couple of years, many countries have been moving to outlaw traditional incandescent light sources as inefficient, to be replaced by LED and CFL sources. For basic lighting needs these sources largely serve their purpose. The human mind is a wonderful manipulator of perceived color, even when the spectrum of the source is discontinuous. Unfortunately, cameras, be they digital or film, aren’t eyes, and don’t have a marvelous brain behind them to munge things into perspective.

The Academy of Motion Picture Arts and Sciences has been studying the problem as it affects cinematographers (and photographers); their work can be found here. It’s well worth going through the entirety of the presentation videos, even if critically accurate color rendition isn’t a necessity in your work.

If there’s one thing I’d draw some attention to, it’s that the CRI used to indicate how “good” a CFL lamp is, is calibrated for the human eye, not the camera. A high-CRI lamp will likely look very much like the incandescent/black-body source it’s trying to mimic, but that’s no guarantee that the camera will see things the same way.

Food for thought at least.
