Points in Focus Photography

Last week I was going through my RSS feeds when I came across this article on DIY Photography where the author was ranting about losing business because a client wanted the job shot with a full frame camera and not a “crop” camera. While I have my own thoughts on that issue, it reminded me of an article I started writing a couple of years back on crop factors and formats.

It’s long been a thing in digital photography to talk about crop sensor cameras. APS-C, DX, micro-4/3rds, you know the ones. Personally, I’ve about had it with that; it needs to stop. There was a time when there was some truth to some of those formats being crops, but that time is largely gone. As far as I’m concerned, none of the so-called crop formats are really crops of anything anymore; they’re formats in their own right and should be treated that way. They may have been crops 10 or more years ago, but they aren’t any longer.

For me the line in the sand is simple. I consider a format a crop when it’s expected to exist in a system without anything being designed to accommodate it specifically. Likewise, it becomes a format in its own right when its eccentricities are catered to specifically (whether by design or evolution), and certainly when it develops conventions that are broadly conformed to across manufacturers. What draws the distinction in my mind isn’t whether something shares a lens mount; it’s whether lenses are designed specifically for that size of frame.

APS-C has most certainly crossed that line. It made the jump ages ago, though apparently nobody was paying attention. It happened the moment the camera makers started making lenses specifically for that frame size instead of leaving photographers stuck using full frame lenses that never really covered the right angles of view, or that simply left them lacking coverage.

Read the rest of the story »

No camera is perfect, and no system is either. Canon, like most camera manufacturers, does a good job of pushing forward on the myriad fronts that cameras need to advance on with each successive generation. That said, the devil is in the details, or so the saying goes. As refined as Canon’s system is, there are still loads of little rough edges that could, and should, be smoothed over. These are three I constantly find myself wishing Canon would fix.

Better High-end Battery Compatibility

Put it this way: I should be able to slap the LP-E4 batteries from my 1D into the battery grip for my 5D or 7D and go shooting.

Yes, it’s a small gripe, though if you’ve read any of my material in the past, including my review of the BG-E11 battery grip, you’ll know battery compatibility is a long-running pet peeve of mine. I’m sure Canon has never lost a camera sale because the battery in the 1D can’t be used in a gripped 7D or 5D. Likewise, I’m sure nobody is dying because their $8000 camera and their $3000 camera can’t share batteries.

That said, why shouldn’t compatibility be the standard design? It’s just good system design, and it rounds over yet another little edge that makes our lives as photographers just a shade easier.

Ultimately it comes down to this: compatible batteries would let me carry less crap when I travel. For example, as it stands now, if I’m traveling for any length of time I’m carrying:

  • a charger for my 1D
  • a charger for my 5D
  • a surge strip to plug them all in so I don’t lose any of them
  • 6 camera batteries—a primary and backup for my 1D and a primary and backup set for my gripped 5D

Give me my desired battery compatibility and I’d need to pack and carry half as much crap. Since I know I can get by on a single LP-E4 a day, I could carry 1 charger and 3 batteries instead of the list above. What’s not to like about that?

Besides, it’s not as if it’s bad for Canon’s bottom line either. At best, it’s a wash; I’d be buying fewer, more expensive batteries instead of more cheaper ones. Nor does it seem like it should be a big engineering problem. The semi-pro cameras already have to regulate and step down the 7.2V battery voltage to the 1.5-3.3V the electronics use; heck, it’s likely the regulators could already deal with the higher voltage of the pro batteries as it stands. Further, the problem of designing a compatible battery grip isn’t any more difficult than designing an incompatible one.

Read the rest of the story »

Color Management

So I was reminded yet again recently why I so despise color management, especially in Windows. This is half rant, half looking for answers, and half random things I’ve noticed about color management. Not that I was actively looking for any of this, mind you. In the weeks since I wrote about old colorimeters and modern displays, I’ve run across a couple more amusing but notable tidbits of color management fun.

Color management, at least the technical stuff behind it, is something of an arcane art to me. I don’t at all understand what’s going on, so don’t look to me to explain anything here or grant any great insight. In fact, while I’m quite comfortable profiling my displays and getting acceptable results, I still dread my quarterly re-profiling runs.

Color management in Windows is, at best, a tragically chaotic affair. The OS doesn’t do color management globally; applications have to opt into it, and even then it seems only certain parts of them get managed, or there’s some interaction between the display profile and what’s rendered that completely eludes me.

Worse, Microsoft didn’t bother to implement color management in most OS-related areas; especially annoying to me is the desktop background. Further, where they did implement color management, it’s not always complete. For example, the Windows Photo Viewer is actually color managed, but if you use an ICC version 4 profile, it loses the plot.

Which raises the question, why bother with ICC v4 profiles in the first place?

Not to put too fine a point on it, they give you better color accuracy.

I haven’t done anything extensive with this, but I was curious why my SpectraView display kept measuring so poorly in terms of Delta E (ΔE). For those who aren’t familiar with it, ΔE is a measure of the difference between two colors; with displays, it’s typically the difference between the color displayed and the color that should have been displayed. The lower the ΔE, the closer the colors are.

For a display, a ΔE of 1 is good; the vast majority of people won’t be able to see a difference between the color that is and the color it’s supposed to be. Professional color critical displays will usually do better than that. Most consumer displays are a lot worse.
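
To make that concrete, here’s a minimal sketch of the simplest flavor, CIE76, where ΔE is just the Euclidean distance between two colors in CIELAB space (newer formulas like ΔE2000 weight the terms differently); the Lab values below are made up purely for illustration:

    import math

    def delta_e_76(lab1, lab2):
        # CIE76 delta E: Euclidean distance between two CIELAB colors.
        return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

    # The color the display was asked to show vs. what the colorimeter measured
    # (both Lab triples are hypothetical, just to show the scale of the numbers).
    target   = (53.2, 80.1, 67.2)
    measured = (52.9, 79.6, 66.9)

    print(round(delta_e_76(target, measured), 2))  # ~0.66, below the ~1.0 "visible" threshold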

On my desk I have three displays: a Dell E248WFP, which is a cheap TN-panel consumer-level display; a Dell Ultrasharp U2408WFP, which is a respectable semi-high-end PVA panel; and an NEC PA241W-BK, a color critical IPS display. The cheap TN display, after calibration, has a ΔE around 2.3; the Ultrasharp, around 1.2; and here’s where the fun with profiles shows up.

Using an ICC version 2 table-based profile, my SpectraView measures with a ΔE of 1.1. However, switching to a matrix-based version 4 ICC profile drops the ΔE to 0.7.

So I’ve switched back to V4 matrix profiles, and in the process solved the somewhat confusing question of why my display suddenly seemed to tank earlier this year.

Of course, one might ask why I switched from the i1-recommended V4 matrix profile back to the V2 table profile in the first place. The answer, unsurprisingly, is Windows; specifically the Windows Photo Viewer, which is incapable of using V4 matrix profiles. Of course, the solution to that is just to use something other than Windows Photo Viewer to preview images, say IrfanView.

As I said, I hate color management.

Converting an Old CF Dog to SD Cards

I’ve never been a huge fan of SD cards. I can point to reasons, some sound, some maybe not so much, but given the choice I’d much prefer to use CF over SD. Maybe this stems from some initial bias; my first digital cameras, even the crappy point and shoots, were all compact flash based. So was my first SLR, and so are all my current SLRs.

It’s not a totally irrational bias; there are good reasons to prefer compact flash. For starters, the cards are bigger than SD cards. Bigger cards are easier to find if you drop them, and easier to handle with gloves on. CF cards are also generally faster, even at comparable sizes and rated speeds.

Actually, let me talk about speed for a moment. That too has long been one of my problems with SD cards. They’re slow, or at least that’s always been my perception of them. So I was pleasantly surprised when I benchmarked a 32GB Lexar 400x SD card and found it wasn’t atrociously slow. No, it wasn’t as fast as the 32GB 400x UDMA compact flash cards I normally use, but its performance wasn’t radically worse either (6.5% slower at writes, 12.5% slower at reads). The difference in the speed that matters most in camera, write speed, is so small it isn’t even worth whining about.

Of course, SD cards have one huge advantage over compact flash: cost. Where a 1000x UDMA 7 CF card can run upwards of the cost of an entry-level SLR, the reasonably sized (8-16GB), reasonably fast (400x) SD cards I use are in the $10-30 range. Why put the wear and tear (and yes, the NAND flash memory in cards does wear out eventually; not on 1,000-year timescales, but more like after several thousand write cycles) on expensive CF cards when their advantages aren’t needed?
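
As a point of reference, card “x” ratings are multiples of the original 150 KB/s CD-ROM transfer rate, so converting the marketing numbers into rough throughput figures is simple arithmetic; a quick sketch:

    # An "x" speed rating is a multiple of 150 KB/s (the old 1x CD-ROM rate),
    # so the marketing number converts to an approximate peak transfer rate.
    def x_rating_to_mb_per_s(x_rating):
        return x_rating * 150 / 1000  # KB/s -> MB/s

    for rating in (400, 1000):
        print(f"{rating}x is roughly {x_rating_to_mb_per_s(rating):.0f} MB/s")
    # 400x -> ~60 MB/s, 1000x -> ~150 MB/s; rated maximums, not sustained in-camera writes.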

Until earlier this year, I was pretty die-hard against SD cards in cameras, at least at the prosumer and pro level. I’m still not the biggest fan of them. At the same time, other than when I’m shooting video or know I’m going to want to shoot larger bursts at high FPS, I’m shooting almost everything on cheap, comparatively disposable (at least I won’t be crying when one dies) SD cards now.

Somewhat amusingly, I’ve also found a couple of things I like about them over CF cards. For starters, the lack of pins means I’m much less concerned about bending pins in my camera’s card slot or card reader. It’s never happened to me, but it’s always been a concern in the back of my mind. I’m also increasingly becoming a fan of the push-in-and-it-pops-out release mechanism.

Going forward, the next apparent card format, XQD, adopts some of the perks I like from SD and combines them with a somewhat bigger card and faster performance from CF. That said, it looks like it’s going to be a long, slow transition to XQD, if it happens at all. As it stands, only the Nikon D4 supports it, and Lexar and SanDisk waited until well after the D4 was released to announce they would begin making XQD cards. All told, it’s been more widely adopted than CFast was, but one camera still isn’t much of a market.

In the end, what I think I’m trying to say, at least to the guys like me who’ve always shied away from SD cards, is they aren’t all that bad.

If you’re still using an old Spyder 3 LT or Pro, or really any colorimeter of that age or older, and are looking at upgrading to new wide gamut or LED backlit displays, the odds are good your trusty old colorimeter won’t do so well.

The simple reality is that the Spyder 3 was designed at a time when wide gamut cold-cathode fluorescent (CCFL) and LED backlights simply weren’t considerations. Today those two technologies are increasingly prevalent in the displays we as photographers are likely to encounter when shopping for something new.

My Spyder 3 met the end of its utility almost a year ago, when I replaced my aging and dying Dell 2408WFP (a fairly standard gamut PVA display) with an NEC PA241W-BK (a SpectraView series wide gamut IPS display). The Spyder 3 simply couldn’t calibrate and profile the new display properly.

The limitations were reconfirmed to me just recently, when I went to help a friend set up a new LED backlit, standard gamut PVA panel as a secondary display on his machine. He’s also a Spyder 3 user, having had good success with Spyder products for years on his older CCFL displays. However, getting a good profile out of this LED backlit display has been next to impossible.

So what’s the deal?

Well, a big part of this comes down to how the color calibration hardware is built. Relatively inexpensive colorimeters use multiple color filters over more or less standard photo detectors to determine what color they’re actually looking at. Since the hardware and software know the pass-bands of the various color filters, they can compute the actual color being shown based on the responses of the various detectors.

For that to work right, the hardware and software have to be designed with the prospective light sources and gamuts in mind; meaning the designers had to plan on the spectral output of a wide gamut CCFL or LED backlight when they were building the hardware.
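
Conceptually, and this is a rough sketch of the idea rather than any vendor’s actual math, the correction looks something like this: the colorimeter reports three filtered channel readings, and a matrix derived for a particular backlight spectrum maps them to XYZ. The matrix and reading below are invented for illustration; the point is that a matrix built around one spectrum won’t map a different one correctly.

    # Rough sketch of a filter-based colorimeter's math (not any vendor's actual
    # implementation): three filtered channel readings are mapped to CIE XYZ by a
    # correction matrix derived for a specific backlight spectrum. The matrix and
    # reading below are invented purely for illustration.
    CCFL_MATRIX = [
        [0.412, 0.358, 0.180],
        [0.213, 0.715, 0.072],
        [0.019, 0.119, 0.950],
    ]

    def raw_to_xyz(raw, matrix):
        # Apply the 3x3 spectral-correction matrix to the raw channel readings.
        return tuple(sum(m * c for m, c in zip(row, raw)) for row in matrix)

    reading = (0.42, 0.38, 0.21)             # hypothetical filtered sensor response
    print(raw_to_xyz(reading, CCFL_MATRIX))  # only meaningful if the matrix matches the backlight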

This is where the Spyder 3 falls down. Back in 2007, when the Spyder 3 was coming to market, wide gamut CCFLs were the preserve of only the most expensive professional displays, and LED backlights weren’t all that common. Of course, in the intervening years, LED backlights have become much more prevalent, and wide gamut CCFLs have made their way into more and more mid-tier photographer displays.

The moral of the story is simple: if you’re looking to replace or upgrade your displays in the current market (2013), are considering any of the wide gamut IPS or LED backlit LCD displays, and are still using a Spyder 3 LT or Spyder 3 Pro, you should probably plan on upgrading the colorimeter as well.

Canon Rumors has posted a couple of rumors lately about the possibility of a 75- or 80-megapixel camera from Canon that might in fact use a multi-level Foveon style sensor. I can’t comment on the veracity of the rumors, but it’s a strategy I’ve always liked for a number of reasons.

Multilayer sensors do a number of things that improve efficiency, mostly by eliminating cruft that’s necessary for Bayer pattern sensors. First, they can lose the low-pass filter; it’s largely unnecessary because the sensor samples all colors at all points, so moiré isn’t a significant concern. More importantly, the efficiency-sapping color filters are gone.

The real benefit to me is the flexibility that a multi-layer sensor brings to the camera.

For those interested in the absolute best image quality, it gives you a moiré-free image that records full color information at every point. Moreover, since the sensor doesn’t need a low-pass filter and isn’t using a Bayer pattern, you get the full spatial resolution out of the sensor, not just a significant fraction of it.

For those who need frame rate over absolute maximum quality, you can read out a multilayer sensor as if it were a Bayer sensor, and suddenly you’re looking at a third of the data to process and therefore up to 3x the frame rate.

Finally, for people who want to shoot monochrome, you can treat the sensor as a true monochrome sensor by summing all three of the stacked photo sites into a single monochrome output. While that market isn’t huge, it’s certainly big enough that Leica brought out a monochrome M and there have been a couple of monochrome medium format backs; and the amount of work needed to implement the feature isn’t that big either.
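
To make the flexibility concrete, here’s a toy sketch (pure speculation on my part, not how any real camera’s readout works) that treats the sensor as a stacked array and reads it out in the three modes described above:

    import numpy as np

    # Toy model of a stacked (Foveon-style) sensor: an H x W x 3 array where the
    # last axis holds the three layers recorded at each photo site location.
    # Illustrative only; not how any real camera's readout actually works.
    stack = np.random.rand(4, 6, 3)

    # 1. Full-quality mode: read every layer at every location (3x the data).
    full_color = stack

    # 2. Bayer-like mode: read one layer per location in an RGGB pattern,
    #    cutting the data (and readout time) to roughly a third.
    bayer = np.empty(stack.shape[:2])
    bayer[0::2, 0::2] = stack[0::2, 0::2, 0]  # R
    bayer[0::2, 1::2] = stack[0::2, 1::2, 1]  # G
    bayer[1::2, 0::2] = stack[1::2, 0::2, 1]  # G
    bayer[1::2, 1::2] = stack[1::2, 1::2, 2]  # B

    # 3. Monochrome mode: sum the stacked layers into a single luminance value.
    mono = stack.sum(axis=2)

    print(full_color.size, bayer.size, mono.size)  # 72 24 24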

With that said, let’s jump back to the wild speculation on the 75MP multilayer SLR that Canon Rumors reported.

Foveon, and Sigma, have always characterized a pixel as a single-color photo site, even though three of those photo sites make up a single three-color pixel (picture element) in the resulting image from their X3 cameras. That’s certainly a fair way to look at it, considering that a pixel on every other digital camera is only a single, single-color photo site too.

If we assume that Canon will run with that same definition, since it makes a nice big marketable number, then the rumored 75MP camera would only have a spatial resolution of 25MP and images would be about 6124 x 4082 pixels.

A 25MP spatial-resolution sensor actually sounds a whole lot more reasonable to me than a 75MP one does. The pixel pitch would be around 5.9 microns instead of 3.4 microns. That’s well within Canon’s comfort zone for manufacturing, slotting in right between the 5D Mark III’s 6.25μm pixels and the 40D’s 5.7μm pixels, and well above the 4.3μm pixels in their APS-C offerings.
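
The back-of-the-envelope math behind those numbers, assuming a standard 36 x 24 mm full frame sensor:

    # Back-of-the-envelope check on the rumored numbers, assuming a standard
    # 36 x 24 mm full frame sensor and a 3:2 image.
    sensor_width_mm = 36.0
    image_width_px, image_height_px = 6124, 4082

    spatial_mp = image_width_px * image_height_px / 1e6
    pixel_pitch_um = sensor_width_mm * 1000 / image_width_px

    print(f"spatial resolution: {spatial_mp:.1f} MP")  # ~25.0 MP
    print(f"pixel pitch: {pixel_pitch_um:.2f} um")     # ~5.88 um, i.e. roughly 5.9 microns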

From the sensor standpoint, you have comparatively big pixels, at least next to a D800’s or Canon’s APS-C cameras’, that are simultaneously unencumbered by efficiency-eating color and low-pass filters. That combination has the potential to be quite impressive in terms of image quality.

Then we come to the question of resolution. Canon finally unified the EOS-1 line with the EOS-1D X. I’ve always felt the split in the past was due to technical limitations more than anything else. With a single-layer sensor, the only way to reduce the amount of data being read, and therefore increase the frames per second, is to reduce the image size.

However, if Canon implemented a system similar to what I described above, there’s no need to break the 1D line apart again to meet both FPS and resolution targets. Never mind that current Digic 5+ processors are within spitting distance of being able to process the 300 14-bit megapixels per second needed to support 25MP at 12 FPS; the Canon 70D’s single Digic 5+ is already pushing 141.4 14-bit MP/s, just 9.6 MP/s short of the roughly 150 MP/s each processor would need to handle in a dual-Digic body like the 1D X’s.

Moreover, if you can do 12 FPS with 25MP of data, you can do 4 FPS with the full 75MP of data from reading all colors at all points. In one body, you get a reasonable frame rate at really good quality, or a high frame rate at a quality that would likely still be better than Canon’s current EOS-1D X or 5D Mark III.
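
The data-rate arithmetic behind those two modes works out like this (my numbers, not anything from Canon):

    # Rough data-rate arithmetic for the two speculated readout modes.
    spatial_mp = 25            # single-layer, Bayer-style readout
    full_readout_mp = 75       # all three layers read at every location

    throughput_mp_s = spatial_mp * 12            # 25 MP x 12 fps = 300 MP/s
    full_mode_fps = throughput_mp_s / full_readout_mp

    print(throughput_mp_s)  # 300 MP/s of 14-bit data
    print(full_mode_fps)    # 4.0 fps at the full 75 MP readout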

In short, while it might not be what Canon ultimately brings to market, the rumor certainly passes my sniff test of what seems feasible. Of course this is all speculation based on a rumor reported by a rumor site. There’s no telling whether Canon will actually bring something like this to market, but sometimes it’s fun to speculate.

Rolling shutters, jell-o-cam: anybody who’s serious about video production and using SLRs, or even high-end cinema cameras, should be familiar with the effects of rolling shutters. Even if you’re not, if you’ve ever watched a video with an airplane’s propeller in it and the propeller looked anything but straight, you’ve seen the effect of a rolling shutter.

Back in 2011, Zacuto rounded up a number of industry players and put the state of the art in cameras at the time to the test. These included the Red One, Arri Alexa, and Sony F-35 on the high end, and a number of more approachable VDSLRs like Canon’s 5D Mark II, 7D, and 1D Mark IV, and Nikon’s D7000. The final segment of their 3-part series included tests for rolling shutter performance.

They used two test mechanisms, a drum with vertical lines and a rotating disk. Combined, they simulate pretty much all the cases you run into with rolling shutter: the drum covers panning, and the disk covers rapidly rotating objects, like a plane’s propeller.

I’ve replicated them after a fashion, though this is spit-and-baling-wire engineering at its finest. Both tests are powered by my trusty DeWalt cordless drill rotating somewhere between 0 and 450 RPM. The disk test used a circle of black matte board with some white gaffer tape strips placed across it; the drum test used an old zip-tie container (so old, in fact, that I broke it trying to put a hole in it) wrapped in white paper with strips of black gaffer tape added for lines.

Like I said, these aren’t strictly scientific tests. Thanks to the breakage and an off-center hole, my drum test is considerably more wobbly than it could be, and I have no idea what RPM I’m actually spinning at. Both tests were shot with a 1/500th shutter; at 1/60th there was so much motion blur that the disk became essentially gray instead of having distinct lines. The 5D shots are at f/4 and ISO 800, the EOS M at f/5.6 and ISO 1600, due to the lenses used.
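
For a sense of why the lines bend at all, here’s a rough estimate of how far the disk turns while a rolling shutter reads the frame out top to bottom; the ~30 ms readout time is an assumed, typical VDSLR figure, not a measured value for either camera:

    # Rough estimate of how far the spinning disk rotates during one frame's
    # rolling-shutter readout. The 30 ms readout time is an assumption for a
    # typical VDSLR, not a measured figure for the 5D or EOS M.
    rpm = 450                 # top end of the drill's speed range
    readout_time_s = 0.030    # assumed time to read the sensor, top row to bottom

    degrees_per_second = rpm / 60 * 360
    skew_degrees = degrees_per_second * readout_time_s

    print(f"{skew_degrees:.0f} degrees of rotation during readout")  # ~81 degrees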

That said, since I didn’t do this in nearly as high-tech and controlled a manner as the Zacuto team did, I don’t think you can readily compare my results to theirs.

Near Zero-Budget Video Studio Project

VDSLRs have given photographers and videographers a huge amount of latitude in where they can shoot and the quality of those shots. The large sensors and high ISO capabilities make lighting much less of an issue, and advances in LED light sources have made that lighting much more portable and easy to deal with.

That said, if you’re considering any kind of long-running production and want any kind of consistency to the set, you need a studio of some kind. But where? How big does it have to be? And what about a set?

Read the rest of the story »
