A few months ago I posted a video describing a hack that I was using that allowed me to use my colorimeter as a basic light meter. In response to that video, someone suggested that I should really just get a light meter. Of course that was my plan all along, but I didn’t want to stop shooting just to find and buy a light meter.
In the past I’ve written down my take on the features of recently announced cameras. Now that I’m diving into video, I thought I’d take a stab at doing the same thing, but as a video.
In this case, Canon recently announced the latest generation of the EOS-1DX, the EOS-1DX Mark III. And while in many respects the camera is really just another evolution of the professional DSLR, there are some interesting new features in the design that I wanted to talk about.
Storage is one of the biggest unanticipated hurdles I’ve come across when it comes to shooting and working with video. And not in the field, actually; my biggest problems are storing and backing up the video I shoot in a reasonable way.
This is a simple hack that uses a colorimeter with an ambient light reading mode, such as the i1 Display 2 or ColorMunki Display, together with the ArgyllCMS package’s spotread tool, as an incident light meter.
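The only math involved is converting the meter’s lux reading into an exposure value. Here’s a minimal sketch in Python, assuming the commonly used incident-light calibration of 2.5 lux per EV step at ISO 100 (the exact constant varies slightly between meter manufacturers, so treat the result as an estimate):

```python
import math

def lux_to_ev100(lux):
    """Convert an ambient illuminance reading (lux) to EV at ISO 100.

    Uses the common approximation E = 2.5 * 2**EV, i.e. EV = log2(E / 2.5).
    Meter calibration constants vary slightly by manufacturer, so this is
    an estimate, not a calibrated reading.
    """
    return math.log2(lux / 2.5)

# A reading of 2560 lux works out to EV100 = log2(1024) = 10.
print(round(lux_to_ev100(2560), 2))  # → 10.0
```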
To convert camera settings to EV, use my exposure settings to EV conversion tool.
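The conversion behind a tool like that is just the standard exposure-value formula. A quick sketch, with the ISO term normalized back to ISO 100 (f/1 at 1 second and ISO 100 is EV 0 by definition):

```python
import math

def settings_to_ev100(f_number, shutter_s, iso=100):
    """Compute the ISO-100-referenced exposure value for camera settings.

    EV100 = log2(N^2 / t) - log2(ISO / 100), where N is the f-number
    and t is the shutter time in seconds.
    """
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# f/8 at 1/125 s, ISO 100: log2(64 * 125) = log2(8000) ≈ 12.97
print(round(settings_to_ev100(8, 1 / 125), 2))  # → 12.97
```

Bumping the ISO to 400 with the same settings drops the light-referenced value by two stops, to about EV 10.97, which is how the tool lets you compare readings taken at different ISOs.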
It’s another year, and the peak of another hurricane season, and living in South Florida means there’s another hurricane headed our way.
As things currently stand it’s still too early to really know how things are going to shake out. The National Hurricane Center is currently predicting that a major hurricane in the category 3 range, with winds around 115 MPH, will make landfall somewhere along the Florida peninsula. Though with the storm still more than 3 days away, the actual strength and landfall location are far from certain.
My family and I have already started implementing our hurricane plans and should be as prepared as we can be by the time the storm arrives. However, as you can imagine, this is also an incredibly stressful time for us, and, as they say, even the best laid plans never survive contact with the enemy.
Ultimately, what this means for this site, and the YouTube content I’ve been producing lately, is that there will almost certainly be a gap in upcoming content due to the impacts of this storm. As things stand, there are already videos scheduled to be published through September 9th, but beyond that I’m not sure how long the gap will be.
I’ll continue to update this post as time and information allows.
The other day, I was thinking about how as a digital photographer I’ve lost a little bit of the inherent understanding of my medium that film photographers had. When film is your medium, you’re always reminded of just what exactly is being captured and produced when you make an image. You can’t avoid seeing just how small the image really is.
However, with digital photography, the size of the sensor isn’t something I have to deal with on a day-to-day basis. Sure, like any competent photographer, I know that the bigger the sensor, the more light it collects, and I know how sensor size affects depth of field. However, the true size of the image I’m dealing with is largely lost in the day-to-day workings.
So let’s have a quick look at just how big the images we start from really are.
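To put some rough numbers on it, here’s a back-of-the-envelope calculation for a hypothetical 24 MP full-frame camera (36 × 24 mm sensor, roughly 6000 × 4000 pixels — the exact pixel counts vary by model, so these figures are illustrative):

```python
# Back-of-the-envelope sensor math for a hypothetical 24 MP full-frame body.
sensor_w_mm, sensor_h_mm = 36.0, 24.0  # full-frame sensor dimensions
px_w, px_h = 6000, 4000                # ~24 megapixels

area_cm2 = (sensor_w_mm / 10) * (sensor_h_mm / 10)
pitch_um = sensor_w_mm / px_w * 1000   # pixel pitch in microns

print(f"Sensor area: {area_cm2:.1f} cm^2")  # 8.6 cm^2
print(f"Pixel pitch: {pitch_um:.0f} um")    # 6 um per pixel
```

That 8.6 cm² frame is the entire starting point for a print that might end up hundreds of times larger, which is exactly the kind of scale a film shooter is reminded of every time they load a roll.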
I’ve written before that there’s utility in ubiquity. That is, when something is available everywhere, that availability is itself worth something.
This kind of utility is at the heart of one of my bigger criticisms of Nikon’s Z series mirrorless cameras, or at least their choice of XQD for storage.
It’s also the reason the bulk of my storage is now in SD cards and not compact flash. That’s true even though I vastly prefer the physical size of CF cards, and even though they’re up to 50% faster in my cameras. Ubiquity means that I can read an SD card in my laptop, or buy a cheap reader pretty much anywhere. Never mind that it means I pay about half as much per GB for SD cards as I do for compact flash.
However, today I want to talk about image formats, specifically WebP.
My interest in WebP, and to a lesser extent other modern lossy compression formats (such as the HEIF format that Apple is using on new iPhones), mainly lies in their potential for reducing bandwidth and page load times on my website here. As a photographer, image quality is important to me. At the same time, though, smaller images improve page load speeds and cut down on the amount of bandwidth used by the page (a double plus in my book).
The problem is, as usual, support, or as I put it earlier ubiquity.
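The usual workaround for that support gap is serving WebP only to clients that say they can handle it. Here’s a minimal sketch of that negotiation in Python, keying off the HTTP Accept header; the function name and the JPEG-fallback policy are my own illustration, not any particular server or framework’s API:

```python
def pick_image_format(accept_header):
    """Return the image format to serve based on the client's Accept header.

    Browsers that support WebP advertise it explicitly (e.g.
    'image/webp,image/apng,image/*'), so the absence of that token is a
    reasonable signal to fall back to a universally supported JPEG.
    """
    if "image/webp" in accept_header:
        return "webp"
    return "jpeg"

print(pick_image_format("image/webp,image/apng,image/*,*/*;q=0.8"))  # → webp
print(pick_image_format("image/png,image/*;q=0.8,*/*;q=0.5"))        # → jpeg
```

In practice this means generating and storing two copies of every image, which is part of why a format only being *mostly* supported still carries a real cost.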
I recently got back from a trip to Yosemite National Park to do some photography. Like most trips, I ran into some problems and learned a few things on this outing and I wanted to talk about these lessons.
First, a quick bit of back story. I’m increasingly shooting panos and HDRs as part of my work, and these images require post-processing. Having been burned by messing up constituent images in the past, when I’m on a long enough trip (i.e., more than a day) I like to have my laptop with me so I can check and stitch images, and potentially go back and reshoot something while that’s still possible.
To do this, however, I need a card reader, and with that, the requisite cables to connect it to my computer. In this instance, due to a last-minute change in gear, I pulled the wrong cable, my USB cable, out of my bag of cables and left it at home. That rendered the computer dead weight, and made it impossible to effectively evaluate images in the field while there was still a possibility of reshooting them.
Okay, maybe that’s a bit hyperbolic. The USB-C connector itself is not a bad idea, in fact, I really like the connector. A reversible, highly reliable connector is a great thing not just for mobile devices but for pretty much everything.
However, that’s about the end of where I find the situation tolerable. I understand the USB working group’s desire to make it the connector to end all connectors.
The problem with USB-C is that it breaks a fundamental part of good design: discoverability. That is, it should be immediately obvious to users what is supported without having to either guess, or plug something in and find out if it works, doesn’t work, or works at some reduced level of functionality.
I was worried when they announced all of the potential alt modes (TB3, HDMI, DisplayPort, etc.) that it would become confusing for consumers. I thought, at the time, that I would be able to keep things straight.
Unfortunately, that didn’t happen at all. Most consumers, in my experience at least, have no clue about what’s going on with their USB-C ports, and even less of a clue why stuff does or doesn’t work when they plug it in. Never mind that thanks to its use on laptops in lieu of dedicated (and in some cases better, such as MagSafe) connectors it’s basically becoming a glorified charging cable in many respects.