Storage is one of the biggest unanticipated hurdles I’ve come across in shooting and working with video. And not in the field, actually; my biggest problems are storing and backing up the video I shoot in a reasonable way.
This is a simple hack that uses a colorimeter with an ambient light reading mode, such as the i1 Display 2 or ColorMunki Display, together with the ArgyllCMS package’s spotread tool, as an incident light meter.
To convert camera settings to EV, use my exposure settings to EV conversion tool.
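Since an ambient reading comes back in lux, the conversion to EV at ISO 100 is straightforward. Here’s a minimal sketch, assuming the standard incident-meter relation E = 2.5 · 2^EV lux at ISO 100 (i.e., a calibration constant C = 250 for a flat receptor); the function name is mine:

```python
import math

def lux_to_ev100(lux: float) -> float:
    """Convert an ambient illuminance reading (lux) to EV at ISO 100.

    Uses the standard incident-meter relation E = 2.5 * 2**EV lux,
    which follows from a flat-receptor calibration constant C = 250.
    """
    return math.log2(lux / 2.5)

# Example: bright overcast daylight around 2560 lux is roughly EV 10.
print(lux_to_ev100(2560.0))  # 10.0
```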
It’s another year, and the peak of another hurricane season, and living in South Florida means there’s another hurricane headed our way.
As things currently stand, it’s still too early to know how things are going to shake out. The National Hurricane Center is currently predicting that a major hurricane in the category 3 range, with winds around 115 MPH, will make landfall somewhere along the Florida peninsula. With the storm still more than 3 days away, though, its actual strength and landfall location are far from clear.
My family and I have already started implementing our hurricane plans and should be as prepared as we can be by the time the storm arrives. However, as you can imagine, this is also an incredibly stressful time for us, and as they say, the best laid plans never survive contact with the enemy.
Ultimately, what this means for this site, and for the YouTube content I’ve been producing lately, is that there will almost certainly be a gap in content due to the impacts of this storm. As things stand, videos are already scheduled to be published through September 9th, but beyond that I’m not sure how long the gap will be.
I’ll continue to update this post as time and information allows.
The other day, I was thinking about how as a digital photographer I’ve lost a little bit of the inherent understanding of my medium that film photographers had. When film is your medium, you’re always reminded of just what exactly is being captured and produced when you make an image. You can’t avoid seeing just how small the image really is.
However, with digital photography, the size of the sensor isn’t something I really have to deal with on a day-to-day basis. Sure, like any competent photographer I know that a bigger sensor collects more light, and I know how sensor size affects depth of field. However, the true size of the image I’m dealing with is largely lost in the day-to-day workings.
So let’s have a quick look at just how big the images we start from really are.
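To make the comparison concrete, here’s a quick sketch of the physical area of a few common sensor formats; the dimensions are the commonly quoted nominal values, so treat them as ballpark figures rather than exact specifications:

```python
# Commonly quoted nominal dimensions (mm) of popular sensor formats.
SENSORS = {
    "Full frame": (36.0, 24.0),
    "APS-C (Canon)": (22.3, 14.9),
    "Micro Four Thirds": (17.3, 13.0),
    "1-inch type": (13.2, 8.8),
}

def area_cm2(width_mm: float, height_mm: float) -> float:
    """Sensor area in square centimeters."""
    return width_mm * height_mm / 100.0

for name, (w, h) in SENSORS.items():
    print(f"{name}: {w} x {h} mm = {area_cm2(w, h):.2f} cm^2")
```

Even the biggest of these, a full-frame sensor, covers less than 9 square centimeters, which is a useful reality check on just how small the starting image actually is.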
I’ve written before that there’s utility in ubiquity. That is, when something is available everywhere, that availability itself is worth something.
This kind of utility is behind one of my bigger criticisms of Nikon’s Z series mirrorless cameras, or at least of their choice of XQD for storage.
It’s also the reason the bulk of my storage is now in SD cards and not CompactFlash, even though I vastly prefer the physical size of CF cards and they’re up to 50% faster in my cameras. Ubiquity means that I can read an SD card in my laptop, or buy a cheap reader pretty much anywhere. Never mind that I pay about half as much per GB for SD cards as I do for CompactFlash.
However, today I want to talk about image formats, specifically WebP.
My interest in WebP, and to a lesser extent in other modern lossy compression formats (such as the HEIF format that Apple is using on new iPhones), lies mainly in reducing bandwidth and page load times on my website here. As a photographer, image quality is important to me. At the same time, though, smaller images improve page load speeds and cut down on the amount of bandwidth the page uses (a double plus in my book).
The problem is, as usual, support, or as I put it earlier ubiquity.
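Until support is universal, the practical workaround is content negotiation: serve WebP only to clients that say they can decode it. Browsers that support WebP advertise it in their HTTP Accept header, so the server-side decision can be as simple as this sketch (the function name and URL scheme here are mine, not from any particular framework):

```python
def pick_image_variant(accept_header: str, base_path: str) -> str:
    """Return the WebP variant only if the client accepts image/webp.

    The Accept header is a comma-separated list of media types, each
    optionally followed by ;q= parameters, which we strip for matching.
    """
    offered = {part.split(";")[0].strip() for part in accept_header.split(",")}
    if "image/webp" in offered:
        return base_path + ".webp"
    return base_path + ".jpg"

# A WebP-capable browser includes image/webp in its Accept header:
print(pick_image_variant("image/webp,image/apng,*/*;q=0.8", "/photos/yosemite"))  # /photos/yosemite.webp
# One that doesn't gets the universally supported JPEG:
print(pick_image_variant("image/jpeg,image/png", "/photos/yosemite"))  # /photos/yosemite.jpg
```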
I recently got back from a trip to Yosemite National Park to do some photography. Like most trips, I ran into some problems and learned a few things on this outing and I wanted to talk about these lessons.
First, a quick bit of backstory. As part of my work, I’m increasingly shooting panos and HDRs, and these images require post-processing. Having been burned by messing up constituent images in the past, on any long enough trip (i.e., more than a day) I like to have my laptop along so I can check and stitch images, and potentially go back and reshoot something while that’s still possible.
To do this, however, I need a card reader, and with that the requisite cables to connect it to my computer. In this instance, due to a last-minute change in gear, I pulled the wrong cable, my USB cable, out of my bag of cables and left it at home. That rendered the computer dead weight and made it impossible to evaluate the images effectively in the field while there was still a possibility of reshooting.
Okay, maybe that’s a bit hyperbolic. The USB-C connector itself is not a bad idea, in fact, I really like the connector. A reversible, highly reliable connector is a great thing not just for mobile devices but for pretty much everything.
However, that’s about the end of where I find the situation tolerable. I understand the USB working group’s desire to make it the connector to end all connectors.
The problem with USB-C is that it breaks a fundamental part of good design: discoverability. It should be immediately obvious to users what is supported, without having to guess or plug something in and find out whether it works, doesn’t work, or works at some reduced level of functionality.
I was worried, when they announced all of the potential alt modes (TB3, HDMI, DisplayPort, etc.), that things would become confusing for consumers, though I thought at the time that people would be able to keep things straight.
Unfortunately, that didn’t happen at all. Most consumers, in my experience at least, have no clue about what’s going on with their USB-C ports, and even less of a clue why stuff does or doesn’t work when they plug it in. Never mind that thanks to its use on laptops in lieu of dedicated (and in some cases better, such as MagSafe) connectors it’s basically becoming a glorified charging cable in many respects.
I just finished writing an article looking at MJPEG and H.264 and how they compare to each other in at least one test scene. In that article, I found myself wanting to write a section about why Canon may have chosen the codecs they did when they did; only it didn’t really fit, and as I wrote it, it got really long.
Canon has definitely received a lot of flak across the internet for the way their cameras, at least the 5D mark IV and 1DX mark II, and to a lesser extent the EOS 1DC, record 4K video. Broadly, the complaints seem to come down to two things.
- The files are huge
- They shouldn’t have used MJPEG for “reasons”
Unfortunately, when it gets to those reasons, things get problematic. The biggest implication seems to be that if Canon had used H.264 instead of MJPEG, the files would have been both better and substantially smaller.
The problem is, I’ve had a very hard time finding meaningful comparative tests of the two codecs on equal footing. There’s a lot of “but H.264 is better,” but I couldn’t actually find much in the way of head-to-head comparisons done against the same reference files to show by how much and under what conditions.
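To put the “files are huge” complaint in concrete terms, the arithmetic is simple. The ~500 Mbps figure below is the commonly cited bitrate for the 5D mark IV’s 4K MJPEG; the 100 Mbps H.264 figure is purely a hypothetical point of comparison, not a measured result:

```python
def gb_per_minute(bitrate_mbps: float) -> float:
    """Approximate file size in GB for one minute of footage at a given bitrate.

    Mbps -> bits/s -> bits/min -> bytes/min -> GB/min.
    """
    return bitrate_mbps * 1e6 * 60 / 8 / 1e9

# ~500 Mbps MJPEG (commonly cited for the 5D mark IV's 4K mode)
print(f"{gb_per_minute(500):.2f} GB/min")  # 3.75 GB/min
# A hypothetical 100 Mbps H.264 stream for comparison
print(f"{gb_per_minute(100):.2f} GB/min")  # 0.75 GB/min
```

At roughly 3.75 GB per minute, a 32 GB card holds well under ten minutes of footage, which is why the file sizes attract so much criticism.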
With that said, this is something that’s been bugging me for a while. On one hand, I’m very much in the group that would have liked to have had smaller 4K files on my 5D mark IV. On the other hand, my background in computer engineering tells me that it may not have been possible with the time and processing equipment at hand.
Oh, yeah, this article is a bit of a ramble on cameras and engineering in general. I’d apologize, but the reality is you can’t condense a complex discussion into 500 words without losing all of the complexity.
So Canon has finally announced their full-frame mirrorless camera, the EOS R.
Personally, I think Canon is damned if they do and damned if they don’t here. No matter what Canon releases, people are going to be disappointed and upset, and more than likely the media is going to come down on them as if they’re the most incompetent dunderheads in the world. That’s already the case if you keep up with the rumor mill.
As for me, I’ve said in the past two rudderless mirrorless rambles that I’ve become less of a fan of mirrorless cameras than I was in the past. A year ago this wasn’t the case, but in getting back to my roots with wildlife photography, I’ve found that their inherently higher power consumption is a big negative for me.
Moreover, while I understand the benefits of using the sensor for everything, I don’t see it as such an amazing point of awesome that I want to toss my antiquated junk DSLRs in the garbage and buy shiny new, slightly different, mirrorless cameras to replace them. Consequently, while I’m pretty sure I’ll pick up a mirrorless camera sooner or later, I see it not as a replacement for my existing DSLRs but as a complement to them.