Over the last weeks of 2016, I’ve been working on a lot of behind-the-scenes upgrades to my site here. About half of those changes have been tweaks and improvements that make things more efficient. The other half has been a bit of an overhaul of the theme, layout, and front end.
While I could go to great lengths about the ins and outs of the design change and the code behind the scenes, that’s not really the purpose here. Instead, I want to talk about one specific area: displaying images on a computer, or rather, how I’m increasingly finding computers a poor way to display images.
For a while now, my gallery has been somewhat unloved, and I haven’t been especially happy with its presentation either. One of my objectives with this update cycle, though I didn’t know it when I started, was to do something about the gallery to make it more appealing to me.
When the megapixel wars were in full swing, it wasn’t uncommon to find photographers pooh-poohing the race to add more pixels. While there certainly is some truth to that point of view, it’s not the whole story on pixels.
The counterpoint to the fewer-but-better-pixels argument is that more pixels really do capture more detail. Moreover, even if they’re noisier at the pixel level, in the resulting images the noise becomes finer grained and, as a result, less distracting.
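The averaging effect behind that finer-grained-noise claim is easy to sketch numerically. This is a toy illustration with made-up noise values, not a sensor simulation: when an image is viewed or printed at a smaller size, groups of pixels get averaged together, and averaging n independent noisy samples cuts the noise standard deviation by roughly √n.

```python
# Toy sketch (hypothetical numbers): model per-pixel sensor noise as
# Gaussian, then downsample 2x2 by averaging, as happens when the image
# is viewed at a smaller size. Averaging 4 pixels should roughly halve
# the noise standard deviation.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(loc=0.0, scale=1.0, size=(1000, 1000))

# Average each 2x2 block of pixels down to a single pixel.
down = noise.reshape(500, 2, 500, 2).mean(axis=(1, 3))

print(round(float(noise.std()), 2))  # per-pixel noise, about 1.0
print(round(float(down.std()), 2))   # after averaging 4 pixels, about 0.5
```

So even though the individual pixels are noisier, at any fixed output size the higher-resolution capture ends up with finer, less objectionable noise.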
The reality is that, so long as the increase in pixels isn’t causing a problem for other needs — for example, you aren’t limited to 3 FPS when you need 10 FPS to do your job — having more pixels makes better images overall.
However, there are a number of places where this does fall apart to some degree. And ultimately this is where the real balance has to be struck between more pixels, better pixels, and marketing numbers that sell the next generation of cameras.
I don’t generally make a point of posting updates about the goings and comings of content on this site, and in the past I’ve disappeared for months at a time with nary a word. However, this year I’ve been pretty consistently putting up at least something weekly.
That said, since I started Points in Focus, I haven’t been in the sights of a Category 4 hurricane either.
In fact, South Florida as a whole hasn’t seen a significant hurricane in over a decade.
So here’s the deal as far as content goes for October and November.
I already noted in my podcast a couple of weeks ago that I would be backing off on normal written articles for a while. So that’s pretty much covered. I already had a lot on my plate for the second half of October and pretty much all of November, so I wasn’t planning on getting anything significant out over that time period anyway.
On the other hand, I had intended to keep doing my weekly podcast through most of this time. I’ve already recorded and uploaded the podcast for Thursday, October 6, 2016. It will publish at its usual time.
Going forward, however, things depend entirely on the effects of Matthew on South Florida. Right now, by my estimates, the best-case scenario I see is that my next podcast will be published on or after November 3rd, 2016. However, depending on the exact situation with Matthew, resumption of content may be considerably further in the future.
I’ll try to keep this post updated if things change significantly in the coming days.
Update 2016-10-14: The good news, for me and my family at least, was that Hurricane Matthew passed far enough northeast of us that we didn’t receive anywhere near the winds and weather that I had anticipated, even in the best case. In fact, while we have notoriously unreliable power when there’s any kind of wind involved, we didn’t even lose power here.
That said, while we weathered the storm much better than I had expected, between the mental and physical stress of preparing for Matthew, and the process of moving all the crap we moved inside back outside to where it normally lives, I’ve decided to stick with my previous schedule of posting my next podcast on the 3rd of November.
For the last year or so I’ve been talking about features I’d love to see implemented in camera gear. This time I want to talk about having a proper continuous modeling light built into a hotshoe flash.
Before I get into the meat of my idea, the first question one might ask is simply, what’s my problem with the status quo?
After all, most hotshoe flashes, in addition to being a flash, have some form of modeling capability (by strobing the flash tube), along with some form of autofocus-assist capability (typically with red or near-IR LEDs). And these do provide the intended functionality to a large degree.
So what does having an actual LED modeling light enable that having IR autofocus LEDs or strobing the flash tube doesn’t?
I have two main use cases where I think this would be a beneficial idea. First is for people who need to shoot video. Specifically I’m thinking about photojournalists, who are increasingly being asked to get video clips for their publication’s website, but the idea applies elsewhere too. With things as they are now, you ideally need a speed light for stills in a wide range of conditions, and a separate video light of some sort — at least potentially — for video. With my idea, you’d just have your speed light.
The second case where I think it would be useful is in straight-up using speed lights as a replacement for studio strobes. Admittedly this is also a personal point for me. I have nothing against studio strobes, I just never went down that path. However, I do a lot of product photography for this site, where I use speed lights in various modifiers (e.g., the Lastolite EzBox Hotshoe).
The thing with studio shooting is that you don’t want ambient light contaminating the colors in the images you make. That means generally you keep the studio dark so the only light comes from the strobes themselves. With speed lights, that also means you don’t have much if any light illuminating the subject matter for focusing and composition.
Okay, so clearly the idea here is that I’d like to see a continuous LED modeling light in a flash. Canon already did something kind of like this in their Speedlite 320EX. Well, except not really. Yes, they put an LED on the light, but it’s not integrated with the zoom head, so it doesn’t actually model what the flash would do. Moreover, as far as I know, it doesn’t track flash power, or have any kind of power adjustment at all.
This is a drum that bears beating as often as possible.
Back up your data.
I was reminded of this point again recently, when a friend’s granddaughter lost all, or at least most, of her college schoolwork and pictures when the drive in her laptop died. And with no backups, I don’t even want to think about what that must have been like for her.
In my experience doing IT consulting work in the past, backups are one of the first things that get overlooked, ignored, or forgotten about. This is even more true when the backup requires some level of user interaction.
Proving my point, when I mentioned the above incident to my parents, my mom realized that she wasn’t backing up her computer at work. It was originally set up so that it could be backed up, but the act of plugging in a USB stick and running the backup command… Well, let’s just say it fell through the cracks.
At the same time, backups need to be taken offline to be truly safe. If you’re backing up to, say, a USB disk, and it’s still connected to the computer, the chance is always there that you might accidentally delete the backup. Or one of the crypto-ransomware viruses could encrypt it, rendering it useless. Moreover, if the backup disk is still attached to power, there’s always the possibility that a power surge or lightning strike could destroy it.
In short, if the disk is plugged in, it’s not a good backup, since it could be erased or killed by a power surge. But if the disk is left unplugged, then the act of having to plug it in becomes a hurdle to getting people to actually back things up.
Ultimately, I have three points in writing this:

1. Remind people to back up their data.
2. Remind people who are backing up their data to make sure their backups are actually working.
3. For those who aren’t backing up their data, offer up some options that they might find useful to get started.
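On the second point, verifying that a backup is actually current doesn’t have to be elaborate. Here’s a minimal sketch of the idea: check how old the newest file in the backup location is, and complain if it’s stale. The backup path and the 7-day threshold are hypothetical; adjust both for your own setup.

```python
# Minimal backup-freshness check (sketch). The backup directory path and
# the 7-day staleness threshold below are hypothetical examples.
import os
import time

def backup_age_days(backup_dir):
    """Return the age in days of the newest file under backup_dir.

    Returns infinity if the directory is missing or contains no files,
    which should also be treated as "no usable backup".
    """
    newest = 0.0
    for root, _dirs, files in os.walk(backup_dir):
        for name in files:
            newest = max(newest, os.path.getmtime(os.path.join(root, name)))
    if newest == 0.0:
        return float("inf")
    return (time.time() - newest) / 86400.0

if __name__ == "__main__":
    age = backup_age_days("/Volumes/Backup")  # hypothetical mount point
    if age > 7:
        print("WARNING: backup is %.1f days old (or missing)" % age)
    else:
        print("Backup is current (%.1f days old)" % age)
```

Run something like this on a schedule, and the “I thought it was backing up” failure mode at least gets noticed before a drive dies.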
A couple of weeks ago, when I was talking about the D500 again, I started to develop the idea that we photographers should be demanding more from the companies that make our cameras. At that time I was just thinking about quality standards. That got me thinking. Better quality is great, but you still need to understand how the camera behaves to use it optimally.
When digital sensors replaced film, the cameras became more than just a vessel to hold the image forming material. They literally became the film too. Yet when this transition happened, we lost information that we previously had at our disposal too.
If you’ve never shot film, or at least seriously investigated it, you’ve probably never seen the graphs below.
These graphs are the characteristic curves for a piece of film, in this case Kodak’s now discontinued Kodachrome 64. More importantly, they tell you almost everything you could want to know about the film and its capabilities.
I’ve been talking a lot recently about camera acclimation testing, and getting to know your camera. In that same vein, I want to talk for a moment about personal minimums for image quality. I’ll be honest, I never thought a lot about how far I could push the quality before the results became unacceptable.
Moreover, I need to be right up front about this. I can’t tell you what you should consider the worst quality you’ll accept. What I can do is point at some areas that I’ve found illuminating or instructive and maybe that can lead you to figuring things out a bit better.
My first real exposure to just how far I could take things was back when I was shooting waterfalls in Alaska at ISO 1250 and 1600. I figured I wasn’t going to get anything worth keeping, but I had no idea if or when I’d have the opportunity again, so I shot anyway. I printed one of those images just for kicks. The quality isn’t great at nose-limited distances, but it was perfectly fine at more reasonable viewing distances. Shocker! ISO 1250–1600 on the 5D mark III isn’t as unusable as I had thought it was.
Noise is pretty much the enemy of everything in digital photography. Increasing noise levels limit dynamic range and color fidelity, and even cause compressed image files to become larger. On top of all that, noise also reduces the effective resolving power of the camera. And that’s what I want to talk about a little today, or at least put up some practical illustrations of.
Of course, the entire premise here ought to be pretty obvious to anybody who’s ever really looked at their images from higher ISOs. They just aren’t as sharp and don’t have the fine detail that lower ISO images have.
Unfortunately, this is also a place where very few reviewers ever really provide any kind of quantitative measurements. That, of course, means that we photographers have to do the measurements ourselves, or just ignore the situation as a whole.
Figuring this stuff out, at least to a level that I’m satisfied with, is part of my camera testing and acclimation procedure. However, I’m also adding this information as part of my standard review data sets, and will likely be updating old camera reviews to include it when I get a chance.
I used my custom test target for these tests. You can find a download and an overview of that target here.
With that said, before I dive into the data, I do need to get a couple of things out of the way.
Since my trip to Alaska in 2015, geotagging is something I’ve become increasingly interested in. There were a number of images where, in review, I would have loved to know where I was when I took them. Unfortunately, barring some specific circumstances, one fjord looks largely like another, and tagging after the fact in Lightroom isn’t really feasible for a lot of the images.
Of course, the easy way to geotag is to have a camera that’s capable of handling it on its own. This was one of the big selling points I saw in the Canon EOS 1DX mark II, and one of the features I was disappointed to find missing from the Canon 80D, as well as the newest Nikon bodies, the D5 and D500.
Barring a built-in GPS, a compatible external GPS unit that can feed data directly into the camera is worth a thought — perhaps even a real long, hard thought.
At $250, Canon’s GP-E2 is rather expensive for what it is, and while it can communicate with most modern Canon cameras, including my 5D mark III, through the hot shoe, that means it has to be in the hot shoe.
There are two huge positive points for using a GPS that can tag images directly in the camera. First is that you don’t have to worry about timing at all. When you take a photo, the camera writes the current coordinates out to the image and it’s properly tagged. There’s no messing with time zones or time offsets or keeping clocks in sync, and honestly, this alone might be worth the price of entry.
The second positive point, at least in the case of Canon’s GP-E2, is that it has a digital compass, and it logs not only location but direction. Admittedly, Nikon users with the GP-1A unit won’t have that feature. And honestly, I’m not sure how important compass data really is in the grand scheme of things.
Moving even further away from automated in-camera tagging is running a GPS unit that does track logging. Using an external logging unit requires the most fiddling, but it has the potential to be the cheapest option as well. And of course, with an external device, the whole time-syncing situation crops up — a problem that’s exacerbated by the decisions made by both standards bodies (EXIF) and software developers.
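The actual matching step, once the clocks are reconciled, is conceptually simple: look up the photo’s timestamp in the track log and interpolate between the two nearest fixes. Here’s a rough sketch of that idea; the track points and times are made-up illustrative data, and real code would parse a GPX log and correct for the camera-clock offset first.

```python
# Sketch: geotag a photo by interpolating a GPS track log at the photo's
# timestamp. Track data below is hypothetical; real logs come from a GPX
# file, and the photo time must already be corrected to GPS time.
from bisect import bisect_left

# (unix_time, latitude, longitude) fixes, sorted by time.
track = [
    (1000.0, 58.300, -134.400),
    (1060.0, 58.310, -134.420),
    (1120.0, 58.330, -134.450),
]

def locate(track, t):
    """Linearly interpolate a (lat, lon) position for time t, clamping at the ends."""
    times = [p[0] for p in track]
    i = bisect_left(times, t)
    if i == 0:
        return track[0][1:]     # photo taken before the log started
    if i == len(track):
        return track[-1][1:]    # photo taken after the log ended
    t0, lat0, lon0 = track[i - 1]
    t1, lat1, lon1 = track[i]
    f = (t - t0) / (t1 - t0)
    return (lat0 + f * (lat1 - lat0), lon0 + f * (lon1 - lon0))

lat, lon = locate(track, 1030.0)  # a photo taken halfway between two fixes
print(round(lat, 3), round(lon, 3))
```

This is essentially what Lightroom’s track-log geotagging does for you, and it makes clear why clock sync matters: a clock that’s off by a minute puts you a full fix or more down the trail.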
The final, and potentially cheapest, option is to use a smart phone and some logging software to do the GPS logging. This, of course, has the advantage of using hardware that you probably already have (your smart phone).
That said, there are two downsides to using a smart phone instead of a dedicated GPS unit. First is the simple fact that it’s going to drain your phone’s battery. If you have a big smart phone (like an iPhone 6s Plus), that may not be all that big of a deal; they have big batteries. If you have a smaller phone, like my iPhone SE, the battery drain can be significant, and all-day use may be out of the question.
The second issue is one of ruggedness. Most standalone GPS units are designed to be used in harsh environments. They’re usually water resistant to some degree, and often are built in rubberized, hardened cases that can take some bumps and drops. Since a GPS receiver needs a reasonably unobstructed view of the sky, it has to be carried out in the open, which makes drops and bumps more of a potential problem.
Almost a decade ago, I started hosting my blog with DreamHost. Needless to say, I’m no longer a DreamHost customer. I had intended to let this change pass without any mention. That was, until I went through DreamHost’s account closure process.
I think many times people on both sides of the fence take these kinds of decisions personally. And many people make decisions like this when they’re angry, immediately in the aftermath of something happening.
I don’t bear any ill will towards DreamHost. While their actions played a large part in why I left, so did their product portfolio at the time, and even my own preferences for how I wanted to do things going forward.
Being passionate about your business is important for small business owners. At the same time, it’s important to recognize that some customers are going to leave. Some will be happy, but external factors demand they go elsewhere. Some will be angry because of something you’ve done, or haven’t done. Some will be in the middle.
Moreover, these kinds of changes are stressful. Certainly to the customer, and potentially to the people who make up the business as well, especially if it’s a small one.
For the business, especially a small business, a customer leaving can raise all kinds of concerns. Why are they leaving? What are they going to tell people about your business? Aside from the direct loss in income, if they start telling people the business sucks, how is that going to affect future income?
In some very real ways, I think it’s a good sign that a business cares enough about their operations that they want to know why a customer is leaving.
At the same time, when a customer is leaving, don’t get in their way or come across as demanding. The soon-to-be ex-customer does not owe you an explanation for why they’re leaving, and if they choose to share any information with you, you should be thankful for it, not demand more.
This was the problem with my parting experience with DreamHost — demands that I answer questions about why I was leaving.
When I closed my account, I was presented with a survey about why I was leaving, where I was going and so forth. A survey that I couldn’t leave empty or partially filled out.
I don’t feel I owe DreamHost an explanation. I don’t feel my clients owe me that explanation either. And as a customer, I don’t like being in a position where I’m being forced to explain myself before I can close my account.
In what’s very likely to be my last interaction with this company, they wouldn’t just let me go in peace. As a customer, this relatively minor action was sufficient to prompt me to write and talk about something that I had no previous intention to. My final experience with DreamHost wasn’t positive, or even neutral. I’m going to remember this, and when people ask me who is a good hosting provider, I’m going to be much more inclined to say nothing, or respond negatively, than to recommend DreamHost now.