Photography

Do we really still need HDR?

A video Wen sent me – high-speed HDR footage by NASA, shot with multiple cameras – prompted me to think about this topic again.

The power of the human eye (and the brain behind it) really never ceases to amaze when you consider what it’s capable of. We can see perfectly well on a bright day, yet see well enough with the same eyes to walk around a dark room at night without bumping into things. At least not too many things. In challenging scenes, like a dim room with a window out to an exceptionally bright day, cameras often fall apart and display a pitch-black interior and an over-exposed outdoor scene; but our eyes seem to render this perfectly, at least as far as our brain is concerned – we can see interior details and still see a normal, sunny day out the window. It’s all quite remarkable.

The fundamental problem above is one of dynamic range – when there’s a huge variation between the brightest and darkest parts of a scene, it can be a real challenge for the camera to deal with. The camera has a limited dynamic range, so it basically has to pick whether to capture shadow detail and blow out the highlights (turning everything white), or to capture the highlights and turn everything in shadows to pitch black. One approach for dealing with this which is becoming increasingly easy – and popular – is High Dynamic Range (or HDR) imaging – in which the camera takes multiple shots at different exposures to increase the total range of light values that can be captured.  So it takes one shot that preserves the highlights, another that captures the shadow detail, and then combines these later (usually in software, but sometimes in camera).
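For the curious, the merge step itself is only a few lines with a library like OpenCV. This is just a minimal sketch under assumptions of my own: the file names and exposure times are made up, and a real bracket would come straight off the camera.

```python
import cv2
import numpy as np

# Hypothetical bracketed shots of the same scene (file names made up for illustration).
files = ["bracket_under.jpg", "bracket_normal.jpg", "bracket_over.jpg"]
exposure_times = np.array([1/250, 1/60, 1/15], dtype=np.float32)  # shutter speeds in seconds
images = [cv2.imread(f) for f in files]

# Recover the camera's response curve, then merge the shots into one
# floating-point HDR image that spans the full range of the bracket.
response = cv2.createCalibrateDebevec().process(images, exposure_times)
hdr = cv2.createMergeDebevec().process(images, exposure_times, response)
cv2.imwrite("merged.hdr", hdr)  # Radiance HDR file, to be tone mapped later

# The HDR image can't be displayed as-is; tone map it back down to 8 bits.
ldr = cv2.createTonemapReinhard(gamma=2.2).process(hdr)
cv2.imwrite("merged_tonemapped.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```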

In reality, the problem is only in part what the camera is capable of capturing; monitors have less dynamic range than cameras, and print has less dynamic range than a monitor.  Wikipedia cites 10-14 stops for the human eye, 11 for DSLRs (less for compact cameras), 9.5 for computer LCDs, and 7 for prints. So a lot of HDR is actually about how to map what the camera captured (whether in a single exposure or with multiple) into what can be displayed or printed. How that mapping is done determines whether the end result is a natural looking recreation of how our eye perceives an image – or a more artistic/dramatic interpretation of things. I’m personally not a big fan of the over-the-top HDR style – it’s just not what a non-photographer like me would use to capture their kids! But I must admit that some images produced this way are pretty interesting.
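To make that mapping concrete: starting from the floating-point HDR image produced by a merge like the one sketched above, the choice of tone mapping operator alone can push the result toward a natural rendering or toward the dramatic look. Another small, hedged sketch with OpenCV (the operator settings are just illustrative):

```python
import cv2
import numpy as np

# Load the floating-point HDR image written by the merge sketch above.
hdr = cv2.imread("merged.hdr", cv2.IMREAD_UNCHANGED)

# A gentle, photographic mapping that tries to stay close to what the eye saw...
natural = cv2.createTonemapReinhard(gamma=2.2).process(hdr)

# ...versus a contrast-heavy mapping that tends toward the over-the-top "HDR look".
dramatic = cv2.createTonemapMantiuk(gamma=2.2, scale=0.9, saturation=1.2).process(hdr)

for name, img in [("natural.jpg", natural), ("dramatic.jpg", dramatic)]:
    cv2.imwrite(name, np.clip(img * 255, 0, 255).astype(np.uint8))
```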

2009

About once a year, I try and generally fail to produce an image using HDR. In part, this is because I’m a non-photographer – I never carry a tripod (which is important if you want multiple exposures of exactly the same thing), and HDR is so rarely useful in pictures of the family that I’m not willing to spend money on HDR tools. I’m sure my lack of knowledge is a bigger factor, of course. My first not-so-successful attempt was in Yosemite, taking a picture of Half Dome:

I consider the attempt not so successful because the end result is just kind of boring. I hadn’t started doing any post-processing back then, but even so, the above image was sort of flat. It wasn’t a total failure, mind you; the original default exposure for the above scene looked like this:

As you can see, Half Dome itself is blown out beyond recognition, yet the trees are still very dark, and only the reflection looks about right. So I guess the HDR version is preferable for remembering what it looked like being there, even if it’s not a particularly compelling photo. It might be of interest that I used a free tool, called Qtpfsgui, which you can find and download online, to create the above. It was pretty straightforward to use, and offered a lot of different tone mapping options. I’m sure that it’s possible to produce better results than the above by using the tool more effectively than I did. It’s a good, cheap way to give things a try!

2010

My 2010 attempt came as I was boarding a flight from Tokyo back to Toronto, as the sun was setting. As with the above example, I only even thought to try HDR because without it, everything was a silhouette against the setting sun:

Besides still being boring, the above fails for another reason – once you bring up the detail in the shadow areas, all the reflections off the glass I shot through become painfully obvious. Since this was literally after showing my boarding pass and on the way to the plane (not the one pictured, mind you), I didn’t really have a chance to get a better picture, go flush up against the glass, etc. And, as usual, I had no tripod, so I was just balancing the camera away from the window itself. For this 2010 attempt, I had gotten suckered into buying Photoshop CS5, which has HDR capabilities – this was my first attempt to use it for that purpose. It was easy to import things; however, I found that the options for tone mapping the HDR were very limited, and I couldn’t produce any sort of interesting results with it. Oh well!

2011

On to 2011! A couple of weeks ago, I was in Naperville, IL – my company has a big office there – and went for a walk since I had to fly in the day before to make an early morning meeting. Another too-much-dynamic-range scenario came up, but this time, even without something to prop my camera up on, I tried doing some hand-held HDR in spite of my past failures. The original, default exposure image was as follows:

The processed image looks a good bit different – and overall, I like it much better:

However, as you probably guessed from the title, this isn’t the story of how I fell in love with doing HDR. Indeed, while I took 3 handheld exposure-bracketed shots of the above, trying to use Photoshop to do HDR with the images failed miserably. First, without any sort of support, the alignment between shots was off by enough that Photoshop’s “auto-align” feature didn’t do a great job of putting things back together. Second, it was windy, so the leaves, and especially the flag, were notably different between shots. And finally, I couldn’t tone map things in any reasonable way using CS5 (my lack of knowledge again, I’m sure).
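For what it’s worth, the free tools can take a decent crack at exactly this alignment problem. Here’s a hedged sketch of what that looks like with OpenCV (not what I actually used here), and note that alignment only fixes the camera shake, not the flapping flag:

```python
import cv2

# Hypothetical hand-held bracket (file names made up for illustration).
files = ["stairs_-2ev.jpg", "stairs_0ev.jpg", "stairs_+2ev.jpg"]
images = [cv2.imread(f) for f in files]

# Median Threshold Bitmap alignment shifts the frames so they line up,
# compensating for small hand-held movement between shots.
cv2.createAlignMTB().process(images, images)  # aligns the list in place

# Mertens exposure fusion then blends the aligned frames directly,
# skipping the separate HDR-merge-plus-tone-mapping step entirely.
fused = cv2.createMergeMertens().process(images)
cv2.imwrite("stairs_fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```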

So what’s the above then? I took the middle exposure in what had been intended for HDR use – and applied a large dose of local adjustments. Essentially, I darkened the sky and increased its clarity, brightened the foreground, especially brightened the middle column of stairs, added a little vignetting, and a few other things. It was actually quite time-consuming to do this; the sky needed about -1.5 stops of adjustment relative to the foreground, but when you make adjustments that big, if you cross even slightly into the building, you see a big black splotch; if you don’t come right up to its edge, you get a noticeable halo. The above isn’t perfect, but it’s good enough for me!
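The idea behind that kind of local adjustment is simple to sketch, even if getting the mask right is the time-consuming part. This is only a rough approximation of what Lightroom’s adjustment brush does, assuming a hypothetical linear-light image and a hand-painted sky mask:

```python
import cv2
import numpy as np

# Hypothetical inputs: the image in linear light as floats in [0, 1],
# and a hand-painted mask where 255 = sky and 0 = foreground.
linear = cv2.imread("stairs_linear.tif", cv2.IMREAD_UNCHANGED).astype(np.float32)
sky_mask = cv2.imread("sky_mask.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# Feather the mask edge: too hard and the building edge goes black,
# too soft and you get exactly the halo described above.
feathered = cv2.GaussianBlur(sky_mask, (0, 0), 15)

# -1.5 stops is a multiplication by 2^-1.5 in linear light,
# blended in only where the mask says "sky".
gain = 2.0 ** -1.5
adjusted = linear * (1.0 + feathered[..., None] * (gain - 1.0))

cv2.imwrite("stairs_adjusted.png", (np.clip(adjusted, 0, 1) * 65535).astype(np.uint16))
```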

The above is possible for one simple reason: the dynamic range of modern DSLRs is simply awesome. The D7000 has 13.9 stops of dynamic range at ISO 100; if you compare that to the 10.3 stops at ISO 200 of the D70 (the great-grandparent of the D7000, released 7 years earlier), it’s like having nearly +/- 2 stops of exposure bracketing built into every single shot. If there’s enough light to shoot at base ISO, there is just so much to work with that doing HDR seems like an unnecessary nuisance. As long as you shot in RAW, that is; with JPEG you’re stuck with the tonal range the camera baked in!
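If the stops comparison feels abstract, the back-of-the-envelope arithmetic (using the figures quoted above) looks like this:

```python
# Each stop doubles the brightest-to-darkest ratio the sensor can capture.
d7000_stops, d70_stops = 13.9, 10.3

print(f"D7000 contrast ratio: {2 ** d7000_stops:,.0f}:1")  # roughly 15,000:1
print(f"D70 contrast ratio:   {2 ** d70_stops:,.0f}:1")    # roughly 1,300:1
diff = d7000_stops - d70_stops
print(f"Difference: {diff:.1f} stops, i.e. about +/- {diff / 2:.1f} stops of bracketing headroom")
```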

Now, if Lightroom could merge multiple RAW images into a single HDR image and then let you do all the usual adjustments, that would still be a real win; sometimes you don’t have enough light to shoot at ISO 100, or the scene has more dynamic range than the one above. But for the most part, I just can’t see myself using any dedicated HDR tools or the built-in HDR functionality in CS5, when the natural dynamic range of the D7000 combined with some local adjustments seems to do all I need!
