Tuesday, April 8, 2008

HDR quick grabs

Digital SLRs have sensors whose analog output is converted to digital information, generally at 12 or 14 bits per pixel. The common analogy is to think of each photosite on the sensor as a bucket that can hold a certain number of electrons, collected from the light passing through the red, green, or blue filter sitting over it. Once the bucket overflows, you have a blown-out highlight at that location. (Technically, the sensor doesn't have pixels; each photosite records only one color, and the full RGB value for each pixel of the image is interpolated from the photosites around it.) The larger the sensor on a camera, the larger each photosite can be for a given pixel count, and the more electrons it can hold. That's why a 6 MP image from a DSLR is generally going to look far better, with far less noise, than a 6 MP image from a small point-and-shoot.
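If the bucket analogy helps, here's a toy sketch of it in Python. The full-well capacity and bit depth are made-up round numbers for illustration, not any real camera's specs:

    # Toy sketch of the "bucket" analogy -- illustrative numbers only.
    FULL_WELL = 40_000      # electrons one photosite's bucket can hold
    ADC_BITS = 14           # the camera digitizes the charge at 14 bits

    def photosite_reading(electrons):
        """Clip at the bucket's capacity, then quantize to a 14-bit value."""
        clipped = min(electrons, FULL_WELL)   # overflow = blown highlight
        return round(clipped / FULL_WELL * (2 ** ADC_BITS - 1))

    print(photosite_reading(10_000))   # a mid-tone reading
    print(photosite_reading(90_000))   # pegged at 16383: blown out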

Anyway, you have 12 to 14 bits of information converted from each photosite. When that's converted to JPG, you lose some information, because a JPG stores only 8 bits per channel. You may lose some detail in a highlight, or a shadow might be clipped to black. If you shoot in RAW format, the camera keeps the full information coming off the sensor, without squeezing it down to 8-bit values.
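A rough way to see the squeeze: 2^14 = 16,384 raw levels have to fit into 2^8 = 256 JPG levels per channel, so distinct sensor readings collapse together. Here's a simplified sketch (real JPG encoding uses a proper tone curve, not just plain gamma, but the effect is the same):

    def to_jpg_level(raw, raw_bits=14, gamma=2.2):
        """Gamma-encode a linear raw level into an 8-bit value (simplified)."""
        linear = raw / (2 ** raw_bits - 1)
        return round(linear ** (1 / gamma) * 255)

    # Two shadow readings the sensor could tell apart...
    print(to_jpg_level(40), to_jpg_level(41))   # ...land on the same JPG value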

What HDR processing does is expand the working space to 32 bits per channel, then combine bracketed images to fill in information that would be missing from any single shot. (Either the highlights overflowed or the shadows weren't exposed long enough to come out.) Once you've done this, you can use software to choose how you want these values mapped back down to viewable 8-bit or 16-bit space. The result is a compression of the scene's overall dynamic range into values that can be displayed in an image, much closer to the way the human eye, with its far wider range, registers the scene.
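In code, the core idea looks roughly like this. It's a bare-bones sketch that assumes the frames are already aligned, linear, and scaled to [0, 1], and the tone mapper is the simplest global Reinhard-style curve, not anything Photomatix actually does:

    import numpy as np

    def merge_to_hdr(frames, exposure_times):
        """Combine bracketed frames into a 32-bit float radiance estimate."""
        total = 0.0
        weights = 0.0
        for img, t in zip(frames, exposure_times):
            # Trust mid-tones; nearly ignore pixels clipped dark or bright.
            w = np.clip(1.0 - 2.0 * np.abs(img - 0.5), 0.001, None)
            total += w * (img / t)   # each frame votes on the scene radiance
            weights += w
        return (total / weights).astype(np.float32)

    def tone_map(hdr):
        """Compress the radiance map back into displayable 8-bit values."""
        ldr = hdr / (1.0 + hdr)      # simple global operator: L / (1 + L)
        return (ldr * 255).astype(np.uint8)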

Last night I took a series of photos from a terrace in Tudor City. The first shows what a typical photo of the Empire State Building would look like. The tower is probably eight or nine stops brighter than the nearby buildings, so you can't get both the tower and the buildings properly exposed.
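To put that gap in numbers (each stop is a doubling of light):

    print(2 ** 8)   # 8 stops: the tower is roughly 256x brighter
    print(2 ** 9)   # 9 stops: roughly 512x brighter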

That was the JPG out of the camera. I took the RAW file (with its 12 to 14 bits of data per channel instead of the JPG's 8) and ran it through Photomatix Pro. It expanded the image to 32 bits per channel and then let me remap the tones. I overdid the color saturation a bit, but you can see how much more information is stored in the original capture versus what you get from the JPG.
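The single-file version of this is sometimes called pseudo-HDR: no merging, just promote the data to floating point and remap the tones. As a sketch, assuming an HxWx3 16-bit array, with a crude saturation knob standing in for Photomatix's sliders (the part I overdid):

    import numpy as np

    def pseudo_hdr(raw16, gamma=2.2, saturation=1.3):
        """Promote 16-bit data to 32-bit float, lift shadows, boost color."""
        img = raw16.astype(np.float32) / 65535.0   # 16-bit -> 32-bit float
        img = img ** (1.0 / gamma)                 # brighten the shadows
        gray = img.mean(axis=2, keepdims=True)     # rough per-pixel luminance
        img = gray + (img - gray) * saturation     # easy to overdo this part
        return (np.clip(img, 0.0, 1.0) * 255).astype(np.uint8)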

I also bracketed a series of four photos, ranging from so underexposed that you couldn't see much beyond the detail of the tower, up to exposed enough that you could see the buildings but the tower was blown out. I then used Photomatix to pull the image data from all four JPGs into a single HDR file, and then mapped it back down to 16-bit space. (And then to 8-bit.)

This was a handheld bracket, so there was a bit of misalignment and probably some camera shake as well. It would look a lot better shot from a tripod. But note how you can now see the detail in the nearby buildings as well as the far brighter ESB tower.
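For what it's worth, OpenCV's HDR tools can do the same align-merge-remap dance; here's a sketch (the filenames and exposure times are made up, and the alignment step is the part that rescues handheld brackets):

    import cv2
    import numpy as np

    files = ["esb_-2ev.jpg", "esb_-1ev.jpg", "esb_0ev.jpg", "esb_+1ev.jpg"]
    images = [cv2.imread(f) for f in files]
    times = np.array([1/250, 1/125, 1/60, 1/30], dtype=np.float32)

    # Nudge the handheld frames into register before merging.
    cv2.createAlignMTB().process(images, images)

    # Merge the brackets into a 32-bit floating-point radiance map...
    hdr = cv2.createMergeDebevec().process(images, times)

    # ...then tone-map it back down to a displayable 8-bit image.
    ldr = cv2.createTonemapReinhard(gamma=2.2).process(hdr)
    cv2.imwrite("esb_hdr.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))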

I've taken a number of other HDR landscapes, all available in this collection as well.

2 comments:

Unknown said...

Very cool.

You can use Auto-Align in Photoshop and then Merge to HDR for your handheld shots. Also, you can do the auto-align and then mask out the layers by hand, so you get the 'important' stuff at each exposure.

Joshua Trupin said...

I used auto-align in Photomatix. I was never able to get the hang of HDR in Photoshop, but Photomatix is quite simple to use. Sometimes the results are great, sometimes they aren't.