DSLR Exposure: ETTR - "Expose to the Right"
I read an article yesterday advocating what the author called "Expose to the Right" exposure settings for digital cameras.
(By the way, almost everyone who tries to justify this technique points to that same article or basically quotes it, at least from my Googling.)
Normally an exposure meter recommends settings that average the scene to about 12-18% reflective gray.
That's supposed to be based on typical scenes or whatever. It's ancient lore. There are corrections for bright backgrounds and whatnot, and cameras are pretty good about it these days.
For shooting on film, the conventional wisdom was that it's best to use the lowest exposure you can get away with and still get adequate shadow detail (expose to the left, if you think of dark on the left and bright on the right). That makes some sense because the more you expose a negative, the more material is washed away in the developer, until it's totally transparent. You can compensate for a dark negative by printing it lighter. If you overexpose and there's nothing left on the film, you can't print it darker.
For digital cameras things are different: The argument for "expose to the right" (optimal exposure is just at the threshold where you lose too much detail in the bright areas) is interesting.
Human light perception is logarithmic: twice as bright registers as just one linear step up. Every time you double the light, it appears just a little brighter, by the same perceived amount. Camera lenses are calibrated that way (one f-stop of iris opening doubles the light), and shutter speed dials are the same: 1/125, 1/250, 1/500.
Doubling is the natural increment.
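A quick sketch of that arithmetic (my own illustration, not from the article): since a stop is one doubling, the number of stops between two light levels is just the base-2 log of their ratio.

```python
import math

# Each photographic "stop" is one doubling of light, so the number of
# stops between two light levels is log2 of their ratio.
def stops_between(light_a, light_b):
    return math.log2(light_b / light_a)

# Standard shutter speeds step by one stop: halving the exposure
# time cuts the light in half.
print(stops_between(1/250, 1/125))  # 1.0 -> one stop more light
print(stops_between(1/500, 1/125))  # 2.0 -> two stops more light
```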
But camera sensors don't work that way. Pretty sensibly, doubling the light produces double the electrical response. The "raw" format image encodes those electrical responses just as they come. A typical digital camera sensor can record about a 10-"stop" range (darkest x 2 x 2 x ... x 2 = brightest: ten doublings from darkest to brightest).
But (the argument says), it records that range linearly, so the final, brightest stop takes half the scale available! Most sensors record each of the R, G & B pixels with 12 bits.
That's 4096 levels of brightness. 2048 of them are used for just the brightest (final doubling) stop of the image.
1024 for the stop before that. So you only have 1024 shades left for the whole rest of the image brightness range, which usually includes the more important stuff.
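You can tabulate that halving directly. This is just my own toy arithmetic for an ideal 12-bit linear encoding over a 10-stop range, not any real camera's behavior:

```python
# Levels available per stop in a 12-bit linear encoding (4096 levels)
# over a 10-stop range. The brightest stop spans the top half of the
# scale, the next stop the next quarter, and so on.
TOTAL_LEVELS = 2 ** 12  # 4096
STOPS = 10

levels = []
top = TOTAL_LEVELS
for _ in range(STOPS):
    bottom = top // 2
    levels.append(top - bottom)  # codes covering this stop
    top = bottom

# levels[0] is the brightest stop, levels[-1] the darkest
for i, n in enumerate(levels):
    print(f"stop {i + 1} from the top: {n} levels")
```

The darkest stop ends up with only 4 code values, which is the whole complaint in a nutshell.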
SO: The idea is that by pushing all of the brights farther to the right (overexposing until you just start to lose detail), you leave more bits on the scale for encoding the rest of the image. You just lower the exposure back down in Photoshop. The brights will still have more than their share of bits, so you will be able to recover what looks washed out in the unprocessed, overexposed image.
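Here's a toy simulation of that claim (my own sketch, again assuming an ideal linear 12-bit sensor): count how many raw code values land inside a given scene stop, with and without a one-stop ETTR push.

```python
# Count 12-bit raw code values falling inside a given scene stop,
# normally exposed vs. pushed one stop hotter (ETTR).
LEVELS = 2 ** 12  # 4096

def codes_in_stop(stop_from_top, push_stops=0):
    # stop_from_top=1 is the brightest stop, 10 the darkest
    hi = 2.0 ** (-(stop_from_top - 1) + push_stops)
    lo = hi / 2
    hi = min(hi, 1.0)  # the sensor clips at saturation
    return max(int(hi * LEVELS) - int(lo * LEVELS), 0)

for s in (10, 9, 8):
    print(f"stop {s}: normal={codes_in_stop(s)}, "
          f"+1 stop ETTR={codes_in_stop(s, push_stops=1)}")
```

Every stop below the clipping point gets twice the code values after the push, which is exactly the "more bits for the rest of the image" argument.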
(There's a lot of debate about this idea. I don't understand why cameras wouldn't just apply a log transform to the signal before encoding the data; I wouldn't have thought that was a big deal.) I tried ETTR yesterday. Hard to tell for sure, but I think I got more noise in the midtones. I can't say for certain, and I don't know why I'd get that result.
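To illustrate the log-transform point: a log encode hands out code values per stop roughly evenly instead of halving them each stop down. This is a sketch with made-up constants (a 10-stop range mapped into 10 bits), not any real camera's curve:

```python
import math

STOPS = 10
OUT_LEVELS = 2 ** 10  # encode into 10 bits for illustration

def log_encode(linear):
    # linear is a sensor value in (0, 1]; map each of the 10 stops
    # onto an equal slice of the output scale
    stops_below_clip = -math.log2(max(linear, 2.0 ** -STOPS))
    return round((1 - stops_below_clip / STOPS) * (OUT_LEVELS - 1))

# Codes per stop come out nearly equal instead of halving each stop:
for s in range(1, STOPS + 1):
    hi, lo = 2.0 ** (1 - s), 2.0 ** (-s)
    print(f"stop {s} from the top: {log_encode(hi) - log_encode(lo)} codes")
```

Every stop gets roughly 102 codes, so with a curve like this the shadows are no worse off than the highlights even without overexposing.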
One writer who seemed technically knowledgeable (here's his PDF) argued that Expose to the Right (ETTR in the blogs) does make sense, but not because of the argument above.
It works because more is gained by reducing noise in the shadow areas (where there isn't enough light for a clean signal) than is lost by brightening the highest highlights.
It's not about the encoding, but about the sensor itself and how well it gives a clean image at low light levels. It does better the more light it gets. Clearly true for small sensors. My iPhone is AWESOME if there's bright even light. It's a POS in low light. So this argument makes more sense to me. You can't fix a sensor that just needs more photons to get a good signal, but you could easily fix simple numerical encoding of a scale.
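The usual statistical version of that argument, sketched here in my own words (ignoring read noise and other sensor specifics): photon arrival is a Poisson process, so a pixel that collects N photons on average sees shot noise of sqrt(N), giving a signal-to-noise ratio of sqrt(N). One more stop of light means 2x the photons and therefore about 1.4x better SNR in the shadows.

```python
import math

# Shot-noise-limited SNR: noise is sqrt(N) for N photons collected,
# so SNR = N / sqrt(N) = sqrt(N). (Ignores read noise entirely.)
def shot_noise_snr(photons):
    return photons / math.sqrt(photons)

print(shot_noise_snr(100))  # 10.0
print(shot_noise_snr(200))  # ~14.1, one stop more light
```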