Raw is a Rip Off!! Why Do DSLRs Have Low Dynamic Range Raw? Log Gamma, CR2
It makes ZERO sense to me that the raw data from the sensor would not be stored as the log of the actual value. That way the full spectrum from brightest to darkest would be recorded with the bits spread evenly across the stops, instead of half the available code values being spent just on the brightest "stop" of a 10 or 11 stop range.
Well... guess what? Arri, Red, Sony, and even Canon all use log encoding for their cinema raw formats.
"UPDATE: Correction/Clarification. OK, there is room for more confusion as I have been reminded that ArriRaw uses Log encoding as does RedCode. It is also likely that Sony’s raw uses data reduction for the higher stops (as Sony’s raw is ACES compliant it possibly uses data rounding for the higher stops). ArriRaw uses log encoding for the raw data to minimise data wastage, but the data is still unencoded data, it has not been encoded into RGB or YCbCr and it does not have a white balance or have gain applied, all of this is added in post. Sony’s S-Log, S-Log2, Arri’s LogC, Canon’s C-Log as well as Cineon are all encoded and processed RGB or YCbCr video with a set white balance and with a Log gamma curve applied."
How about Canon? Their current still EOS raw format is CR2.
I found one old paper on digital photography - from back before it was common. It discussed this whole brightness scale problem and basically said it was impractical to record the data linearly because you would need 12-14 bits to adequately describe a decent grayscale pixel.
At that time, a 1 MB image was pushing things. So you needed to use a log gamma curve.
(Gamma is a function - or curve, or transform, depending on your jargon preference - that takes the raw info on pixel brightness as stored in the image file and converts it to the brightness that ought to be visible on the output, like a monitor. Best to Google it or look on Wikipedia for a fuller definition.)
Well, now Canon uses 14 bits in their new cameras! (formerly 12). So we have fulfilled the destiny of dumb encoding that even an ancient digital imaging paper saw as impractical. Just another example of huge tech advances allowing implementations to be inefficient and inferior.
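To see just how lopsided the linear allocation is, here's a quick back-of-the-envelope Python sketch (my own arithmetic, nothing from Canon) counting how many of a 14 bit sensor's code values land in each stop:

```python
# Back-of-the-envelope: how many of a 14 bit sensor's code values
# land in each stop? Stop N covers linear values [2**N, 2**(N+1)).

BITS = 14
total = 2 ** BITS  # 16384 code values

for stop in range(BITS - 1, BITS - 6, -1):  # the five brightest stops
    lo, hi = 2 ** stop, 2 ** (stop + 1)
    print(f"stop {stop}: codes {lo}..{hi - 1} ({(hi - lo) / total:.0%} of all values)")

# stop 13: codes 8192..16383 (50% of all values)
# stop 12: codes 4096..8191 (25% of all values)
# ...and the darkest stops are left fighting over scraps.
```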
Using log gamma means that if your darkest black is 1 and your whitest white is 10 stops higher - so 1024 - you could record the black as 0 (2 to the power zero is 1) and the 1024 simply as 10 (2 to the power 10). Now your ten stop scale runs 0-10 instead of 1-1024. More importantly, instead of the brightest of brights - the 10th stop - taking up a full half of that range (513-1024), the tones are evenly allocated: the brightest get 9-10 and the darkest 0-1. Use the same number of bits and you record far more detailed tone info at the dark, shadowy end.
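Here's that same idea as a toy log2 encoder - just an illustration, not any real camera's transfer curve (real log curves like Arri's LogC add an offset so pure black doesn't hit log of zero):

```python
import math

# Toy log2 encoding: each stop becomes one unit, so a 10 stop
# range (linear 1..1024) compresses down to 0..10.
for linear in (1, 2, 4, 32, 512, 1024):
    print(f"linear {linear:>4} -> log {math.log2(linear):>4.1f}")

# Spend 10 bits on that 0..10 scale and every stop gets the same
# ~102 code values, instead of the top stop hogging half of them.
print(f"codes per stop: {2 ** 10 / 10:.0f}")
```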
Side Note: NOT Really Raw.
One thing is for sure: the Canon CR2 format is not "raw" in the sense of just being able to read a sensor value straight off the file. It's compressed with "lossless JPEG" (whatever that is).
I think it's a kind of TIFF-like container format. Not really relevant to my point, but interesting...
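If you want to poke at the decompressed sensor values yourself, the rawpy library (a Python wrapper around LibRaw) will decode a CR2. A rough sketch - "photo.CR2" is just a placeholder for whatever file you have handy:

```python
import numpy as np
import rawpy

# Decode a CR2 and inspect the (linear) sensor values directly.
with rawpy.imread("photo.CR2") as raw:
    values = raw.raw_image_visible  # 2D numpy array of sensor counts
    print("min:", values.min(), "max:", values.max())

    # Bucket the pixels by stop, counting down from the brightest.
    # Each bucket spans half the code values of the one above it.
    top = int(values.max())
    for stop in range(4):
        lo, hi = top >> (stop + 1), top >> stop
        n = np.count_nonzero((values > lo) & (values <= hi))
        print(f"stop -{stop}: {n} pixels in codes {lo + 1}..{hi}")
```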
Canon Knows Log Gamma!
The "Cinema" C300 (pictured above - kind of a DSLR in a video camera body) has a Log gamma compressed format - where the data was on a log scale and then compressed. That claims a 12 stop range - vs the max of 11 typically claimed for full frame I think (10 on aps-c like mine. Canon 7d dynamic range).
Apparently, pro VIDEO cameras have had log compressed formats for a few years now (Sony, Arri, Red, and Canon too). The claim is that you get the same processing flexibility in post (or better?) as with raw, with much smaller files. "Raw" is a linear encoding of the brightness striking a pixel - way more bits are used to describe the bright part of the spectrum than the darks. One of the things that actually distinguished the RED ONE in 2006 was its 12 bit log raw format (which required acceleration to edit directly).
The Video guys seem way more on top of this stuff and more sophisticated about it than the photo guys.
Canon (and other) still cameras (except maybe Red cameras used for still work?) use a linear gamma curve - so they devote far fewer code values to the darks.
I think this article from Canon Cinema itself makes it about as clear as I'm going to find:
It explains that Raw still images are very big - 25MB+ (and 125MB once opened in Photoshop), and just aren't very practical for Cinema.
Red's first codec was the big innovator.
Now all the majors have "compressed" raw video formats, or something close to it, using log gamma curves.
Postscript: I looked up this stuff because I read about a technique for DSLR raw shooting called "expose to the right," or ETTR, in the blogs. It advocates increasing exposure just to the point where you start to clip whites - often more exposure than the typical 12-18% gray that meters call proper exposure. The idea is that by pushing the tones as high as possible, your midtones and shadows land in the upper stops, where a linear encoding spends most of its code values, instead of down in the sparsely coded darks. You then bring the exposure back down to something that looks good when processing the raw image. This seemed crazy to me, but if raw still formats really use linear gamma, then maybe it does make sense.
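Here's the ETTR arithmetic as I understand it - my own numbers, and the 3.5 stops figure is just for illustration:

```python
# ETTR arithmetic in linear raw: every stop you push exposure up
# doubles the code values your subject's tones are spread across.

BITS = 14

def codes_spanned(stops_below_clip: float, width: float = 1.0) -> int:
    """Code values covering a band `width` stops wide, sitting
    `stops_below_clip` stops under the clipping point."""
    hi = 2 ** (BITS - stops_below_clip)
    lo = 2 ** (BITS - stops_below_clip - width)
    return round(hi - lo)

print(codes_spanned(3.5))  # metered "normal": ~724 codes for the midtones
print(codes_spanned(1.5))  # pushed up 2 stops: ~2896 codes - 4x the data
```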