# 16 bit (or more) RAW files?



## Vautrin (Mar 13, 2013)

So I've got a question I've been wondering about.

Most computers and electronic devices use bytes.  You have an 8 bit processor, a 16 bit processor, a 32 bit processor.

But never a 14 bit processor.

Cameras, however, tend to be 12 or 14 bit.  My Nikon D700 uses 12 bit raw files unless I specifically select 14 bits...  Canon appears to have the same problem if I do a quick Google search.

Hasselblad, however, uses 16 bits.  And if you look at the skin tones on a Hassy, they're much better than a Nikon or Canon.

So what's the deal?  Why don't Nikon and Canon use 16 bit files?

And why stop there?  My computer is 64 bits, wouldn't even a 32 bit depth provide a much better photo?


----------



## amolitor (Mar 13, 2013)

The sensors tend to be 12 or 14 bits deep. That's all the bits the sensor has per pixel.

You can trade off megapixels for bit depth, in general terms (the math works out, that is. Technically, you need some noise, but there's always noise, so you're probably OK). I suspect strongly that this is why the Hasselblad images look better, if indeed they do and it's not just post processing.
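A quick Python sketch of that trade-off, with made-up numbers: sensor noise dithers each sample, so averaging a 2x2 block of 12-bit pixels can recover tonal detail finer than one code step.

```python
import random

random.seed(1)
true_level = 1000.4   # hypothetical analogue level, in 12-bit code units

def sample():
    # one 12-bit pixel reading: true level + ~1 code of noise, then quantised
    return round(true_level + random.gauss(0, 1.0))

single = sample()                              # one pixel: whole codes only
binned = sum(sample() for _ in range(4)) / 4   # 2x2 bin: quarter-code steps

print(single, binned)
```

Averaging more pixels sharpens the estimate further, which is the megapixels-for-bits trade in action.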


----------



## Vautrin (Mar 13, 2013)

In case anyone is bored and wants some Hassy images to download and play with:

Sample File Images


----------



## 480sparky (Mar 13, 2013)

8-bit RGB color depth creates more colors than what the (average) human eye is capable of perceiving.  What makes you think 16- or 32- or 64-bit depth would somehow be better?


----------



## amolitor (Mar 13, 2013)

The 'blad sensors might be 16 bits deep. Not sure, and I'm not sure it actually has any value. Mainly more bits give us more exposure latitude, but you COULD crush the range a bit and get finer tonal gradations within 12 stops or whatever, if you wanted to build your sensor that way.


----------



## Mike_E (Mar 13, 2013)

The Blad is a Fuji.  Just sayin.

The usual reason medium format looks better is the sensor-size-to-focal-length ratio, along with the fact that medium format lenses are usually great optics.


----------



## Ballistics (Mar 13, 2013)

Vautrin said:


> So I've got a question that's I've been wondering at.
> 
> Most computers and electronic devices use bytes.  You have an 8 bit processor, a 16 bit processor, a 32 bit processor.
> 
> ...



I shot with a 39MP Hasselblad and didn't notice any difference in colors from my D7000. I'm looking at them now, and I don't see this.


----------



## SCraig (Mar 13, 2013)

Vautrin said:


> And why stop there?  My computer is 64 bits, wouldn't even a 32 bit depth provide a much better photo


One aspect is the reality of the situation.  A 32-bit or 64-bit color depth would ostensibly record finer colors, however a 24 megapixel image would create a file of roughly 100 MB uncompressed at 32 bits per pixel and about 200 MB at 64 bits.  Data transfer within the camera would be enormous, an 8 GB SD card would only hold about 85 images (at 32 bits per pixel, half that at 64 bits), any software trying to handle the file would be horrendously slow, etc.  Plus, there would be so much overkill on the color depth that it's unlikely anyone would really see much of an improvement over 14-bit color.
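The back-of-envelope file sizes above check out in Python (24 MP, uncompressed, ignoring headers and metadata):

```python
MEGAPIXELS = 24_000_000

for bits_per_pixel in (14, 32, 64):
    size_mb = MEGAPIXELS * bits_per_pixel / 8 / 1_000_000      # bytes -> MB
    frames_on_8gb = int(8_000 // size_mb)                      # 8 GB card
    print(f"{bits_per_pixel:>2} bits/pixel: {size_mb:5.0f} MB per frame, "
          f"~{frames_on_8gb} frames on 8 GB")
```

At 32 bits per pixel that's about 96 MB a frame and roughly 83 frames on an 8 GB card, close to the ~85 quoted above.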


----------



## Vautrin (Mar 13, 2013)

480sparky said:


> 8-bit RGB color depth creates more colors than what the (average) human eye is capable of perceiving.  What makes you think 16- or 32- or 64-bit depth would somehow be better?



Well if I look at my computer monitor it'll say it's using 32 bit color.

32 bit color looks much better than the old graphics that used 16bit or 8 bit...

On top of that, I haven't seen color depth get raised since most computers started using 32bit color standard...

Plus, if 8bit was really enough, we'd never have to worry about getting out of color gamut...

Thus my hunch is there is perceivable difference...


----------



## Vautrin (Mar 13, 2013)

Ballistics said:


> Vautrin said:
> 
> 
> > So I've got a question that's I've been wondering at.
> ...



Which Hassy and which raw file format?

B&H seems to imply some of the older models could save to 8-bit TIFF files, which makes me wonder about the difference in color:

Hasselblad H3DII-39 SLR Digital Camera Kit with 80mm Lens


----------



## Ballistics (Mar 13, 2013)

Vautrin said:


> Ballistics said:
> 
> 
> > Vautrin said:
> ...



DNG, and a Hasselblad 503cw with a CFV-39 back.


----------



## BrianV (Mar 13, 2013)

The ratio of the brightest signal that can be captured (the well capacity) to the noise-limited signal of the sensor sets the max and min of what can be digitized in a meaningful manner. 14 bits does a good job with most high-end sensors on the market. Beyond 14 bits, you are recording noise.
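That ratio translates directly into bits. A minimal sketch (the electron counts here are hypothetical, just typical of high-end CMOS sensors):

```python
import math

full_well_electrons = 60_000   # brightest signal a photosite can hold
read_noise_electrons = 4       # noise floor

useful_bits = math.log2(full_well_electrons / read_noise_electrons)
print(f"~{useful_bits:.1f} meaningful bits")   # just under 14
```

So a 14-bit ADC is a comfortable fit for a sensor like this; a 16-bit one would mostly digitize noise.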


----------



## 480sparky (Mar 13, 2013)

Vautrin said:


> Well if I look at my computer monitor it'll say it's using 32 bit color.
> 
> 32 bit color looks much better than the old graphics that used 16bit or 8 bit...
> 
> ...



8-bit color depth gives you over 16 million colors. The human eye can only distinguish around 10 million of them.

12 bit gets you 68,719,476,736 colors. 14 bit gets you 4,398,046,511,104 colors.

16-bit depth ups it to 281,474,976,710,656. Seriously.... 281 _trillion_.

If you want to go to 32-bit depth, you're looking at roughly 7.92EE28 colors. 64-bit? Hold on to your slide rule........ *6.277EE57*.

That's a helluva lot of distinct colors!

You're saying you can perceive all those?
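Those counts are just 2 raised to three channels times the per-channel depth, easy to verify:

```python
# total RGB combinations for n bits per channel: 2 ** (3 * n)
for bits_per_channel in (8, 12, 14, 16):
    print(f"{bits_per_channel:>2} bits/channel: {2 ** (3 * bits_per_channel):,} colors")
```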


----------



## Helen B (Mar 13, 2013)

Vautrin said:


> 480sparky said:
> 
> 
> > 8-bit RGB color depth creates more colors than what the (average) human eye is capable of perceiving.  What makes you think 16- or 32- or 64-bit depth would somehow be better?
> ...




Don't get confused - 32 bits is for all channels. Probably 10 bits per color if you are lucky.


----------



## KmH (Mar 13, 2013)

8-bit color depth is per channel, and is also known as 24-bit color - 8 bits x 3 color channels = 24 bits.
The 32 bits used to describe an electronic display is also describing multiple channels.

Image sensors are analog, and have no bits at all. Image sensors cannot actually record color either.

When it says a camera makes 12-bit or 14-bit depth Raw files, they are talking about the output of the analog-to-digital converter (A/D). Analog-to-digital converter - Wikipedia, the free encyclopedia

The A/D converts the analog voltage values the pixels develop when exposed to light into digital numbers.

8-bits = 256 possible discrete values (0 - 255)
12-bits = 4096 possible discrete values (0-4095)
14-bits = 16,384 possible discrete values (0-16,383)
16-bits = 65,536 possible discrete values (0-65,535)

Here is the kicker though - Photoshop 16-bit mode only uses 15-bits - 32,768 values (0-32,767).

32,768 tonal values per channel is, for human vision purposes, way more than sufficient to describe the data coming off a digital device.
From an engineering perspective, 15-bit calculations give an exact midpoint value. Having a precise midpoint value is important for blending.
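The value ranges and the midpoint argument are quick to check in Python (treating Photoshop's 16-bit mode as a 0-32768 scale, as commonly described, not verified against Adobe's internals):

```python
# discrete values per bit depth
for bits in (8, 12, 14, 16):
    print(f"{bits}-bit: {2 ** bits:,} values (0-{2 ** bits - 1:,})")

# a 0..32768 scale (2**15 + 1 levels) has one code exactly at half
print("midpoint:", 32768 // 2)   # 16384, precisely 50% grey
```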

Olympus Microscopy Resource Center | Digital Imaging in Optical Microscopy - Introduction to CMOS Image Sensors


----------



## amolitor (Mar 13, 2013)

As for number of distinguishable colors, well.

Remember that the RGB value in the image file has to convey value information as well as color information, AND there are issues with color gamut that I don't fully understand, but which imply that the colors we can represent on any given output medium are not all the colors we can see. So you really need to encode a fair bit of extra information to account for various output media. Or something. I told you I didn't really get the gamut issues.


----------



## Garbz (Mar 14, 2013)

Vautrin said:


> So I've got a question that's I've been wondering at.
> 
> Most computers and electronic devices use bytes.  You have an 8 bit processor, a 16 bit processor, a 32 bit processor.
> 
> ...



Ok a primer. 

Computers work in bytes because that's the way processor registers are set up. Mathematical functions on an 8-bit CPU are done with an 8-bit arithmetic logic unit on sets of 8-bit registers. Through the use of overflow and carry flags, however, the system scales quite well. You want to add 16-bit numbers? That works the same way we add multi-digit numbers by hand, carrying when a column overflows. An 8-bit CPU like the AVR microcontrollers I work with has no problem doing maths on 16-bit numbers. The problem is the amount of effort involved. To add two 8-bit registers with an 8-bit processor takes 2 instructions with 3 clock cycles. Adding two 16-bit numbers takes 3 instructions with 5 clock cycles. The easiest way to get this back down to 3 clock cycles is to upgrade the ALU to handle a pair of registers at the same time, and add a new instruction that automatically works on 2 registers at once, i.e. a 16-bit instruction. And it grows from there.
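That carry chain is easy to mimic in Python, treating each 8-bit addition as one "instruction" (a sketch of the idea, not AVR-accurate timing):

```python
def add16_on_8bit_alu(a, b):
    """Add two 16-bit numbers using only 8-bit additions plus a carry flag."""
    lo = (a & 0xFF) + (b & 0xFF)          # instruction 1: add low bytes
    carry = lo >> 8                        # the ALU's overflow/carry flag
    hi = (a >> 8) + (b >> 8) + carry       # instruction 2: add-with-carry
    return ((hi & 0xFF) << 8) | (lo & 0xFF)

print(hex(add16_on_8bit_alu(0x12FF, 0x0001)))   # the carry ripples into the high byte
```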

Now, looking at the analogue world, our useful data is not limited by bits but by the noise floor. What's the point of having 16 bits of data if statistically the bottom 4 bits will be 100% random? It's a waste of electronics and valuable chip space (remembering that on a CMOS sensor the analogue-to-digital conversion is done on the sensor). In a camera where every tiny component is using valuable space, and every bit of data from every pixel takes valuable processing time, the goal is not to waste time or space processing zeros or random data. By making custom circuits that work directly with the amount of *useful* data available, the system becomes faster. It's the same with my AVR microcontrollers and their 10-bit ADC: I don't bother reading the low ADC register, as it increases the time it takes to process the data.
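In code terms, ignoring the bits below the noise floor is just a shift (a sketch; the 4-bit figure is the hypothetical one from the paragraph above):

```python
def useful_value(raw16, noisy_low_bits=4):
    # keep only the statistically meaningful high bits of a 16-bit reading
    return raw16 >> noisy_low_bits

print(hex(useful_value(0xABCD)))   # the low nibble was indistinguishable from noise
```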

Now how does this relate to the real world? Well, you may be happy with your Hassy in a studio, but frankly I would be supremely pissed if my DSLR had a max continuous firing rate of 0.7 fps like the Hasselblads, or the sub-2 fps of the Leica M8. And now for the real kicker: this has nothing to do with skin tones. The gamut and colour depth of even a 10-bit sensor is enough to render skin tones correctly on a screen. It all depends on how your camera processes the data, or one step further, how your RAW processor processes the data. No amount of bits in a file will change the fact that some algorithms just don't look quite right (Adobe Standard, I'm looking at you and your excessively purple skin tones).

Finally, is it worthwhile? Well, if DxO Mark results are to be believed, then the 14-bit D800 beats both the 16-bit Hasselblad H3DII-39 and the Leica M8 in every metric. Colour doesn't look right to you? Maybe you need to calibrate, or adjust the colour profile in the RAW software you use.




480sparky said:


> 8-bit RGB color depth creates more colors than what the (average) human eye is capable of perceiving.  What makes you think 16- or 32- or 64-bit depth would somehow be better?



Perception only matters for the final product. The question of whether a 64-bit image is "better" gets a resounding yes: more data is better for post-processing. You can pull a lot of detail from the dark shadows of a 14-bit image. HDR software can do a world of wonders when you take multiple exposures, convert them to a 32-bit file, and then compress the tonal range back to 8-bit for display. But for me this still isn't good enough. The software I use for astrophotography stacks literally hundreds of images and creates a 64-bit file. It causes grief to work with 400 MB files, but that data is actually necessary to get results.
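Stacking works because averaging N frames shrinks random noise by sqrt(N), which is worth roughly log2(sqrt(N)) extra meaningful bits. A toy Python version with invented numbers:

```python
import math
import random

random.seed(0)
true_signal, noise_sigma, n_frames = 100.0, 8.0, 400

frames = [true_signal + random.gauss(0, noise_sigma) for _ in range(n_frames)]
stacked = sum(frames) / n_frames   # noise drops by sqrt(400) = 20x

print(f"stacked error: {abs(stacked - true_signal):.2f} (single frame: ~{noise_sigma})")
print(f"extra useful bits: ~{math.log2(math.sqrt(n_frames)):.1f}")
```

Hundreds of frames buy a handful of extra bits, which is why the stacked result needs a deeper file format to hold them.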



480sparky said:


> 8-bit color depth gives you over 16 million colors. The human eye can only distinguish around 10 million of them.



Which 10 million? We can display 6.7 million more colours than the eye can see with an 8-bit file, yet somehow we see visible banding on the display when using larger gamuts than sRGB. The human eye has an incredible ability to detect subtle shades of saturated colours, especially around the greens. It's excellent at distinguishing tones in shadows but sucks at identifying colours in darkness. So while the human eye may have difficulty distinguishing between (0,1,2) and (0,1,1), it can most definitely tell the difference between (0,254,0) and (0,255,0), and most likely you'll be able to make out shades in between. Unfortunately the numbers we can display are evenly distributed between the red, green and blue channels, and our eyes don't work like that, so quite simply 16.7 million colours isn't enough on wide-gamut monitors.


----------



## Vautrin (Mar 14, 2013)

This is fascinating stuff.

So, if I choose to use 12-bit raw files, will I get the same picture quality as 14 bits?

Will 12 bits have less noise but also fewer shades?


----------



## KmH (Mar 14, 2013)

Noise will be the same, but:
12-bits = 4096 possible discrete values (0-4095) per color channel
14-bits = 16,384 possible discrete values (0-16,383) per color channel

So yes, per color channel, 12-bit records 12,288 fewer shades than 14-bit can.


----------



## Garbz (Mar 15, 2013)

Vautrin said:


> this is fascinating stuff
> 
> so, if i choose to use 12 bit raw files will i get the same picture quality as 14 bits?
> 
> will 12 bits have less noise but also less shades?



The two questions are independent of each other. You won't be able to tell the difference between a 12-bit and a 14-bit recording of a scene; in fact your video card will knock it down to 8 bits to display on screen anyway.
However, start boosting the shadows in any kind of extreme way and things start looking VERY different. As KmH has said, you have more possible shades to represent an image, so when you want to stretch the colours in an image you need data that isn't normally visible. To illustrate this, look at the following example:

This first image is the direct result of a stack of 200 images of the Orion Nebula. It shows what my (32-bit, in this case) file looks like when I first open it on the computer:

*(image)*

So after we brighten, brighten again, apply a few layers of tone mapping, brighten again for good measure, increase saturation, and fix the colour balance:

*(image)*

The below result was a direct duplicate of all the above steps with all the same settings. The only difference is that it was dropped to 8-bit at the start. When we did that there was no visible change on the computer screen. But have a look at what happens after we apply all the above corrections:

*(image)*

So as you can see, the data that may not be visible is sometimes quite important.
In case you're interested, this is what it looks like when I dedicate more than 5 minutes to processing the same image: http://dafaq.garbz.com/photography/space/images/M42.jpg
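The 8-bit-versus-deep-file difference Garbz describes can be sketched numerically: boost deep shadows 16x and count how many distinct output tones survive.

```python
def boost_shadows(values, gain=16, out_levels=256):
    # multiply up, clip, and quantise to the display's 8-bit output
    return {min(out_levels - 1, round(v * gain)) for v in values}

eight_bit = range(16)                        # shadows as integer codes 0..15
high_bit = [v / 16 for v in range(256)]      # same range with fractional detail

print(len(boost_shadows(eight_bit)), "tones from the 8-bit file (banding)")
print(len(boost_shadows(high_bit)), "tones from the deep file (smooth)")
```

The 8-bit version has only 16 tones to spread across the whole output range, which shows up as the banding described above.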


The noise, on the other hand, depends on other factors, and as I alluded to above, the important part is that camera manufacturers don't waste processing power and silicon on processing nothing but noise. You can do analogue-to-digital conversion at any bit depth you want; the question is whether the least significant bits will be relevant data or noise.


----------



## Vautrin (Mar 15, 2013)

Wow, Garbz, thanks for the explanation!

The finished image is really cool.


----------

