# How to determine the dynamic range of a camera - my experiment



## pgriz (Mar 9, 2013)

I decided to test my T1i to see what its dynamic range actually is, and whether my meter reading is accurate.

The testing protocol was to shoot an 18% grey card (fully occupying the field of view), varying the shutter speed by one stop for each exposure.  The card was successively over-exposed and under-exposed until the grey point was either all white (255) or all black (0).  The RAW frames were imported into DPP, and the values of the center point were checked against the camera reading.  To make sure that the light was constant, each sequence was terminated by a shot replicating the starting point of the sequence, and the light values were compared (they were constant).

Two sequences were done, at ISO 100 and ISO 800, to determine whether the dynamic range changed with the change in ISO.  To ensure that all the colour channels read the same values, a custom white balance was done on the grey card at the start of each sequence.

I draw several conclusions from my experiment.  When my meter is centered, the output value of the grey card is 122 (it should be 127 if exactly centered, but that's close enough).

At ISO 100, 3 stops of overexposure gives me a value of 251, and 4 stops of overexposure gives me a value of 255.  So my upper limit is 3 stops of overexposure.  At ISO 800, 3 stops of overexposure gives me a value of 250, and 4 stops of overexposure gives me a value of 255.  Therefore it appears that raising the ISO to 800 does not reduce my upper dynamic range.

At ISO 100, my readings for the center-point value are as follows: -3 (20), -4 (10), -5 (5), -6 (2), -7 (0).  For all practical purposes, my lower limit for underexposure appears to be 4 stops, because a value of 10 is pretty close to black on most monitors.  Repeating the exercise at ISO 800, the readings are: -3 (19), -4 (10), -5 (4), -6 (2), -7 (1).  Raising the ISO did not diminish the dynamic range at the lower end.
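For anyone repeating this, the bookkeeping is easy to script. Here's a minimal sketch in Python using my ISO 100 readings; the "effective black" and clipping thresholds are my own judgment calls, not standards:

```python
# Center-point 8-bit values at each exposure offset (stops), ISO 100 series.
readings = {-7: 0, -6: 2, -5: 5, -4: 10, -3: 20, 0: 122, 3: 251, 4: 255}

# Call a value "usable" if it is at or above effective black (~10 on most
# monitors) and below clipping (255).
usable = [ev for ev, v in readings.items() if 10 <= v < 255]
print("usable offsets:", min(usable), "to", max(usable))          # -4 to 3
print("practical range:", max(usable) - min(usable), "stops")     # 7 stops
```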

Practically, this test means that if I am practicing the expose-to-the-right (ETTR) technique, I must be sure that the brightest detail is no more than 3 stops above the metered (0 EV) reading to ensure there's useful detail.

Im curious if anyone else tested their camera to determine the actual dynamic range? If so, what did the measurements reveal?


----------



## tirediron (Mar 9, 2013)

Neat!


----------



## Josh66 (Mar 9, 2013)

Interesting...

I wonder what RGB value each zone would have in a test like that.

If Zone V is a properly exposed grey card, that should be 128,128,128 (right?).  Zone IV would be about 102,102,102; Zone III would be roughly 76,76,76...

I think I will test my camera tomorrow to see how it falls out on the zone system...  See if one stop is one zone...
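Whether the file uses linear 8-bit steps (about 25 counts per zone) or a gamma encoding changes the predicted numbers quite a bit. Here's a quick sketch under a plain gamma-2.2 assumption; real camera tone curves differ, so treat these as rough guides only:

```python
# Predicted 8-bit values per zone, one stop (a halving of light) per zone,
# anchoring Zone V at 128 and assuming a plain gamma-2.2 encoding.
gamma = 2.2
values = {}
for zone, stops_below_v in [("V", 0), ("IV", 1), ("III", 2), ("II", 3)]:
    values[zone] = 128 * (0.5 ** stops_below_v) ** (1 / gamma)
    print(f"Zone {zone}: {values[zone]:.0f}")
# prints 128, 93, 68, 50 -- noticeably different from linear 25-count steps
```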


----------



## pgriz (Mar 9, 2013)

O|||||||O said:


> Interesting...
> 
> I wonder what RGB value each zone would have in a test like that.
> 
> ...



That was one of the reasons I tested for the dynamic range: to see whether a 10-zone system would fit within the dynamic range of my camera (in terms of 1 stop = 1 zone).  Unfortunately, it doesn't, at least with my camera.  However, knowing that my upper range is 3 stops, I can spot-meter the highlights and adjust my exposure accordingly.

As for your question about the different values in the RGB space, that was why I used a custom white balance: to ensure that the values were the same for each channel.

I'm going to use the data from this experiment to develop a "characteristic curve" as described in "Light, Science and Magic", 4th edition.  It is obvious from the data that the graph is not linear: it has a short shoulder at the upper end and a gradual toe at the lower end.


----------



## pgriz (Mar 10, 2013)

The data from my experiment on the dynamic range of the camera allowed me to put together a characteristic curve, shown below:

View attachment 38380
As discussed in Light, Science and Magic, 4th edition, pp. 244-253, an ideal characteristic curve would be linear, with each one-stop change in exposure producing a proportional increase or decrease in the density of the grey.  Actual sensor performance is different.  I was therefore interested in how my camera would behave, and what latitude I had in terms of dynamic range.  The above graph shows how my camera maps an 18% grey card to values in the resulting image.
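For anyone who wants to redo the plot, the ingredients are just my measured values set against an ideal linear-light response. A minimal sketch, with the "ideal" anchored at my metered value of 122:

```python
# Measured center values (8-bit) vs. exposure offset in stops, ISO 100 series.
measured = {-7: 0, -6: 2, -5: 5, -4: 10, -3: 20, 0: 122, 3: 251, 4: 255}

for ev in sorted(measured):
    # An ideal linear-light response would halve/double per stop from 122,
    # clipping at 255.
    ideal = min(122 * 2.0 ** ev, 255)
    print(f"{ev:+d} EV: measured {measured[ev]:3d}, ideal linear {ideal:6.2f}")
```

Where the measured values pull away from the linear prediction is where the curve's shoulder and toe live.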

I welcome comments from more experienced photographers as to the interpretation of the curve as seen above.


----------



## 2WheelPhoto (Mar 10, 2013)

Nice 
I'm non-scientific: give me two cams at the same time of day with the same lens, and let me see the difference in post.


----------



## 480sparky (Mar 10, 2013)

Maybe I'm missing something, but isn't dynamic range defined as the difference in the amount of light in a scene from the brightest to the darkest?

And isn't an 18% gray card ALL the same brightness?

I think in order to test the DR of a camera, you'd need subjects of different brightnesses.  I'd guess your results would be exactly the same even if you used a black card or a white card.


----------



## jake337 (Mar 10, 2013)

480sparky said:


> Maybe I'm missing something, but isn't dynamic range defined as the difference in the amount of light in a scene from the brightest to the darkest?
> 
> And isn't an 18% gray card ALL the same brightness?
> 
> I think in order to test the DR of a camera, you'd need subjects of different brightnesses.  I'd guess your results would be exactly the same even if you used a black card or a white card.




Maybe three cards next to each other: black, grey, white.  Expose for the grey, then add light to the white until you lose detail, and use subtractive lighting on the black until it is pure black, measuring how far your camera will go in either direction while keeping the exposure on the grey constant.


Lol, I have no idea.


----------



## 480sparky (Mar 10, 2013)

jake337 said:


> Maybe three cards next to each other.  Black, grey, white.  Expose for grey and using additional light on the white until you lose detail and subtractive lighting on the black till it is pure black and measure it somehow.
> 
> 
> Lol, I have no idea.




I would think you would need a set of carefully calibrated cards, each one reflecting twice the amount of light as the next darker one, in order to make such a test valid.

Let's say you could find a set of 20 such cards.  Line 'em up and take a  photo.  Let's say 3 of the darkest ones come out totally black, and 3  of the lightest totally white.  Then I would accept a result of 14-stop  DR.
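The counting logic of that thought experiment is trivial to script. The middle readings below are invented purely for illustration:

```python
# 20 cards, each reflecting half the light of the one before; count how many
# record as neither pure black (0) nor pure white (255).
recorded = [255, 255, 255,                      # 3 brightest cards clipped
            250, 240, 220, 180, 130, 90, 60,
            40, 25, 15, 8, 5, 3, 2,             # 14 cards hold some detail
            0, 0, 0]                            # 3 darkest cards blocked up
dr_stops = sum(1 for v in recorded if 0 < v < 255)
print(f"{dr_stops}-stop dynamic range")         # 14-stop dynamic range
```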


----------



## pgriz (Mar 10, 2013)

480sparky said:


> Maybe I'm missing something, but isn't dynamic range defined as the difference in the amount of light in a scene from the brightest to the darkest?
> 
> And isn't an 18% gray card ALL the same brightness?
> 
> I think in order to test the DR of a camera, you'd need subjects of different brightnesses.  I'd guess your results would be exactly the same even if you used a black card or a white card.



Well, there are two ways "dynamic range" can be understood.  The first, which you allude to, is the range of brightness in a scene.  The second, which is what I was testing, is the camera's ability to record that range of brightnesses in a single exposure.  You could, as you suggest, get a series of cards in different shades of grey, each shade reflecting exactly 50% of the brightness of the one above it, or you can simulate the same thing with exposure, since each one-stop difference halves the amount of light reaching the sensor, which is what I did.

The point of the exercise for me is to understand how to use the information my incident and reflected light meter readings are telling me about the scene, so that I get the maximum amount of useful detail in my images.  Let's say my incident light meter reading is 1/100 at f/16 at ISO 100 (the sunny-16 rule), and a reflected reading off a highlight (say the top of a snowman) gives me 1/800 sec at f/16 at ISO 100 (i.e., 3 stops above ambient).  Will an exposure set to the ambient reading cause the highlight to blow out or not?  According to my test, the highlight will be just inside the boundary of being blown out.  If the reflected highlight reading were 1/1600 sec, then that highlight would definitely be blown out.  In that case, to preserve the highlight detail, I would have to under-expose my shot by 1 stop.
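The stop arithmetic in that example, for anyone following along: with aperture and ISO held fixed, the stop difference between two readings is just the log2 ratio of the shutter speeds.

```python
from math import log2

ambient = 1 / 100      # sunny-16 ambient reading, seconds
highlight = 1 / 800    # reflected spot reading off the highlight, seconds

stops_above = log2(ambient / highlight)
print(f"highlight is {stops_above:.0f} stops above ambient")  # 3 stops
# +3 is right at my measured clipping point; a 1/1600 s reading (+4) would
# need 1 stop of under-exposure to hold the highlight.
```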


----------



## 480sparky (Mar 10, 2013)

pgriz said:


> ..........The point of the exercise for me is to understand how to use the information my incident and reflected light meter readings are telling me about the scene so that I get the maximum amount of useful detail in my images.  Let's say my incident light meter reading is 1/100 at f/16 at ISO 100 (the sunny-16 rule), and a reflected reading off a highlight (say the top of a snowman) gives me a reading of 1/800sec at f/16 at ISO 100 (ie, 3 stops above the ambient).  Will an exposure set to the ambient light exposure cause the highlight to blow out or not?  According to my test, the highlight will just within the boundary of being blown out.  If the reflected highlight reading was 1/1600 sec, then definitely that highlight will be blown out.  In this case, to preserve the highlight detail, I would have to under-expose my shot by 1 stop.



Couldn't you just turn on the 'blinkies' function of your camera to show the blown-out highlights?  That, or zoom in on the snowman's head while watching the histogram?


----------



## Helen B (Mar 10, 2013)

I suspect that you could do it with two or three cards, if you didn't accept the accuracy obtained by simply changing the exposure via shutter speed. Say you had one that was 18% grey and one that was 36% grey. You could use those to check successive exposures when changing by one stop of shutter speed.

In practice you couldn't get 20 stops by reflection, so transmission is normally used - a Stouffer step wedge or, if you can afford it, a DSC Xyla. 

I'd be inclined to look at the raw data rather than JPEG, using RawDigger, for example.


----------



## Helen B (Mar 10, 2013)

pgriz said:


> 480sparky said:
> 
> 
> > Maybe I'm missing something, but isn't dynamic range defined as the difference in the amount of light in a scene from the brightest to the darkest?
> ...



Isn't that better called 'scene brightness range' or 'subject brightness range'? Conveniently, both have the same initials.


----------



## pgriz (Mar 10, 2013)

480sparky said:


> pgriz said:
> 
> 
> > ..........The point of the exercise for me is to understand how to use the information my incident and reflected light meter readings are telling me about the scene so that I get the maximum amount of useful detail in my images.  Let's say my incident light meter reading is 1/100 at f/16 at ISO 100 (the sunny-16 rule), and a reflected reading off a highlight (say the top of a snowman) gives me a reading of 1/800sec at f/16 at ISO 100 (ie, 3 stops above the ambient).  Will an exposure set to the ambient light exposure cause the highlight to blow out or not?  According to my test, the highlight will just within the boundary of being blown out.  If the reflected highlight reading was 1/1600 sec, then definitely that highlight will be blown out.  In this case, to preserve the highlight detail, I would have to under-expose my shot by 1 stop.
> ...



Yes, that would be one way.  But I don't always use live-view in doing the shots, so knowing where the clipping "would" happen is still useful info.



Helen B said:


> pgriz said:
> 
> 
> > 480sparky said:
> ...



I agree that "scene brightness range" would be a better name, but I've seen "dynamic range" used often when describing exactly that, so I assumed this was common usage.


----------



## runnah (Mar 10, 2013)

You lost me at math.


----------



## TCampbell (Mar 10, 2013)

I don't think there's any reason to use multiple colors.  I had wondered about the 18% gray ... thinking that the T1i may be calibrated to 12% gray. 

But in any case, to those thinking you'd need a card with multiple gray tones: I don't think you would.  Suppose you're shooting a scene that requires a lot of dynamic range: some objects are well lit and near white, and some are dimly lit and near black.  It's not as if the dark objects are dark because someone painted them a darker shade.  By simply reducing, in a known and controlled manner, the light gathered from a source of known, consistent brightness, you are effectively testing the dynamic range.

The results you're getting are pretty consistent with what I've seen.

Well done!

BTW, I happen to have a Sekonic L-758 light meter.  The meter can "profile" the dynamic range of a camera, but to do it they offer a few options.  The option they WANT you to use is a special gray card that costs about $140 (if I remember).  I noticed BorrowLenses.com will "rent" you the card for about $10 (more reasonable).  The card has, if I remember (it's on the shelf behind me somewhere), about 24 shades of gray.

They want you to photograph the card bracketing for normal, -3 and +3 stops.  You then import the images into some analysis software.  It does require that you take care to guarantee the lighting is even across the whole surface of the card (the card is fairly large) and consistent between all three exposures (which is trickier than you might think).  If you do it right, they produce a camera profile for YOUR camera, which is then uploaded into the light meter.

When you then sample the subject by taking multiple meter readings, the meter tells you what exposure to shoot and plots little arrows along a line showing the limits of what YOUR camera can handle, plus arrows showing the meter readings you sampled.  Basically, the meter helps you ensure that you'll get the full dynamic range of the image into the shot based on the capabilities of YOUR camera, and/or tells you if the image exceeds the dynamic range your camera can handle.


----------



## pgriz (Mar 10, 2013)

runnah said:


> You lost me at math.



What math?  I didn't run a regression curve-fit on that.


----------



## pgriz (Mar 10, 2013)

@ Tim Campbell:  I thought that the calibration was lower, since the 18% gray card was coming in at a value of 122 (when the meter was zeroed) instead of the 127 I was expecting.  So if the camera is actually calibrated to 12% gray, that would make sense.  As for the light meter, I've got the L-358, and it's working well for what I need.  Having done the curve, I feel pretty confident that I can put the highlight detail where I want it.

Another thing I discovered doing these tests is that the Magic Lantern spot-meter thingie (which is supposed to determine where in the range of 0-255 the spot will land) consistently underestimates the exposure.  On the other hand, running the spot meter on the captured image gives me the same values I get when the RAW file is uploaded to the computer and measured directly.  So the testing tells me that if the captured image shows a highlight detail with a value of (say) 240, this will match the image when uploaded to the computer.

I'm one of those people who needs to connect the dots between what I start with and what I end up with.  This has been a good learning exercise.  Thanks for your comments!


----------



## 480sparky (Mar 10, 2013)

TCampbell said:


> .........But in any case... to those thinking you'd need a card with multiple gray tones, I don't think you would.  Suppose you're shooting a scene that requires a lot of dynamic range ... some objects are well lit and near light and some are dimly lit and near black.  It's not as if the dark objects are dark because someone painted them a darker shade.  By simply reducing the light in a known and controlled manner that can be gathered from a source of a known consistent brightness you are effectively testing the dynamic range............



FWIW, I tried this exact same idea on all three of my cameras: a D60, a D7000 and a D600.  I came up with pretty much the same results for all three, and it didn't matter what ISO I used on any of 'em.  They all came back as 11-12 stops.  In short, using this method my D60 at ISO 1600 had the same dynamic range as my D600 at ISO 100.


----------



## Mully (Mar 10, 2013)

Digital grey card scale is 12% and not 18% (left over from the paper days).


----------



## pgriz (Mar 10, 2013)

@ sparky:  I've heard from some of my shooting buddies that they are getting better dynamic range with their Nikons compared to the Canons.  I'm hoping to replicate the test with their actual cameras to see if that's the case.


----------



## 480sparky (Mar 10, 2013)

Mully said:


> Digital grey card scale is 12% and not 18% (left over from the paper days).



I used a Robin Myers digital gray card.


----------



## Mully (Mar 10, 2013)

So do I ......I like it being plastic unlike the old Kodak cardboard ones


----------



## 480sparky (Mar 10, 2013)

Mully said:


> So do I ......I like it being plastic unlike the old Kodak cardboard ones



And it's solid color throughout.  So if it gets scratched or stained, a little sanding takes care of the problem!


----------



## TCampbell (Mar 11, 2013)

Mully said:


> Digital grey card scale is 12% and not 18% (left over from the paper days).



Yet _most_ gray cards are still 18%.  You have to check carefully if you want a 12% card.  You can find them, but the majority still seem to be 18% (and I've often wondered why, after all these years, 12% gray isn't the common one).

I suspect it's possibly because people use a gray card more commonly for white balance than they do for reflected light metering.


----------



## Helen B (Mar 11, 2013)

In no particular order:

The Sekonic calibration system for the 758 doesn't work on the raw files - it works on files that have had a tone curve applied, so it isn't great for determining absolute dynamic range - i.e. what can be done with the raw file. It works best for JPEGs. *The Sekonic system/software doesn't tell you the dynamic range of your camera without a lot of fiddling - which is easier to do without the software.* (I have the genuine Sekonic card if anyone wants more details about this system.)

My comment about using two known reflectivities is that it allows you a check when using shutter speed to vary exposures. If you trust your shutter speeds you don't need it - you can use one reflectivity.

It doesn't have to be 18%, 12% or 50% for a dynamic range test. It doesn't matter as long as it is consistent and evenly lit. (Of course for white balance it is better if it is brighter than 18%.)

One difference between doing a series of different exposures of the same card in the same lighting and one exposure of a multiple density step wedge is the effect of lens flare. With the latter technique flare from the brighter patches will affect the exposure of the darker ones. With a succession of field-filling exposures of one brightness value, the effect of flare will not be as significant. That is one of the reasons for the design of the DSC Xyla. A Stouffer step wedge, on the other hand, was not designed for camera tests.


----------



## hirejn (Mar 13, 2013)

It's easier just to use a Sekonic meter (L-478 or L-758) and the DTS software, but if you're a math or science junkie and have a lot of time, you could go old school. Of course the Sekonic does cost upward of $400, but if you don't already have a meter, it's worth it.


----------



## amolitor (Mar 13, 2013)

ISO for a digital camera is a somewhat vague notion, and varies a bit from manufacturer to manufacturer. It's surprisingly difficult to get a handle on, in the sense that Ctein had to really struggle with it. I would be very surprised to learn that a metered-correct grey card shot at precisely 127.

And anyway, isn't it going to be more like 2047 on a 12-bit sensor, and 8191 on a 14-bit sensor?
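The arithmetic behind those numbers, for what it's worth. Note that half of full scale is the middle by count only; in linear raw data it is just one stop down from clipping, nowhere near a metered mid-grey:

```python
# Full scale and half scale ("middle" by count) at common bit depths.
for bits in (8, 12, 14):
    full = 2 ** bits - 1
    print(f"{bits}-bit: full scale {full}, half scale {full // 2}")
# prints 255/127, 4095/2047, 16383/8191
```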

This talk of 127 and 255 in the context of raw files has me a bit confused.


----------



## pgriz (Mar 13, 2013)

I use DPP for preliminary processing of my images.  The histogram in both the camera and DPP ranges from 0 to 255.  When I mouse over the image, the cursor information shows the X,Y coordinates of the pointer position and the values of the three channels, each ranging from 0 to 255.  So those are the values I am reporting.


----------



## amolitor (Mar 13, 2013)

So I am guessing that what you're seeing is something like 'the value the software intends to place at the pixel when you convert it to JPEG'?

As long as you're not making any adjustments (or are making identical ones to all the images, maybe?) it seems like it ought to be ok.

Thanks for the clarification!


----------



## Josh66 (Mar 14, 2013)

pgriz said:


> The data determined by my experiment on the dynamic range of the camera allowed me to put together a characteristics curve, shown below:
> 
> View attachment 38380
> 
> ...


I've read that book (not sure which edition right now, but what you said sounds familiar) - if I remember correctly, when the author mentioned the "ideal, linear" curve - he also mentioned that we actually prefer the 'flawed' 's-shaped' curve.  The ideal curve was only ideal as far as it applies to making a 'perfect' sensor.  Correct me if I'm wrong...

edit
A linear curve would make for a muddy, low contrast image, and we like contrast (at least a little contrast) - or something like that...


----------



## pgriz (Mar 14, 2013)

O|||||||O said:


> pgriz said:
> 
> 
> > The data determined by my experiment on the dynamic range of the camera allowed me to put together a characteristics curve, shown below:
> ...



Yeah, you're right.  On pg. 254 of the 4th edition, the authors mention that while the RAW capture of the sensor response is more or less linear, RAW converters recreate the s-shaped "shoulder" curves to match the behaviour we "expect".  Unfortunately, each RAW converter does it a little differently, and the authors seem to favour converting RAW to DNG format, mainly because DNG processing is open while RAW processing is proprietary.

So based on that, what I was measuring was not the sensor response, but Canon's RAW conversion of the sensor response.  In other words, if I use a different RAW converter, I may end up with a different curve.
Hmmm.  That seems to echo the earlier post by Helen B.  So to find out what my dynamic range is, I also have to consider the RAW converter being used.  UGH.  This is getting complicated.


----------



## Garbz (Mar 15, 2013)

pgriz said:


> Unfortunately, each RAW converter does it a little differently, and the authors seem to favour the idea of converting the RAW to DNG format, mainly because the DNG format processing is open while the RAW processing is proprietary.



That makes no sense. There's nothing about the proprietary-vs-open aspects of DNG that affects the result. To convert to DNG you still need to apply a calibration curve and colour profile. You may as well convert to TIFF.


----------



## pgriz (Mar 15, 2013)

I am no expert on this, as I'm still trying to understand what is actually going on "under the hood", so to speak, but it appears that the argument for using DNG is that (according to my understanding of the authors of the book I referenced) the DNG format is the same as RAW but without the proprietary coding.  If, as you say, converting to DNG still requires applying a calibration curve and profile, then the question becomes: which calibration curve?  I will assume that each RAW converter has a different curve, and it would be interesting to understand how that curve is developed.

I have noticed that starting from the same RAW file and processing it with different brightness settings gave me more detail in both the highlight and shadow areas.  So there appears to be more information in the RAW file, some of which is discarded when it is converted to a format we perceive as the image.  Therefore, in principle, I should be able to produce three versions of the same RAW file and merge them using some form of HDR to get access to the true dynamic range of the sensor.  Does that approach make sense to you?


----------



## Parker219 (Mar 15, 2013)

Some pics in this thread would be nice!


----------



## Garbz (Mar 15, 2013)

pgriz said:


> it would be interesting to understand how that curve is developed.  I have noticed that starting from the same RAW file and processing it with different brightness settings gave me more detail in both the highlight and shadow areas.   So there appears to be more information in the RAW file, some of which is discarded when it is converted to a format we perceive as the image.  Therefore, in principle, I should be able to produce three versions of the same RAW file and merge them using some form of HDR to get access to the true dynamic range of the sensor.  Does that approach make sense to you?



What you're describing is exactly the workaround many people used for HDR programs before they natively supported opening RAW files. In reality you just need to compress the dynamic range of the image before loading it into the RAW program. A 16bit file has enough dynamic range for all the data, you just need to ensure when you convert from RAW to TIFF you don't clip any highlights or shadows and let the HDR program take care of the rest.
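That workaround can be sketched in a few lines. Here the three "developments" of the same RAW are simulated as gain-shifted linear arrays; a real workflow would load 16-bit TIFFs instead:

```python
import numpy as np

# Merge three exposure-shifted renders (-2, 0, +2 EV) of one scene back into
# scene-referred values, weighting each render away from its clipped extremes.
scene = np.array([0.001, 0.02, 0.18, 0.9, 3.0])    # true linear luminances
gains = {-2: 0.25, 0: 1.0, 2: 4.0}                 # EV shift -> exposure gain

renders = {ev: np.clip(scene * g, 0.0, 1.0) for ev, g in gains.items()}

# Weight = distance from the clipped extremes (0 and 1); then average the
# renders after scaling each back to scene-referred units.
weights = {ev: 1.0 - np.abs(r - 0.5) * 2.0 for ev, r in renders.items()}
num = sum(weights[ev] * renders[ev] / gains[ev] for ev in gains)
den = np.maximum(sum(weights.values()), 1e-9)
merged = num / den
print(merged)   # recovers the scene values, including the 3.0 highlight
```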

The argument for DNG doesn't make any sense because it doesn't solve the fundamental problem of the curve and colour calibration. If you use a program to convert the RAW to DNG, then you're at the mercy of that program's settings. So why not just use that program to edit your RAW file anyway? If your camera natively produces DNG, you get even less choice. 

As to how the curves are determined: it's typically either something representing gamma 2.2 or the L* correction curve. Actual contrast settings then vary between manufacturers, and so does the actual colour-space conversion. Adobe Labs provides a nice tool to let you customise your colour curve if you don't like the way Adobe Standard responds, and some manufacturers let you make custom DNG colour profiles (which is what all Adobe programs use to convert RAWs) based on calibration charts. It's all very unscientific.
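For concreteness, here is roughly what those two curves do to the same linear values. This is a sketch of the bare encoding functions; actual manufacturer curves layer contrast adjustments on top:

```python
# gamma22 is a plain power law; lstar is CIE lightness rescaled to 0-255.
def gamma22(y):
    return 255 * y ** (1 / 2.2)

def lstar(y):
    # CIE L*: cube-root law above the dark-region cutoff, linear below it
    L = 116 * y ** (1 / 3) - 16 if y > (6 / 29) ** 3 else 903.3 * y
    return 255 * L / 100

for y in (0.01, 0.18, 0.5, 1.0):
    print(f"linear {y:4.2f} -> gamma 2.2: {gamma22(y):5.1f}, L*: {lstar(y):5.1f}")
# 18% grey lands near 117 (gamma 2.2) and near 126 (L*)
```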


----------



## Helen B (Mar 15, 2013)

The conversion from the camera raw to a DNG file does not usually affect the data: no 'calibration curve' or profile is applied to the data, the numbers remain the same. You can't apply a camera profile (other than a very simple matrix one that only defines the pure red, green and blue points) to raw data until it has been debayered, because the influence of the profile on a pixel is determined by the three-dimensional location of the pixel's colour values, therefore the other two values have to be computed first.


----------



## Garbz (Mar 16, 2013)

Helen B said:


> The conversion from the camera raw to a DNG file does not usually affect the data: no 'calibration curve' or profile is applied to the data, the numbers remain the same.



Helen, I was under the impression that while the numbers remain the same, the metadata in DNG allows all adjustments and changes, including any information needed to render the file, to be embedded. Certainly, at the very least, all RAW adjustments allowed in ACR are embedded directly into the file, INCLUDING a custom camera colour profile. 

Without this information, how could the DNG file be decoded in the future if it's a bit-for-bit copy of the proprietary RAW information? My thought was that the RAW data is converted to an open, standard data format (thus necessitating some interpretation of the proprietary data). Also, the DNG spec allows for embedding the original RAW file, which they note will approximately double the file size. The existence of this feature makes me think it's not a straight copy of the data.


----------

