# Fun technical challenge: Take a photo of an image on your display and compare the two.



## spacediver (Dec 2, 2015)

Hope this is an appropriate place to post this. If not, please advise on where I should post this.

Nutshell version: Load up these two images on your computer display, and take a photo of your display. Try to render the photo so that it matches the original image as closely as possible. Use whatever method you prefer.

First image

Second image

Here is a test pattern that may help you calibrate your exposure for optimizing dynamic range. You can probably use this for getting the white balance correct.

Long version:

I'm quite new to photography, and have been exploring it through a very technical route. I've been using my Canon EOS 450D for scientific imaging of CRT displays. One of the things I've been doing is accurately reproducing colors, by turning my camera into a colorimeter that is highly accurate for my display.

A fun way to test this is to take a photo of a nice wallpaper on your monitor, and try to render it so that it looks close to how the original looked on your screen. Comparing the images also gives a unique insight into the quality of image reproduction - you can literally look at the original image and the photo, side by side on the same screen.

Here's a breakdown of my process:

First, I used a colorimeter (X-Rite i1Display Pro) to measure the XYZ values of the three primaries of my display. I then took a RAW image of each primary, and using Matlab, I measured the average value of each of the three channels in the RAW image (each of which corresponds to a different filter on the color filter array overlaying the sensor). Using this information, I created a matrix that converts values from the RAW image into XYZ values. 
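
The calibration step above can be sketched in a few lines of Python (the author worked in Matlab; the matrix math is the same). All the numbers below are invented for illustration; the actual colorimeter readings and raw channel averages are not given in the post.

```python
def mat3_inverse(m):
    """Invert a 3x3 matrix given as nested lists, via adjugate / determinant."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [
        [ (e * i - f * h), -(b * i - c * h),  (b * f - c * e)],
        [-(d * i - f * g),  (a * i - c * g), -(a * f - c * d)],
        [ (d * h - e * g), -(a * h - b * g),  (a * e - b * d)],
    ]
    return [[x / det for x in row] for row in adj]

def mat3_mul(p, q):
    """3x3 matrix product."""
    return [[sum(p[r][k] * q[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

# Hypothetical colorimeter XYZ readings for the R, G, B primaries
# (one column per primary).
XYZ_primaries = [
    [0.44, 0.31, 0.18],   # X of the red, green, blue fields
    [0.22, 0.63, 0.08],   # Y
    [0.02, 0.10, 0.95],   # Z
]
# Hypothetical mean camera raw responses to the same three fields
# (again one column per primary).
cam_primaries = [
    [0.80, 0.15, 0.05],
    [0.25, 0.70, 0.10],
    [0.03, 0.12, 0.85],
]

# We want M such that M * cam_i = XYZ_i for each primary i,
# so M = XYZ_matrix * cam_matrix^-1.
M_cam_to_xyz = mat3_mul(XYZ_primaries, mat3_inverse(cam_primaries))
```

Because the display primaries behave (near enough) additively, a matrix fitted to the three primaries alone then maps any cameraRGB triple from that display into XYZ.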

I then determined which shutter speed would make best use of the camera's dynamic range, relative to the dynamic range of my display. I used a test pattern on my display that contained 16 bars ranging from black to peak white. This also allowed me to calculate the normalizing factor for the Y value (which represents relative luminance).
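
A minimal sketch of that exposure check, assuming made-up numbers: given the mean raw level of each of the 16 bars at some shutter speed, accept the exposure if the peak-white bar sits high in the raw range but safely below clipping, and derive the Y-normalizing factor from that bar. The clip level and headroom fraction are assumptions, not figures from the post.

```python
RAW_CLIP = 2**14 - 1  # 14-bit raw full scale on the EOS 450D
HEADROOM = 0.95       # keep peak white safely below clipping (assumed margin)

def usable(bar_means):
    """True if the brightest bar is exposed high but not clipped."""
    peak = max(bar_means)
    return 0.5 * RAW_CLIP < peak < HEADROOM * RAW_CLIP

# Hypothetical mean raw values of the 16 bars at one candidate shutter speed.
bars = [120 + i * 950 for i in range(16)]

if usable(bars):
    # Normalizing factor so the peak-white bar maps to Y = 1.0.
    y_norm = 1.0 / max(bars)
```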

I then chose two pretty images, and took RAW photos of them. Using the cameraRGB to XYZ matrix I had created earlier, I subsampled the RAW image, and transformed each 2x2 square of sensel data into a single X value, a single Y value, and a single Z value. I then transformed these values into linear sRGB values, and then into gamma corrected sRGB values (using 2.4 as the exponent, since that is what my display is calibrated to). Note that this approach is different from conventional demosaicing algorithms: those attempt to preserve the full sensel resolution, whereas my approach trades half the resolution for an increase in image accuracy.
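
The per-tile pipeline above can be sketched as follows, assuming an RGGB Bayer layout and an invented cameraRGB-to-XYZ matrix `M` (the author's measured matrix is not given). The XYZ-to-linear-sRGB matrix is the standard D65 one; the encoding uses a pure 2.4 power law rather than the piecewise sRGB curve, matching the display calibration described.

```python
# Standard XYZ -> linear sRGB matrix (D65 white point).
XYZ_TO_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

# Hypothetical cameraRGB -> XYZ matrix from the calibration step.
M = [
    [0.50, 0.30, 0.15],
    [0.25, 0.65, 0.10],
    [0.02, 0.10, 0.90],
]

def apply3(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[r][k] * v[k] for k in range(3)) for r in range(3)]

def tile_to_srgb(raw_tile, y_norm=1.0):
    """Convert one 2x2 raw tile [[R, G], [G, B]] to a gamma-encoded sRGB pixel."""
    r = raw_tile[0][0]
    g = 0.5 * (raw_tile[0][1] + raw_tile[1][0])  # average the two green sensels
    b = raw_tile[1][1]
    X, Y, Z = apply3(M, [r, g, b])
    lin = apply3(XYZ_TO_SRGB, [X * y_norm, Y * y_norm, Z * y_norm])
    # Clamp to [0, 1], then encode with a pure 2.4 gamma.
    return [max(0.0, min(1.0, c)) ** (1 / 2.4) for c in lin]
```

Running this over every 2x2 block of the raw mosaic yields a half-resolution image whose colors come directly from the measured transform rather than from a demosaicing heuristic.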

One thing I may also experiment with is implementing some dithering to preserve as much of the original 14 bit data as I can. But for now, it works fairly well.

Given that my display is fairly well calibrated to sRGB standards, the original image and my photo should look similar on any display. For what it's worth, the original images are quite a bit better than my photos when the originals are viewed at full size. Also, my display is a Sony GDM-FW900.

Here are the original images and the photos of them. I've resized them for viewability on this forum.


----------



## Scatterbrained (Dec 2, 2015)

Why do I get the feeling that you're working in a self reinforcing circle here?


----------



## spacediver (Dec 2, 2015)

Scatterbrained said:


> Why do I get the feeling that you're working in a self reinforcing circle here?



A self reinforcing circle would be directly converting from sensor data to RGB, using the following steps:

CameraRGB ---> XYZ (based on colorimetric measurements)
XYZ ---> mydisplayRGB (based on a colorimetric profile of my display).

The fact that I used the standard XYZ ---> sRGB conversion matrix for the second step, and that it still worked, means that my display was already very well calibrated to sRGB standards. A self reinforcing circle would be more accurate, and would be the ideal way to do this sort of thing. But you'd still need colorimetric data to do this. I suppose you could use trial and error to find a linear transform from CameraRGB to mydisplayRGB - you could probably write some code to figure out the solution without ever worrying about using a light measuring device.
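
That "trial and error" idea can be sketched as a least-squares fit: display some test patches, record the camera's raw RGB for each, and iterate toward the 3x3 matrix that maps cameraRGB onto the known (linearized) displayRGB values - no light-measuring device involved. All patch data below is invented for illustration.

```python
# (displayRGB, cameraRGB) pairs for some hypothetical test patches.
patches = [
    ([1.0, 0.0, 0.0], [0.80, 0.22, 0.04]),
    ([0.0, 1.0, 0.0], [0.15, 0.71, 0.11]),
    ([0.0, 0.0, 1.0], [0.05, 0.09, 0.86]),
    ([0.5, 0.5, 0.5], [0.50, 0.51, 0.505]),
]

# Start from the identity and descend the summed squared error.
M = [[1.0 if r == c else 0.0 for c in range(3)] for r in range(3)]
lr = 0.1
for _ in range(5000):
    grad = [[0.0] * 3 for _ in range(3)]
    for disp, cam in patches:
        pred = [sum(M[r][k] * cam[k] for k in range(3)) for r in range(3)]
        err = [pred[r] - disp[r] for r in range(3)]
        for r in range(3):
            for k in range(3):
                grad[r][k] += 2 * err[r] * cam[k]
    for r in range(3):
        for k in range(3):
            M[r][k] -= lr * grad[r][k]
```

With enough independent patches this converges to the same transform a direct matrix solve would give, which is the "self reinforcing circle" done deliberately.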

This is all assuming I'm grasping what you mean by "self reinforcing circle".

one other thing - the method I used only works when imaging my particular display. The transformation matrix from CameraRGB to XYZ is specific to the spectral properties of my display primaries. If I wanted to accurately image a different display, I'd have to re-measure my camera's response to that display's primaries.


----------



## Scatterbrained (Dec 2, 2015)

What I meant is that from the outside looking in it appears as if you're working in circles.  Chasing your tail so to speak.  Granted I apparently don't get exactly what it is you're doing here.   I've photographed my monitor and not noticed any color difference.    If I understand what you're doing, you're calibrating the monitor, then trying to calibrate the camera (which records a 14bit raw file) to "accurately" photograph the 8 bit image displayed on the monitor.  I guess I just don't get the "why" here, as I've photographed displays and never noticed a color issue in the recorded image.    Usually the color issues I see are from color space mismatches or un-calibrated displays.


----------



## spacediver (Dec 2, 2015)

Two main motivations:

1: The technical challenge of accurately reproducing scene color. Take a photo of this image, and compare the original with your photo. It's unlikely that the chromaticities will be as close as they would have been had you used a method like mine, or some other sort of processing. The actual chromaticities (of a "default" image) will be a function of the interaction between the spectral signatures of your display primaries, and the spectral transmission functions of the three RGB camera filters.

2: Perhaps more interestingly, it offers a very cool way to directly experience the image reproduction quality of your camera. You can take a picture of an image, and then load the photo you took and compare it side by side. The medium of the original scene (the display) is the very same medium through which the photo _of that scene_ is being viewed.
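
One way to make the chromaticity comparison in point 1 concrete: convert each patch's XYZ to xy chromaticity coordinates, which factor out brightness, and look at the distance between the original's and the photo's coordinates. The XYZ values below are invented for illustration.

```python
def xy_chromaticity(X, Y, Z):
    """Project an XYZ triple onto the xy chromaticity plane."""
    s = X + Y + Z
    return (X / s, Y / s)

# Hypothetical measurements of the same red patch on-screen vs. in a photo.
original = xy_chromaticity(0.44, 0.22, 0.02)
photo = xy_chromaticity(0.45, 0.24, 0.03)

dx = photo[0] - original[0]
dy = photo[1] - original[1]
distance = (dx * dx + dy * dy) ** 0.5  # small means the hues closely match
```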


----------



## Scatterbrained (Dec 3, 2015)

Seeing the image reproduction quality of a camera is better served by taking an actual photograph, I would think. How accurately does it render a computer screen? About as accurately as it reproduces the real world, I'd imagine.

Accurately reproducing scene color for _what_ exactly? I think this is where you're losing me. I use a color checker to profile the light with every shoot. I use a high quality professional graphics monitor and keep it calibrated with a Spyder Elite. Accurately reproducing color isn't hard to do. What you seem to be doing here is some kind of "Photoception": you're taking a photograph, displaying it on a monitor, taking a photograph of the photograph that is displayed on the monitor, and then displaying the photograph of the display of the photograph against the display of the photograph. I'm just curious as to why? You could create a generic color profile for your camera by shooting a color card under a light source with a high CRI, and then images of properly calibrated displays should look just fine. Uncalibrated displays will look just as uncalibrated. Beyond all this, I'm curious why you need to be able to take photos of displays in such a manner?


----------



## spacediver (Dec 3, 2015)

Scatterbrained said:


> Seeing the image reproduction quality of a camera is better served by taking an actual photograph, I would think. How accurately does it render a computer screen? About as accurately as it reproduces the real world, I'd imagine.




Taking a photograph of the "real world" is certainly one way to assess reproduction quality. However, I believe that taking an image of a display offers a _unique_ opportunity here.

First, this method removes stereopsis as a variable: The real world is extended in three dimensions, whereas a photograph is limited to two dimensions. If one had a three dimensional imaging device, and holodecks were a real thing, then sure, one could take a "holo image" and go into a holodeck and compare it against the original scene (or one could use 3d glasses).

Taking a photo of an image on a display also removes depth of field as a variable, as the "scene" exists on a single depth plane.

Second, it is hard to do a side by side comparison between a photograph of a real world scene, and the original scene. The human visual system is most adept at comparing things when those things are presented close together in time and space (e.g. we are excellent at judging whether two patches of color are the same or different when they're presented right by each other, but our discrimination thresholds rise dramatically when those two patches are separated by a period of time, in which case one must use memory, rather than incoming visual information, to form these judgments). Even in a holodeck scenario, it is hard to imagine how one would present a side by side comparison of the original scene and the holodeck recreation, for simultaneous comparison. Not to mention, the original scene may have changed in subtle or dramatic ways by the time the comparison is made. The approach I'm suggesting does allow a side by side comparison.

Third, a decent camera is likely capable of capturing the full dynamic range of a computer display, so the rendered photo and the original image do not vary as dramatically as they would between a photo and a real world scene. This also means that the eyes are under the same state of adaptation when viewing the original image and the photo of that image.

Fourth, the original scene is something that everyone participating has access to. So we can all compare our results against each other. Granted, the quality is going to be dependent upon the quality of the display, but that's also part of the fun. But at least those who participate in the challenge get to do a side by side comparison on their own displays, and see a direct comparison for themselves.



Scatterbrained said:


> Accurately reproducing scene color for _what_ exactly? I think this is where you're losing me. I use a color checker to profile the light with every shoot. I use a high quality professional graphics monitor and keep it calibrated with a Spyder Elite. Accurately reproducing color isn't hard to do. What you seem to be doing here is some kind of "Photoception": you're taking a photograph, displaying it on a monitor, taking a photograph of the photograph that is displayed on the monitor, and then displaying the photograph of the display of the photograph against the display of the photograph. I'm just curious as to why? You could create a generic color profile for your camera by shooting a color card under a light source with a high CRI, and then images of properly calibrated displays should look just fine. Uncalibrated displays will look just as uncalibrated. Beyond all this, I'm curious why you need to be able to take photos of displays in such a manner?



So, for me, the motivating factor that led to this challenge is that one of the things I'm doing is imaging the phosphor mask on my CRT. It is important to me that I reproduce these colors with as much fidelity as possible, so that people who view the images of these phosphors can be confident that they are seeing an accurate reproduction of the colors (assuming that they are viewing it on a calibrated display).

But quite apart from this practical consideration, I am curious to see how faithfully other people can reproduce colors on a display, via photographic imaging, without going through all the steps I did. For example, go ahead and take a photograph of that red-green-blue pattern I linked to in my previous post, and do a side by side comparison. It may be the case that they match perfectly, using your color card method as a calibration reference. And that's great!

But have you ever done this with the express intent of carefully making a color comparison judgment? If so, then this thread isn't for you, as you'll probably find it boring. On the other hand, it would be informative to showcase your technique with the rendered image as proof of its efficacy. For those who aren't used to this sort of thing, this challenge may offer a cool learning experience.

And the photoception description isn't quite accurate: the original image doesn't have to be a photo - it could simply be the red-green-blue pattern I created and linked to earlier. It's just more informative when you use a more complex image, because then you get to assess a wider array of parameters, such as sharpness, color, dynamic range, overall image quality, etc.


----------



## dennybeall (Dec 5, 2015)

Take your camera and go out and take some pictures........................


----------



## Designer (Dec 5, 2015)

Then there's that whole "backlighted" vs. "front lighted" issue.


----------



## spacediver (Dec 5, 2015)

Designer said:


> Then there's that whole "backlighted" vs. "front lighted" issue.



What do you mean?


----------

