# The Next Revolution in Photography Is Coming



## KC1 (May 9, 2016)

While it's nothing new, and almost everyone knows it today, it's still interesting to ponder. I grew up with film, as that was all that existed for most of my life. I have several digital and film cameras and almost always use digital because it is so easy. The convenience is what has sold me.
Food for thought:
Modern digital cameras use only 1/3 of the photons that reach the sensor; the other 2/3 of the digital image is interpolated by the processor in the conversion from RAW to JPG or TIF.
The image you produce is not the image you see but a computer-rendered version. There is no longer any such thing as a virgin, unmanipulated image, as all images are 2/3 computer interpolation. SOOC means "straight out of computer" in today's digital world.

Time Magazine Article


----------



## KmH (May 9, 2016)

KC1 said:


> . . . Modern digital cameras use only 1/3 of the photons that reach the sensor . . .


That is not an accurate statement.

Digital cameras use all of the photons that reach the image sensor to record *the luminosity* of the various portions of a scene.
But the pixels themselves cannot record color; they only record how much voltage each pixel developed as a result of how many photons stimulated the light-sensitive photodiode at the heart of the pixel. The more luminous a part of the scene is, the more photons it reflects toward the image sensor's pixels, and the more luminosity a pixel 'sees', the higher the voltage it develops.

Color is interpolated based on the arrangement of the colored cells of a color filter array (CFA) in front of the image sensor. The interpolation is done after each pixel's accumulated voltage has been converted from an analog value into a digital number a computer can handle.

The RGB color model requires 3 luminosity values for each pixel - one each for red, green, and blue. A single sensor element cannot simultaneously record these three intensities, so a color filter array must be used to selectively filter a particular color for each pixel. But each pixel 'sees' some luminosity value for each color of light unless no light at all is being reflected to that pixel on the image sensor.
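To make the sampling concrete, here is a small sketch (the RGGB layout and the NumPy code are illustrative, not any camera's actual firmware): each photosite keeps exactly one of the three channels of the scene.

```python
import numpy as np

def bayer_sample(rgb):
    """Reduce an H x W x 3 scene (H, W even) to a single-channel RGGB mosaic."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red at even row, even col
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green at even row, odd col
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green at odd row, even col
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue at odd row, odd col
    return mosaic

scene = np.random.rand(4, 4, 3)   # 16 pixels, so 48 color values in total
mosaic = bayer_sample(scene)      # only 16 values measured; 32 must be interpolated
```

The sensor records one number per pixel, while the finished RGB image needs three, which is where the "2/3 interpolated" framing comes from.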

The color intensity values *not* captured for each pixel are what get interpolated, by a software program called a Raw converter.
Digital cameras have a Raw converter application built in, or a Raw file can be converted outside the camera using the camera maker's Raw converter or one of the many other Raw converter applications available.
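A toy bilinear demosaic gives a feel for what a Raw converter does with the missing values. This is a simplified sketch, not any vendor's actual algorithm: each missing channel value is estimated by averaging the known samples of that channel in the pixel's 3x3 neighborhood (with wraparound at the edges, for brevity).

```python
import numpy as np

def demosaic_bilinear(mosaic):
    """Expand a single-channel RGGB mosaic into an H x W x 3 RGB image."""
    h, w = mosaic.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True  # red photosites
    masks[0::2, 1::2, 1] = True  # green photosites (even rows)
    masks[1::2, 0::2, 1] = True  # green photosites (odd rows)
    masks[1::2, 1::2, 2] = True  # blue photosites
    out = np.zeros((h, w, 3))
    for c in range(3):
        known = np.where(masks[..., c], mosaic, 0.0)  # measured samples only
        count = masks[..., c].astype(float)
        num = np.zeros((h, w))
        den = np.zeros((h, w))
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                num += np.roll(np.roll(known, dy, axis=0), dx, axis=1)
                den += np.roll(np.roll(count, dy, axis=0), dx, axis=1)
        out[..., c] = num / np.maximum(den, 1.0)  # neighborhood average
    return out
```

On real mosaics, naive bilinear interpolation like this produces zippering and false-color artifacts along edges, which is why production Raw converters use more sophisticated, edge-aware algorithms.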


----------



## 480sparky (May 9, 2016)

The 1/3 figure could be considered accurate if you assume 2/3 of the light striking a pixel is filtered out by the Bayer array.
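The counting behind that 1/3 figure can be written out (assuming the common RGGB layout, where green is sampled twice per quad):

```python
# Per RGGB quad of 4 photosites: 1 red, 2 green, and 1 blue sample.
quad = {"R": 1, "G": 2, "B": 1}
measured = sum(quad.values())          # 4 values actually recorded
needed = measured * 3                  # 12 values in the finished RGB quad
measured_fraction = measured / needed  # 1/3 - the rest is interpolated
```

Note this counts sampled color values, not photons; whether the same ratio describes photons absorbed is a separate question, since real filter passbands overlap.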


----------



## KmH (May 9, 2016)

Except that few colors are pure red, green, or blue.
Think about a very narrow band-pass filter, like a 0.9 angstrom Hydrogen-alpha filter often used to make astrophotographs of the Sun.
An angstrom is one ten-billionth of a meter.

The process of interpolating the colors of a digital photo by use of a color filter array in front of the image sensor and a software algorithm is known as demosaicing.



> The aim of a demosaicing algorithm is to reconstruct a full color image (i.e. a full set of color triples) from the spatially undersampled color channels output from the CFA. The algorithm should have the following traits:
> 
> - Avoidance of the introduction of false color artifacts, such as chromatic aliases, zippering (abrupt unnatural changes of intensity over a number of neighboring pixels) and purple fringing
> - Maximum preservation of the image resolution
> - ...


----------



## KC1 (May 9, 2016)

Thanks for reading and replying, glad you got something out of it. It's all old news as I said, but still interesting.


----------

