
Encouragement to shoot raw

While technically correct, the misleading point comes when you convert that 14-bit file to an 8-bit JPEG for display or print: it's truncated to 8 bits, discarding the extra color data and detail. I think this is where Ysarex was going with the yardstick/ruler analogy.
But the key is that with RAW and its greater radiometric resolution, there is more potential dynamic range for editing the midtones, shadows, and light areas of the photo. With JPEG and only 0-255 possible RGB values, dark areas can get crushed and bright areas blown out, relative to 0-16,383 with 14-bit RAW.
 
In each pixel, there is a red, green, blue value.
No. In an RGB image (JPEG) each pixel has a red, green and blue value. In a raw file each pixel has only a red or green or blue value.
The JPG file format is an 8-bit format, so the maximum possible RGB range is only 0-255.

A 14-bit RAW file format has a maximum possible RGB range of 0-16,383.
Again, they are not like units, and they don't have comparable RGB units. Consider the analogy of a staircase. From the bottom of the staircase to the top, a total distance is spanned. You're accustomed to staircases in which each step is the same height as all the other steps. In a raw file each step is an equal height to all the other steps; in a JPEG, however, the steps are of variable height. They don't compare.

In fact, raw files from different cameras don't compare. Camera A may produce a 14-bit raw file that spans a greater total distance than camera B's 14-bit raw file. The steps in camera B's raw file are smaller -- not like units. And an 8-bit JPEG could span the same distance as camera A's raw file. Thinking staircase, there is no equivalent stair height between them all -- not like units.
The technical term for this is "radiometric resolution" from the science of satellite sensors.
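The 0-255 vs 0-16,383 ranges being discussed follow directly from the bit depth. A minimal sketch (generic arithmetic, not tied to any particular camera):

```python
# Sketch: the maximum code value for a given bit depth is 2^bits - 1.
def max_value(bits):
    """Highest value representable with the given bit depth."""
    return 2 ** bits - 1

for bits in (8, 12, 14, 16):
    print(f"{bits}-bit: 0-{max_value(bits):,}")
# 8-bit gives 0-255 (JPEG); 14-bit gives 0-16,383 (typical raw).
```

Note this only counts the number of steps; as the staircase analogy above says, it tells you nothing about the total distance spanned or whether the steps are equal in height.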
 
Anybody got a printer that prints good photos even in JPEG? All these scientific details are confusing!
 
Anybody got a printer that prints good photos even in JPEG? All these scientific details are confusing!
Even in JPEG? Are you suggesting something's wrong with the JPEG format? The JPEG format is perfectly sufficient for printing and works great as a final archive format. Canon and Epson make excellent printers.
 
No. In an RGB image (JPEG) each pixel has a red, green and blue value. In a raw file each pixel has only a red or green or blue value.

Again, they are not like units, and they don't have comparable RGB units. Consider the analogy of a staircase. From the bottom of the staircase to the top, a total distance is spanned. You're accustomed to staircases in which each step is the same height as all the other steps. In a raw file each step is an equal height to all the other steps; in a JPEG, however, the steps are of variable height. They don't compare.

In fact, raw files from different cameras don't compare. Camera A may produce a 14-bit raw file that spans a greater total distance than camera B's 14-bit raw file. The steps in camera B's raw file are smaller -- not like units. And an 8-bit JPEG could span the same distance as camera A's raw file. Thinking staircase, there is no equivalent stair height between them all -- not like units.
Doesn't the variable height on JPEGs depend on how much compression you've selected?
 
Doesn't the variable height on JPEGs depend on how much compression you've selected?
No, it's from the applied tone curve. The data in a JPEG is adjusted to a standardized target output -- ideally a sheet of white paper with ink applied to create the image. The data in a raw file is linear but ultimately must likewise be adjusted to the same final target.
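The tone-curve point can be illustrated numerically. A simple gamma 1/2.2 curve stands in here for a camera's actual (proprietary) tone curve; the exact curve differs, but the effect is the same:

```python
# Sketch: why JPEG "steps" vary in height. Raw data stores linear light;
# a JPEG stores tone-curve-encoded values. Decoding one code step back to
# linear light shows the steps are not equal in size.
def decode(code, gamma=2.2):
    """Map an 8-bit code value back to linear light in [0, 1]."""
    return (code / 255) ** gamma

# Linear light spanned by one code step near black vs near white:
dark_step = decode(11) - decode(10)
bright_step = decode(251) - decode(250)
print(f"step near black: {dark_step:.6f}, step near white: {bright_step:.6f}")
# The step near white covers far more linear light than the step near
# black -- the staircase has steps of very different heights.
```

This is the variable step height being discussed: it comes from the tone curve, not from the amount of JPEG compression selected.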
 
Consider the analogy of a staircase. From the bottom of the staircase to the top a total distance is spanned. You're accustomed to staircases in which each step is the same height as all the other steps. In a raw file each step is an equal height to all the other steps however in a JPEG the steps are of variable height. They don't compare.
Think you left out part of the analogy. The full analogy, from Photo Stack Exchange:

"Black point and White point settings determine what is the highest linear value that will be considered "solid black" and what is the lowest linear value that will be considered "solid white". All values below the black point are converted to "0". All values above the white point are converted to the maximum value. For 8-bit, the maximum value is 2^8 - 1, or 255. For 16-bit, the max value is 2^16 - 1, or 65,535. Note that the black and white point are the same in the raw values whether using 8-bit, 16-bit, or an even higher internal bit depth for use during processing. The difference between 8-bit and 16-bit at this point is a difference in the size of each step between consecutive values.

Think of it like a staircase: The black point is how many feet above ground level the bottom step is. The white point is how many feet above ground level the top step is. The bit depth is how many steps the staircase has. If we have a staircase that is 256 feet from the bottom to the top, at 8-bits (0-255 are 256 distinct values) each of the 256 steps would be one foot in height. If we have a staircase that is the same 256 feet from bottom to top, at 16-bits (0-65,535, or 65,536 distinct values) we would have 256 steps per foot! These small gradations are important when we do the next step."
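The black/white point mapping described in that passage can be sketched like this (the black and white points here are hypothetical, not from any real raw file):

```python
# Sketch: clip linear values to the black/white points, then scale to the
# output bit depth. The span (black to white) is the same either way; only
# the step size between consecutive output codes changes with bit depth.
def apply_points(linear_value, black, white, bits):
    """Clip to [black, white], then scale to the output bit depth."""
    max_code = 2 ** bits - 1
    v = min(max(linear_value, black), white)
    return round((v - black) / (white - black) * max_code)

black, white = 512, 15000                     # hypothetical points
print(apply_points(400, black, white, 8))     # below black point -> 0
print(apply_points(15500, black, white, 8))   # above white point -> 255
print(apply_points(15500, black, white, 16))  # same clip -> 65535
```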
 
Think you left out part of the analogy. The full analogy, from Photo Stack Exchange:

"Black point and White point settings determine what is the highest linear value that will be considered "solid black" and what is the lowest linear value that will be considered "solid white". All values below the black point are converted to "0". All values above the white point are converted to the maximum value. For 8-bit, the maximum value is 2^8 - 1, or 255. For 16-bit, the max value is 2^16 - 1, or 65,535. Note that the black and white point are the same in the raw values whether using 8-bit, 16-bit, or an even higher internal bit depth for use during processing. The difference between 8-bit and 16-bit at this point is a difference in the size of each step between consecutive values.

Think of it like a staircase: The black point is how many feet above ground level the bottom step is. The white point is how many feet above ground level the top step is. The bit depth is how many steps the staircase has. If we have a staircase that is 256 feet from the bottom to the top, at 8-bits (0-255 are 256 distinct values) each of the 256 steps would be one foot in height. If we have a staircase that is the same 256 feet from bottom to top, at 16-bits (0-65,535, or 65,536 distinct values) we would have 256 steps per foot! These small gradations are important when we do the next step."
But they got it technically wrong. Fair to assume they're referring to an RGB image (JPEG) when they say: "If we have a staircase that is 256 feet from the bottom to the top, at 8-bits (0-255 are 256 distinct values) each of the 256 steps would be one foot in height." If each of the 256 steps is one foot in height, then the steps are all equal in size. In an 8-bit RGB image (JPEG) the steps are never equal in size -- some are bigger than others and some are smaller than others. Raw files (12-14 bit typically) are linear, with all steps equal in size. RGB images (JPEGs, specifically 8-bit) are non-linear, with steps of varying size.
 
It is interesting how the original post generated so many comments. The OP's original post was to encourage people to try RAW. The OP considers the results worth the extra effort. This is a valid opinion.

Then the discussion turned to the merits of each process, and the fact that you can do significantly more image manipulation with RAW than with JPEG. This is a fact. But working with RAW is more complicated than working with JPEG. This is also a fact.

The choice comes down to the necessity of using RAW in everyday photography. It is a nice option to have available. However, if you can get the pictures you envisioned with JPEG, do you really need to enhance them with RAW?
 
It is interesting how the original post generated so many comments. The OP's original post was to encourage people to try RAW. The OP considers the results worth the extra effort. This is a valid opinion.

Then the discussion turned to the merits of each process, and the fact that you can do significantly more image manipulation with RAW than with JPEG. This is a fact. But working with RAW is more complicated than working with JPEG. This is also a fact.
I don't see that last one. Maybe if you're willing to accept the SOOC JPEG as-is and never edit JPEGs, but the fact is most people who work with camera JPEGs also edit them later. And for them, working with JPEG may be more complicated.
To start with, there's all the JPEGy camera stuff you have to do before tripping the shutter. I get to ignore all that. Exposure is also a more complicated task when shooting JPEG. So behind the camera, saving only raw files, I have a much simpler task to get to click.
Back at the computer if you're going to edit JPEGs that's more difficult than processing raw files. It's harder to fix something broken than to just do it right in the first place.
The choice comes down to the necessity of using RAW in everyday photography. It is a nice option to have available. However, if you can get the pictures you envisioned with JPEG,
Well, there's the rub then. When I take a photo, I see no reason why I wouldn't want to do the best job possible and have that photo meet my IQ expectations. SOOC JPEGs and/or edited JPEGs don't meet my IQ expectations. And since I think saving and processing raw files is in fact easier and less complicated, it's a no-brainer for me.
do you really need to enhance it with RAW.
 
Here is the best illustration of one of the big advantages of RAW over JPEG.
I shot this one this morning and purposely didn't bump up the exposure to compensate for the backlit bird.
[image: the straight-out-of-camera shot, backlit bird underexposed]

Because this was shot RAW with a D850 in crop mode, I had approx. 20 MP to work with. The lens is a very affordable 70-300 4.5-5.6 @ 300mm, f/5.6, 1/640, ISO 64. I was able to use the RAW-only features of Lightroom Classic to bring out the shadow details of the bird and tame down the sky background, and then used the (RAW-only) AI enhance feature to bring out the detail. There was so little noise at ISO 64 that it wasn't an issue; frame added with Ps. I doubt I could have done this if I had shot it JPEG.
[image: the edited result]
 
So which version did you actually see with your eyes?
Is what you saw with your eye important, or the potential you see in what you can make of it? A good image is not usually exactly what you see with your eye. That being said, some are limited by what they saw with their eye, and some are willing to use a "raw" as intended and look at it as raw material with endless possibilities, and understand their goal is to create the best image possible, not the best representation of what was there. Why, in an artistic medium, would you allow reality to control your imagination?

Then on the other side of this, a few years ago (I've now deleted the test files from my Flickr) I did a JPEG of a sunset and tried to match the JPEG output with the raw file and post-processing software. I discovered that for that image, the JPEG software, using a clarity filter, was better than what I could do with raw and post-processing. I know, it's blasphemy, but I suspect every case is different. I shoot raw and JPEG on different memory cards now. The in-camera JPEG engine was able to micro-manage some parts of the image that I couldn't duplicate in post.

For example, a fall colours image: I'm driving along Hwy 60 and come across a beautiful group of maples, red and yellow in colour, and it impacts me enough to stop the car and take a photo. I want to produce the same impact as the scene that got me out of my car. I see that the impact in the image off the camera is not the same as the impact that got me to stop the car. A bit of tweaking -- saturate and increase the luminance of the red and yellow channels -- and voilà, there's the impact that got me out of the car. I don't try to emulate what I saw, I try to emulate what I felt. Somehow, you have to make it about not what you saw, but what caught your eye. And that's often a small part of what was actually there.
2024-10-20-SAT-Fall-colours-Taaracks-3 by Norm Head, on Flickr

If I can do that it's a successful image.

In the example right above this post, several stops of exposure have been added to the raw; I'd like to see what could have been done with a proper exposure. JPEG does not hold up as well unless your initial exposure is bang on. If you have to add a couple of stops of exposure, it doesn't have the colour depth needed. But if the exposure is bang on, the results should be fine. So the author is both right and wrong. With the right exposure (bracketing would have helped), the JPEG might have been up to scratch. With a poor exposure, as above, raw is your friend. By the way, you need to overexpose 1-2 stops when your subject is backlit. That's why you had to increase exposure in your raw processing. For the JPEG, you would have had to overexpose in camera as much as you did with the raw image in post. Someone recently suggested that if you don't know these things, the "what you see is what you get" preview in the viewfinder of a mirrorless camera might have helped you.
 
But they got it technically wrong. Fair to assume they're referring to an RGB image (JPEG) when they say: "If we have a staircase that is 256 feet from the bottom to the top, at 8-bits (0-255 are 256 distinct values) each of the 256 steps would be one foot in height." If each of the 256 steps is one foot in height, then the steps are all equal in size. In an 8-bit RGB image (JPEG) the steps are never equal in size -- some are bigger than others and some are smaller than others. Raw files (12-14 bit typically) are linear, with all steps equal in size. RGB images (JPEGs, specifically 8-bit) are non-linear, with steps of varying size.
Are we talking luminosity steps or color? My understanding from the article quoted is that the maximum value for white and the lowest value for black is the same between RAW and a JPEG. Also my understanding is that a RAW file is only linear in a totally unaltered/unprocessed state. Once "any" processing occurs (including that applied by the camera's algorithm during save), it's no longer linear.
 
Are we talking luminosity steps or color? My understanding from the article quoted is that the maximum value for white and the lowest value for black is the same between RAW and a JPEG. Also my understanding is that a RAW file is only linear in a totally unaltered/unprocessed state. Once "any" processing occurs (including that applied by the camera's algorithm during save), it's no longer linear.
Luminosity. The maximum value for white and the lowest value for black are the same in the output target (print), which is the same for both. A raw file is unprocessed data, and its upper and lower limits are sensor saturation and the noise floor. When they talked about raw having white and black points, they were talking about processed data, which is then no longer raw data. What they got wrong, which I was pointing out, was the claim that in an RGB image (JPEG, 8-bit) the stored data is linear -- each step equal to each other step -- it is not.
 
Luminosity
So the data in the RAW file is a linear function of the amount of light in the scene, where scaling the input by a factor means the output is always scaled by the same factor. That would be why white balance, exposure, shadow, and highlight recovery adjustments can be made without affecting the linear nature of the file, correct?
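That linearity property can be sketched with hypothetical numbers (the channel values and white-balance gain below are illustrative only):

```python
# Sketch: in linear raw data, scaling the input scales the output by the
# same factor, which is why a white-balance correction is just a
# per-channel multiplication.
raw_red = [100, 400, 1600]   # hypothetical linear red-channel values
wb_gain = 1.8                # hypothetical white-balance multiplier

balanced = [v * wb_gain for v in raw_red]
print(balanced)  # [180.0, 720.0, 2880.0]

# Ratios between values are preserved -- the hallmark of linear data.
# This would not hold after a tone curve (e.g. gamma) is applied.
assert balanced[1] / balanced[0] == raw_red[1] / raw_red[0]
```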
 
