Why go full frame?

Yes, the resolution is different. Therefore, according to you, everyone who tries to compare the differences between FX and DX is wasting their time on apples-to-oranges comparisons -- it's just resolution. Posted this earlier: graph -- that's the same sensor producing two different plots for DR (FX and DX). Can you explain why the difference in resolution is causing the DR difference? If it's not resolution, what is it?
That was my question. If you're using the same pixels, only selecting part of the sensor's total, why would the signal-to-noise ratios be different?
 
And you got this answer from Smoke: "My understanding is that there is no difference in light gathering ability at the pixel level. The SNR decrease is because the total light collected on the frame is less." He is correct.

And you got the same answer from me: "So why does the low light performance degrade when you crop the sensor area? Because the SNR (signal noise ratio) worsens with a decrease in sensor area. The smaller sensor collects less total light."

That's the answer.

The DR graphs tell you the same thing. Here's the new Nikon Z8: switch the sensor to DX mode and the DR capacity drops. Go to Photons to Photos and check every FF camera listed there that also has a crop mode listed. They will all tell you the same thing: DR capacity drops when the sensor area is cropped. Otherwise it's the same sensor. DR is noise limited. For any sensor, the amount of usable DR is determined by the point where the usable signal is swamped by noise. For the smaller sensor area that occurs sooner, and the reason is that the smaller area collects less total light, which reduces SNR.
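The square-root behavior behind that SNR drop can be sketched in a few lines of Python. The photon count and areas below are illustrative assumptions, not measured Z8 values; only the ratios matter:

```python
import math

# Assumption: same exposure intensity (light per unit area) in FX and DX mode.
photons_per_mm2 = 1_000_000
fx_area_mm2 = 36 * 24   # full-frame area, 864 mm^2
dx_area_mm2 = 24 * 16   # DX crop area, 384 mm^2 (1.5x crop factor)

def shot_noise_snr(area_mm2):
    # Shot noise is Poisson: noise = sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
    n = photons_per_mm2 * area_mm2   # total photons collected over the area
    return math.sqrt(n)

ratio = shot_noise_snr(fx_area_mm2) / shot_noise_snr(dx_area_mm2)
print(f"FX SNR advantage over DX: {ratio:.2f}x")      # sqrt(864/384) = 1.5x
print(f"...about {math.log2(ratio ** 2):.2f} stops of shot-noise-limited DR")
```

The 1.5x SNR advantage (a bit over one stop) is in the same ballpark as the FX/DX gap those Photons to Photos plots show.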
 
Last edited:
Your answer doesn't make sense to me. Sorry.
In your question you focus on the pixels and using the same pixels, as if the difference must be due to or related to the pixels. It's not. The difference is a difference in noise in the image that is not sourced from the sensels/pixels. You may have previously heard that one reason FF cameras have better low light performance is that they have larger pixels, and possibly that idea stuck with you. That was true once but is now meaningless. The noise that comes from the sensels is read noise. Go back 15 years and we could see examples of read noise in our images, and see that larger sensels generated less read noise. Times change. Read noise in our modern sensors has been engineered down to a level that is insignificant.

The size of the sensels/pixels is responsible for a difference that you now can't see or even detect. So the fact that the same pixels are still being used when an FX sensor is placed in DX mode is pretty meaningless. Again we can look to sensor DR for verification: Canon R5/R6. The R6 is a 24 MP FF camera while the R5 is a 45 MP FF camera. The sensels/pixels in the R5 are only half the size of those in the R6. The smaller sensels/pixels should be noisier, but they're not: the DR plots for the two cameras overlay. Pixel size doesn't mean squat. It used to, but we fixed that.

Read noise was always a secondary source, less important than shot noise, which is the dominant source of noise in our images. The pixels and their size have nothing to do with shot noise. The noise is in the signal itself (the light), and the only way to reduce it is to strengthen the signal -- more exposure and/or more total signal collected.

Back to the cookie tins in the rain analogy. The rain is dirty (the light is noisy). When we collect more water the dirt in the water is less visible. As we collect less water the dirt in the water becomes more visible. When we collect more light (by total area) the noise in the light is less visible (and the pixels aren't involved). When we collect less light (by total area) the noise in the light is more visible (and the pixels aren't involved).

Below is quoted from Richard Butler's article in DPReview on noise: What's that noise? Part one: Shedding some light on the sources of noise

[my bold] "There are three factors that affect how much light is available for your sensor to capture: your shutter speed, f-number and the size of your sensor.

...at the same f-number (both cameras set to F2.8), the full frame camera will see four times as much light as a camera with a Four Thirds sensor, since it is exposed to the same light-per-unit-area but has a sensor with four times the area.

As a result, when you shoot two different sized sensors with the same shutter speed, f-number and ISO, the camera with the smaller sensor has to produce the same final image brightness (which the ISO standard demands) from less total light. And, since we've established that capturing more light improves your signal-to-noise ratio, this means every output tone from the larger sensor will have a better signal-to-noise ratio, so will look cleaner."
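The "four times as much light" figure in that passage is straightforward area arithmetic. A quick sketch, using nominal sensor dimensions (approximate, per common spec sheets):

```python
# Nominal sensor dimensions in mm -- approximations, not exact die sizes.
full_frame_area = 36.0 * 24.0    # ~864 mm^2
four_thirds_area = 17.3 * 13.0   # ~225 mm^2

# Same f-number => same light per unit area; total light scales with area.
light_per_mm2 = 1.0              # arbitrary units
ratio = (light_per_mm2 * full_frame_area) / (light_per_mm2 * four_thirds_area)
print(f"Full frame collects {ratio:.1f}x the total light of Four Thirds")
```

With these dimensions the ratio comes out around 3.8x, i.e. roughly the 4x the article quotes.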
 
Last edited:
Sorry. Your explanation is too convoluted for me to understand. You're all over the place. Can you explain in ten words or less why signals are noisier when you select only a portion of the pixels of the same sensor?
 
explain in ten words or less why signals are noisier when you select only a portion of the pixels of the same sensor?

More pixels = more data (good signal). Total data/noise = SNR.

That's 10 words 😊
 
OK Great. Let's start there: Does that require you to zoom in with the lens so the image subject is the same for the cropped selection as it was for the full sensor selection? In effect the resolution is less and that accounts for the SNR difference?

What happens if you do not zoom in and you just capture a portion of the original full subject? Will the SNR of those pixels be worse, better, or unchanged? If so, why?

Isn't the SNR pixel by pixel? What's happening that causes that to change if you select only a portion of the full sensor?

Just so I'm understanding what these cameras are doing: when you select crop mode, you're only using a smaller portion of the FF sensor. So for example, if the FF is 4000x3000, selecting crop will just capture, say, 3000x2250 from the center of the same FF sensor?
 
Does that require you to zoom in with the lens so the image subject is the same for the cropped selection as it was for the full sensor selection? In effect the resolution is less and that accounts for the SNR difference?
Zooming in with a lens doesn't change the total data collected on a full frame sensor.

Shot noise is generated by the scene and distributed across the scene/frame; it's present in all images. It's recorded when the sensor is exposed. If there's insufficient data presented to a pixel, the noise is the only thing recorded. More total light (good signal) collected overrides the visible noise and the SNR improves. When you click the shutter, any scene noise is locked into your ratio. That's why when you crop a full frame image in post, the per-pixel SNR stays the same.

understanding what these cameras are doing, when you select crop mode, you're only using a smaller portion of the FF sensor
Correct, but you have to remember that shot noise is present in all images. If you're collecting less good total data from the scene (fewer pixels contributing), that makes the ratio of noise to data higher. Echoing Joe's analogy: take two identical glasses. Put a large dollop of chocolate syrup (noise) in each. Fill the first glass half full of milk (data); fill the second glass full. Stir them up. In which glass is the chocolate flavor more prominent? The ratio of chocolate (noise) to milk (data) is greater in the first glass.
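The glasses analogy can be made numeric with a rough Monte Carlo sketch (the per-pixel photon count is an assumption for illustration). Each pixel's photon count fluctuates by roughly the square root of its mean, and the relative fluctuation of the frame total shrinks as more pixels contribute:

```python
import random

random.seed(7)
mean_photons = 100   # assumed expected photons per pixel (same exposure per pixel)

def relative_frame_noise(n_pixels):
    # Approximate Poisson shot noise with a Gaussian of sigma = sqrt(mean).
    counts = [random.gauss(mean_photons, mean_photons ** 0.5) for _ in range(n_pixels)]
    total = sum(counts)
    expected = mean_photons * n_pixels
    return abs(total - expected) / expected   # relative fluctuation of the total

# 'Full glass' (many pixels contributing) vs 'half-full glass' (fewer pixels):
print(f"relative noise, 40000 px: {relative_frame_noise(40_000):.4%}")
print(f"relative noise,  1000 px: {relative_frame_noise(1_000):.4%}")
```

In theory the relative noise scales as 1/sqrt(number of pixels contributing), so the larger patch typically comes out an order of magnitude cleaner here; any single random run will bounce around that trend.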
 
Last edited:
I still don't understand why selecting a portion of pixels from all the pixels changes the SNR of any particular pixel or group of them. Isn't the noise pixel-determined, so that larger formats with larger individual pixel elements have less noise and greater DR?
 
still don't understand why selecting a portion of pixels from all the pixels changes

See Joe's post 49 above. "at the same f-number (both cameras set to F2.8), the full frame camera will see four times as much light as a camera with a Four Thirds sensor, since it is exposed to the same light-per-unit-area but has a sensor with four times the area".

It's the combined total of all the pixels.
 
You're comparing two different sensors -- full frame vs micro 4/3. That answer doesn't address my question about the difference between crop mode and full mode with the same sensor.
Quote: "I still don't understand why selecting a portion of pixels from all the pixels changes the SNR of any particular pixel or group of them?".
 
OK Great. Let's start there: Does that require you to zoom in with the lens so the image subject is the same for the cropped selection as it was for the full sensor selection?
Yes. You're comparing the two, looking for differences in low light performance. Step one of that process is to take the same photo under controlled conditions, isolating as best as possible the variable you're testing for -- in this case sensor size.
In effect the resolution is less and that accounts for the SNR difference?

What happens if you do not zoom in and you just capture a portion of the original full subject? Will the SNR of those pixels be worse, better, or unchanged? If so, why?
This is the duuuuuh moment here. If you compare a DX sensor with itself, it's a good bet you'll find that it is itself. When you switch from FX to DX mode you crop the image, so you need to zoom out to take the same photo that you captured with the FX sensor. If you don't, what will you compare to the DX image? A cropped image from the FX sensor? That's a duuuuuuh.
Isn't the SNR pixel by pixel?
No. What constitutes the signal? Answer: light. What is the source of the noise that you're calculating a ratio for with the signal? Answer: light -- shot noise is part of the light. How many pixels there are doesn't change how much noise is in the light. Shot noise grows only as the square root of the total amount of light, so with more light (stronger signal) we see proportionately less noise. A stronger signal is BOTH more light per unit area (exposure intensity) AND more light collected in total over the total area. The size and number of pixels isn't playing a significant role. If the size and number of pixels mattered significantly here, why is the low light performance of the Canon R5 and R6 essentially the same? The R6's pixels are twice as big as the R5's, and the R5 has twice as many pixels as the R6. If SNR were pixel by pixel, wouldn't those two cameras have different SNR performance, because one has twice as many pixels or because one has pixels that are twice as big?
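A sketch of the R5/R6 point, with made-up exposure numbers: split the same sensor area into 24 million or 45 million pixels and the per-pixel light changes, but the total light, and therefore the image-level shot-noise SNR, does not.

```python
import math

sensor_area_mm2 = 36 * 24     # both bodies are full frame
photons_per_mm2 = 500_000     # assumed exposure intensity, illustrative only

total_photons = photons_per_mm2 * sensor_area_mm2   # identical for both bodies

for name, pixels in [("24 MP (R6-like)", 24_000_000), ("45 MP (R5-like)", 45_000_000)]:
    per_pixel = total_photons / pixels       # smaller pixels each catch less light...
    image_snr = math.sqrt(total_photons)     # ...but frame-level SNR only sees the total
    print(f"{name}: {per_pixel:,.0f} photons/pixel, image SNR {image_snr:,.0f}")
```

The photons-per-pixel figure nearly halves going from 24 MP to 45 MP, yet the printed image SNR is identical -- which is the overlaying-DR-plots result in miniature.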
What's happening that causes that to change if you select only a portion of the full sensor?
Change in sensor area changes the total amount of light collected which changes SNR.

Back to DPReview's articles: The effect of pixel size on noise. That article begins: "The total amount of light that goes to make up your image is the most important factor in determining image quality.

...f-numbers dictate the light intensity of an exposure (light per unit area). However, this ignores the sensor size. To understand how much total light is available to make up your image, you need to multiply this light per unit area by the area of your sensor.

Do this and you'll discover that sensor size is much more important than pixel size." [my bold]

Just so I'm understanding what these cameras are doing, when you select crop mode, you're only using a smaller portion of the FF sensor. So for example, if the FF is 4000x3000, selecting crop will just capture let's say 3000x2250 resolution from the center of the same FF sensor?
Yes.
 
I still don't understand why selecting a portion of pixels from all the pixels changes the SNR of any particular pixel or group of them? Isn't the noise pixel determined
No. Absolutely no. Neither the number nor the size of the pixels is responsible for the noise. The noise (shot noise) is in the light. The pixels can contribute read noise, BUT read noise has always been much, much less than the shot noise in the signal (light), and that was before we recently engineered read noise almost entirely away.
so that larger formats with larger individual pixel elements have less noise and greater DR.
Larger formats have less noise and greater DR because the formats are larger and collect more total light. Once again: why would these two cameras (Canon R5/R6) have essentially the same DR and low light performance when one of them has pixels twice the size of the other's? They have the same DR and low light performance because they have the same size sensors.

 

I think maybe you're looking at it on a pixel level. If you use crop mode on a full frame, the light collected on each individual pixel is identical to full-frame mode, but noise visibility is a function of the total light collected over the sensor area. More pixels contributing = more total light = less visible noise in the frame.
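Using the 4000x3000 vs 3000x2250 example from earlier in the thread (and an assumed per-pixel photon count), the per-pixel SNR is unchanged in crop mode while the frame-level SNR drops:

```python
import math

photons_per_pixel = 2_000   # assumed; identical in both modes (same exposure)

def frame_snr(width, height):
    total = photons_per_pixel * width * height   # total light over contributing pixels
    return math.sqrt(total)                      # shot-noise-limited SNR of the frame

full = frame_snr(4000, 3000)
crop = frame_snr(3000, 2250)
print(f"per-pixel SNR, both modes: {math.sqrt(photons_per_pixel):.1f}")
print(f"full SNR: {full:,.0f}  crop SNR: {crop:,.0f}  ({full / crop:.2f}x worse in crop)")
```

The crop keeps 6.75 of the 12 megapixels, so the frame SNR falls by sqrt(12/6.75), about 1.33x, even though no individual pixel got any noisier.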
 
OK, got it. But do you know why that happens? If the light on each pixel is the same, what's causing the "visible" noise overall? Is it in the amplifiers, the assembly, or what? And what do you mean by "visible" noise, as opposed to what other kind?
 
