Sensor size and depth of field

Re: Sensor size and depth of field
CrispyBee wrote: ↑Fri Mar 08, 2024 11:37 am
EDIT: for me the most striking examples were the hardware-based pixel-binning CCD sensors made by DALSA. There was a clear difference in noise profile and dynamic range between the "full res" images and the ones made using pixel binning, even when comparing both at the same (lower) resolution (downscaled).

That's interesting. Is there a test of that somewhere on the internet?
Re: Sensor size and depth of field
Lou Jost wrote: ↑Fri Mar 08, 2024 2:05 pm
That's interesting. Is there a test of that somewhere on the internet?

Honestly I don't know. These backs were from before 2010; I think the last back with pixel binning was the P65+ (60 MP, 53.9 x 40.4 mm).
I remember some camera magazines from back then mentioning the back and showing some tests, but I also did my own when I got mine almost 10 years ago (time flies), in which the binned images (Sensor+) showed a noticeable advantage in dynamic range and an overall cleaner image.
I do have some LCC files stored - not useful for dynamic range observations but very good for comparing colour noise differences - all sorts of noise reduction were turned off and no sharpening was applied:
regular ISO 400 resized to 50%:

ISO 400 using Sensor+ (pixel binning) at 100%:

To me the Sensor+ image is much cleaner and more homogeneous, with far fewer yellow spots/patches, and the edges were much better - that and the increased dynamic range were pretty good reasons to use it at/beyond ISO 400. And as I said, no noise reduction is applied.
Re: Sensor size and depth of field
rjlittlefield wrote: ↑Thu Mar 07, 2024 12:06 pm
Generally I prefer to avoid explanations that are based on pixel size because that opens another can of worms.
That page of Clark's is exactly what I was thinking of when I said "another can of worms".
He spends the top half of that page discussing the equivalent images principle, noting that "noise in good modern digital cameras is dominated by photon counting statistics, not other sources", eventually summarizing the explanation with some experimental images, which are captioned with this:
Conclusions: When aperture diameters are the same and pixels on the subject in the output presentation are the same, the Etendue's are equal and the images show the same depth of field, and the same S/N regardless of sensor size or focal length (A and B).
...
Further, if the ISO is set so that both cameras digitize the same electron range, then the brightness in the image files will also be the same. The images will be virtually identical (A and B)!

I agree completely with all that, and I consider it to be the most important message.
Then in the next paragraph he starts talking about dynamic range versus pixel size, eventually writing that "larger pixels have greater dynamic range at all ISOs, beating smaller pixel cameras."
Please note the conflict: in the first part he says "same S/N regardless of sensor size", which with same pixels on subject also means same S/N regardless of pixel size, but in the second part he says that "larger pixels have greater dynamic range".
Assuming that both these statements are true, then it must be that dynamic range is not the same as signal-to-noise ratio. Fair enough, it's not. Dynamic range reflects how dark an area can be and still show detail. That's determined by how the sensor responds to just a few photons, and if you read the fine print in the first half, Clark also writes that "In fact, all modern digital cameras tested in the last few years have been shown to be photon noise limited at signal levels above a few tens of photons." [italics added] In other words, at signal levels below a few tens of photons, noise is affected by factors other than photon counting, and that noise can depend on pixel size in ways that are not compensated by adjusting to constant area on subject.
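To put rough numbers on that threshold, here is a minimal sketch of per-pixel S/N with and without a read-noise term; the 3 e- read-noise figure is just an assumed, illustrative value, not a measurement of any particular camera:

```python
import math

# Photon (shot) noise vs. read noise, a minimal illustration.
# The 3 e- read-noise figure is an assumed example value, not a measurement.
def snr(signal_e, read_noise_e=3.0):
    """S/N of a pixel collecting signal_e photoelectrons plus Gaussian read noise."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

for signal in (10, 30, 100, 1_000, 10_000):
    shot_only = math.sqrt(signal)        # photon-noise-limited S/N
    print(f"{signal:>6} e-:  shot-noise-only S/N = {shot_only:6.1f},  with read noise = {snr(signal):6.1f}")
```

Above a few hundred electrons the two columns are essentially identical; only down near a few tens of electrons does the read-noise term, and hence pixel design, start to matter, which is exactly the region that sets dynamic range.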
So sure, I'm happy to grant that dynamic range may be greater with larger pixels.
But do I care about that sort of dynamic range? It turns out that I do not. That's because, in my shooting, there is seldom a situation where the darks I care about get down into that few-tens-of-photons range. If nothing else, scattered light (veiling glare) adds a pedestal to the raw histogram, which pushes up the darks into the range where S/N (signal-to-noise ratio) determines what I can see. So for me, the usable dynamic range is actually determined by S/N, which is limited by photon noise and not pixel size per se.
Did I mention a can of worms?
--Rik
Re: Sensor size and depth of field
"To me the Sensor+ image is much cleaner and more homogeneous, far fewer yellow spots/patches and the edges were much better - that and the increased dynamic range were pretty good reasons to use it at/beyond ISO 400. And as I said, no noise reduction is applied."

But this supports my main point: four small pixels taken together have the same noise as one large pixel that covers the same area as those four small pixels. There is no noise penalty to smaller pixels. Rik's post above makes the same point, though noting that this only applies to the photon noise we have been discussing; other kinds of pixel noise don't necessarily follow this rule.
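A quick Poisson simulation makes the photon-noise part of this concrete; the per-pixel flux and trial count below are arbitrary example values, and read noise is deliberately left out:

```python
import numpy as np

# Photon (shot) noise only: compare one large pixel against four small pixels
# covering the same area. Flux and trial count are arbitrary example values.
rng = np.random.default_rng(0)
flux_small = 500                     # mean photons per small pixel per exposure
trials = 200_000

large = rng.poisson(4 * flux_small, trials)                      # one pixel, 4x the area
small_summed = rng.poisson(flux_small, (trials, 4)).sum(axis=1)  # four pixels, summed

for name, data in (("one large pixel", large), ("four small pixels, summed", small_summed)):
    print(f"{name:>26}:  mean = {data.mean():7.1f}   S/N = {data.mean() / data.std():5.1f}")
```

Both come out at S/N of roughly sqrt(2000), about 45; a difference only appears once per-pixel read noise is added, which is the caveat noted above.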
Re: Sensor size and depth of field
rjlittlefield wrote: ↑Fri Mar 08, 2024 7:45 pm
Please note the conflict: in the first part he says "same S/N regardless of sensor size", which with same pixels on subject also means same S/N regardless of pixel size, but in the second part he says that "larger pixels have greater dynamic range".
Assuming that both these statements are true, then it must be that dynamic range is not the same as signal-to-noise ratio.

I don't see a conflict; he just wrote that increasing the sensor dimensions (without changing anything else) wouldn't improve S/N at the individual-pixel level, as the pixel-to-subject size ratio stays the same.
This would also explain the still observable reduction in dynamic range when using cameras in a crop-sensor mode, as you'll have to use a lower magnification for the same subject, effectively reducing the pixel size on your subject:

Lou Jost wrote: ↑Fri Mar 08, 2024 10:15 pm
But this supports my main point: four small pixels taken together have the same noise as one large pixel that covers the same area as those four small pixels. There is no noise penalty to smaller pixels.
I think it's easier to see when the images are directly side by side:
Sensor+ 100% / regular 100%

Sensor+ 100% / regular 50%

To me the Sensor+ has both less luma and less chroma noise - and as I mentioned before the dynamic range is also better.
However, this has to be seen in the context of CCD technology, which has always suffered from pretty high readout noise even at the lowest ISO setting.
And as I mentioned before, ISO 400 was still pretty "tame", so perhaps it's not the ideal comparison; ISO 800+ was much worse, but I don't have any full-res images shot at ISO 800+, only Sensor+ mode images.
Re: Sensor size and depth of field
But isn't binning just averaging four pixels together? (That's the way my monochrome cameras implement binning.) If so, you are showing that noise from individual small pixels averages out when they are combined. I suspect that your method of reducing the resolution of the full-resolution image (in the second column of your second test) is using complex algorithms to do the job, and thus amplifying accidental correlations between pixels. If it is a general-purpose reduction algorithm, it can't just average each block of four pixels, because such an algorithm would not work for arbitrary reduction percentages.
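For reference, strict 2x2 binning is nothing more than averaging disjoint blocks of four pixels, which can be written directly; the test frame below is synthetic shot-noise data, not the Phase One files discussed above:

```python
import numpy as np

# Strict 2x2 binning: average each disjoint block of four pixels. A general-purpose
# downscaler (bilinear, bicubic, Lanczos) instead uses overlapping, weighted
# neighbourhoods so that it can hit arbitrary scale factors.
def bin2x2(img):
    h, w = img.shape
    h, w = h - h % 2, w - w % 2                      # drop any odd edge row/column
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(1)
frame = rng.poisson(100, (1000, 1000)).astype(float)  # synthetic shot-noise-only frame
print(f"pixel noise before: {frame.std():.2f}   after 2x2 binning: {bin2x2(frame).std():.2f}")
```

Averaging four independent pixels halves the random noise; structures that are already correlated over areas larger than a 2x2 block are left untouched.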
Re: Sensor size and depth of field
Lou Jost wrote: ↑Mon Mar 11, 2024 10:15 am
But isn't binning just averaging four pixels together?

From my understanding it's combining the well readout, effectively increasing the full-well capacity considerably (though it's of course at a rather high ISO for a CCD sensor, so the noise floor is pretty high anyway).
If it were just averaging the pixels, the blotchiness and colour noise would still appear in the same way; it wouldn't just go away due to averaging, as these blotches are far larger and span more than 4 pixels.
Even when using Photoshop with different algorithms it won't get rid of the blotches; that would require some extra work.
Re: Sensor size and depth of field
CrispyBee wrote: ↑Mon Mar 11, 2024 10:22 am
From my understanding it's combining the well readout, effectively increasing the full-well capacity considerably (though it's of course at a rather high ISO for a CCD sensor, so the noise floor is pretty high anyway).

Combining the well read-out is essentially averaging those four pixels, isn't it? Yes, the colors should go away, on average. There should be no correlation between neighboring blocks of four pixels. On the other hand, the Photoshop algorithm is probably looking for patterns, and may interpret accidental clusters of color as real patterns rather than averaging them out. I don't really know. But in this application, a strict binning algorithm should produce smoother results than an algorithm that is sensitive to patterns.
Edit: I read more about this now in astrophotography forums. It seems some sensors bin before read-out, while others bin after read-out. In the latter case, there is more read noise than in the first case. It could be that your sensor is indeed reducing noise by binning before readout. However, I still think that Photoshop's reduction algorithms are not neutral and are amplifying accidental patterns.
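A minimal sketch of that before/after-readout difference, with made-up signal and read-noise values rather than real Dalsa or Phase One figures:

```python
import numpy as np

# Binning before readout (charge summed on-chip, read once) vs. after readout
# (each pixel read separately, then summed). Signal and read-noise values are
# illustrative assumptions, not measured Dalsa/Phase One figures.
rng = np.random.default_rng(2)
signal_e, read_e, n = 200, 12.0, 100_000

photons = rng.poisson(signal_e, (n, 4))                       # charge in four small pixels

hw = photons.sum(axis=1) + rng.normal(0, read_e, n)           # read noise added once
sw = (photons + rng.normal(0, read_e, (n, 4))).sum(axis=1)    # read noise added four times

for name, data in (("binned before readout", hw), ("binned after readout", sw)):
    print(f"{name:<22}:  S/N = {data.mean() / data.std():5.1f}")
```

With read noise counted once instead of four times, the pre-readout (hardware) binning comes out measurably cleaner, consistent with the Sensor+ behaviour described above.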
Re: Sensor size and depth of field
Lou Jost wrote: ↑Mon Mar 11, 2024 5:44 pm
Combining the well read-out is essentially averaging those four pixels, isn't it?
...
Edit: I read more about this now in astrophotography forums. It seems some sensors bin before read-out, while others bin after read-out. In the latter case, there is more read noise than in the first case. It could be that your sensor is indeed reducing noise by binning before readout.

I thought you meant averaging as in combining the pixel values post-readout; on the Dalsa sensors this happens before readout.
Re: Sensor size and depth of field
My brain hurts...
Re: Sensor size and depth-of-field
Well—I see I'm almost one year late to this discussion. Yet I'd like to add a few cents ...
.
rjlittlefield wrote: ↑Mon Mar 04, 2024 3:14 pm
What I personally like to keep constant is the framing, the final image size, and the observer's tolerance for blur in the final image. Probably we can all agree that those are good things to keep constant.
With those factors held constant, then DOF depends only on the angular width of the cone of light that gets through the entrance pupil of the lens and forms the image.
The angular width of that cone depends in part on f-number. If you stop down then the angle gets narrower, giving you more DOF and simultaneously more diffraction blur. Opening wider gives you less of both.
That tradeoff between DOF and diffraction blur is the same for all lens lengths and sensor sizes, assuming again that you hold constant the framing and the final image size. Sensor size does not matter.

This is correct, technically. But unfortunately it is delusive and highly capable of being misunderstood. Sensor size does not matter under the given preconditions. Yet, sensor size is one of the four primary parameters depth-of-field does depend on (the other three being focus distance, focal length, and relative aperture a.k.a. f-number).
.
See? There you are ... you planted wrong ideas in Tony's head.
Consider this: If I want to travel a short distance, maybe a couple of yards up to, say, half a mile or so, then I'd walk. To travel a longer distance up to, say, 10 miles, I'd ride my bicycle. For a distance of 20 or 30 miles I'd get into my car and drive it on local roads; for distances up to, say, 300 miles, on the highway. For even longer distances I'd hop on an airplane. So I'd keep the travel time more or less constant ... within several minutes to a few hours (but not days or weeks).
Would it be fair to say travel time does not depend on the means of locomotion? Because it's always the same? Of course not. But that's what you just did, Rik, when you said, 'sensor size does not matter.'
As a matter of fact, sensor size does matter. Larger sensors, per se, yield more depth-of-field. Why? Because the larger sensor requires less magnification for the same final image size, hence will get away with larger circles-of-confusion. So depth-of-field is proportional to the linear sensor size, i. e. the length of its diagonal.
But then, in order to get the same framing from the same distance, the larger sensor requires a longer focal length. And focal length, in turn, also matters. Longer focal length yields less depth-of-field—and more less (remember the 'inner child'?) than the larger sensor yields more. Equivalent focal length is proportional to linear sensor size but depth-of-field is inversely proportional to the square of the focal length.
So when using the larger sensor in combination with an equivalent (rather than equal) focal length, you'll end up with less depth-of-field (at the same distance and the same f-number). To compensate, the longer lens on the larger sensor needs to be stopped down more. And as it turns out, the apertures required to arrive at the same depths-of-field for different sensor sizes are those that correspond to equal entrance pupil diameters.
So yes—you can get the same framing and the same depths-of-field and the same diffraction blur at the same time with different sensor sizes. But that doesn't mean sensor size wouldn't matter. Just the contrary is true: you need to carefully accommodate both focal length and aperture to the sensor size because sensor size matters.
... and oh, by the way: pixel size has absolutely nothing to do with all of this.
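A rough numerical check of that equivalence is straightforward with the ordinary thin-lens DOF formulas; the sensor diagonals, focal lengths, distance and f-numbers below are just example figures, with the circle of confusion taken as diagonal/1500:

```python
import math

# Thin-lens depth-of-field check: same framing, same subject distance, two formats.
# Sensor diagonals, focal lengths, distance and f-numbers are example values;
# the circle of confusion is taken as diagonal / 1500.
def total_dof(f, N, coc, s):
    """Total DOF in mm for focal length f, f-number N, CoC coc, subject distance s (all mm)."""
    near = s * f**2 / (f**2 + N * coc * (s - f))
    far_den = f**2 - N * coc * (s - f)
    far = s * f**2 / far_den if far_den > 0 else math.inf
    return far - near

s = 2000.0                                       # subject distance, mm
formats = (("Micro Four Thirds", 21.6, 25.0),    # (name, diagonal mm, focal length mm)
           ("full frame", 43.3, 50.0))           # same angle of view at distance s

for name, diag, f in formats:
    for N in (4.0, 8.0):
        print(f"{name:>18}  {f:5.1f} mm  f/{N:<3.0f}  entrance pupil {f / N:5.2f} mm  "
              f"DOF {total_dof(f, N, diag / 1500, s):6.0f} mm")
```

At the same f-number the larger format shows roughly half the depth of field, while the two rows that share the same 6.25 mm entrance pupil come out nearly identical, which is the equal-entrance-pupil rule stated above.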
Re: Sensor size and depth-of-field
That's not correct. Focal length has nothing to do with depth of field at all; it depends only on magnification and aperture.
You'll get the same DOF with a 25mm at f8 as you get with a 250mm at f8 as long as you keep the magnification the same.
You ignored the fact that you also need to keep the subject the same size on the print (and the print size the same as well).
In order to do that you have to fill the frame in a similar way, meaning you have to use a higher magnification while taking the image (for example by using a longer focal length while keeping the distance the same = higher magnification).
Otherwise the subject will fill a smaller proportional area of the image and then you have to crop the image and use the same magnification while printing as you would with the smaller format. But that would also result in the same DOF as you've not changed anything...sooo that doesn't work.
So yes, you'd have to stop down in order to get the same DOF, but not because of the focal length but because of the magnification while taking the image.
Naturally that has some consequences for macro photography, mostly by increasing the effective aperture and hence reducing the light and resolution the sensor has to work with, which pretty much balances things out up to a certain point. (Which is where pixel size comes into play - and yes, it does matter.)
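For the effective-aperture point, the usual approximation for a symmetric lens (pupil magnification assumed equal to 1) is N_eff = N x (1 + m); a quick sketch:

```python
# Effective aperture vs. magnification for a symmetric lens (pupil magnification
# assumed to be 1):  N_eff = N * (1 + m).  f/8 is just an example nominal aperture.
N = 8
for m in (0.5, 1.0, 2.0, 4.0):
    print(f"m = {m:3}x at nominal f/{N}:  effective aperture ~ f/{N * (1 + m):.0f}")
```

The further you have to push magnification to fill a larger frame, the higher the effective f-number climbs, with the corresponding loss of light and diffraction-limited resolution.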
So yeah, you could stop down on larger sensors... or you could open up the aperture on smaller sensors. In theory. In practice that's not always possible, advisable or desired.
Or in other words: you can't scale up/down wherever you want to in order to further your argument, or else you end up flying on your bike a couple of yards.
Re: Sensor size and depth-of-field
That's a common misconception among photomacrographers (who always shoot at high image magnifications) and wildlife photographers (who always shoot telephoto lenses).
.
O rlly?
Let's consult a depth-of-field calculator. Shooting a 25 mm lens on a 35-mm-format camera (COC = 0.0288 mm) at f/8 and a distance of 1.3 m (approx. 4' 3") will yield a magnification of 1:50 (or 0.02×) and a depth-of-field of 1.49 m. Switching to a 250 mm lens on the same camera and shooting the same subject from a distance of 13 m, again at f/8, will yield the same magnification of 1:50 but a depth-of-field of 1.18 m—significantly less than 1.49 m, don't you think? But wait until we go to 1:100, or 0.01×!
For a magnification of 1:100, we need a distance of 2.55 m (approx. 8' 4") with our 25 mm lens on a 35-mm-format camera. At f/8, that's pretty close to the hyperfocal distance so we'll get a huge depth-of-field of more than 30 m (approx. 1.3 m to 32 m). With a 250 mm lens at f/8, however, we'd have to shoot from a distance of 25.5 m and get a depth-of-field of merely 4.69 m.
So much for 'focal length has nothing to do with depth-of-field at all.' To combine focal length and focus distance into one parameter, image magnification, is a simplification and an approximation that is applicable only when the depths-of-field are small in relation to the object field's width and height—in other words, when shooting at small angles-of-view and/or high image magnifications—but not in general.
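The same ordering falls out of the standard thin-lens DOF formulas; the exact figures won't match any particular online calculator to the centimetre, since calculators differ in how they measure distance and which approximation they use, but the pattern is the same:

```python
import math

# Thin-lens DOF for the four cases above: 25 mm vs 250 mm at f/8, CoC 0.0288 mm,
# at matched magnifications of roughly 1:50 and 1:100.
def dof_limits(f, N, c, s):
    near = s * f**2 / (f**2 + N * c * (s - f))
    far_den = f**2 - N * c * (s - f)
    far = s * f**2 / far_den if far_den > 0 else math.inf
    return near, far

for f, s in ((25, 1300), (250, 13000), (25, 2550), (250, 25500)):   # mm
    near, far = dof_limits(f, 8, 0.0288, s)
    total = "effectively to infinity" if math.isinf(far) else f"{(far - near) / 1000:.2f} m"
    print(f"f = {f:3} mm at {s / 1000:5.2f} m:  DOF ~ {total}")
```

At equal magnification the shorter lens gives noticeably more depth of field once the zone of sharpness is no longer small compared with the subject distance, which is the regime described in the paragraph above.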
.
Oh please! I didn't ignore anything. I suggest you re-read my previous post, but more carefully than you did the first time.
.
CrispyBee wrote: ↑Thu Feb 13, 2025 4:33 am
... you also need to keep the subject the same size on the print (and the print size the same as well). In order to do that you have to fill the frame in a similar way, meaning you have to use a higher magnification while taking the image (for example by using a longer focal length while keeping the distance the same = higher magnification).

Right. What makes you miss the fact that I did say exactly the same?
.
Duh! At the same distance—which is our precondition here, remember?—changing magnification and changing focal length are the very same thing. So your statement ('not because ... but because ...') doesn't make any sense.
.
At the same exposure, the smaller sensor will produce a noisier image because it will capture a smaller amount of light, according to its smaller area. At equivalent exposures, i. e. same shutter speed and smaller aperture for the larger sensor with longer focal length for same depths-of-field, both sensors will capture the same amount of light. Hence equal noise levels. Pixel sizes have no place in these relationships. Sensor sizes do.
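The same-exposure case can be put in rough numbers; the photon density below is an arbitrary example value:

```python
import math

# Same exposure (same photons per unit area) landing on two different sensor sizes.
# The photon density is an arbitrary example value.
photons_per_mm2 = 50_000
sensors = (("Micro Four Thirds", 17.3, 13.0), ("full frame", 36.0, 24.0))   # width, height in mm

for name, w, h in sensors:
    total = photons_per_mm2 * w * h
    print(f"{name:>18}: area {w * h:6.0f} mm^2  total photons {total:.2e}  "
          f"whole-image shot-noise S/N ~ {math.sqrt(total):,.0f}")
```

At the same exposure the larger sensor simply collects about four times as many photons over its area, roughly doubling the whole-image shot-noise S/N; stopping down by the crop factor (the equivalent-exposure case) brings the photon totals, and hence the noise, back into line.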
.
Correct. That's where real-life conditions and limitations interfere with theory. And that's why larger image formats usually yield better image quality and why smaller formats still are the better choice for photomacrography in many cases.
But that's not my point. I just wanted to refute the false notion expressed earlier in this thread that sensor size didn't matter in terms of depth-of-field. It does!
.
Huh! That's funny, thanks for the laugh.


Re: Sensor size and depth of field
Well, you didn't keep the magnification at the same level.
Naturally we point to magnification as that is the result of both focal length and distance to subject, which is the correct way of doing it - saying it's just due to a longer focal length is plainly wrong, nothing more nothing less. This is just imprecise and misleading language on your end.

The problem with your entire argument is that it only looks at parts of the process, not the whole, which is why some misconceptions remain - for example:
You disregarded the pixel size, yet claim:
"At the same exposure, the smaller sensor will produce a noisier image because it will capture a smaller amount of light, according to its smaller area. At equivalent exposures, i. e. same shutter speed and smaller aperture for the larger sensor with longer focal length for same depths-of-field, both sensors will capture the same amount of light. Hence equal noise levels. Pixel sizes have no place in these relationships. Sensor sizes do."
The fact that the larger sensor doesn't automatically get "more" light eludes you. With the same exposure it won't get any benefit at a higher magnification; it's more likely the opposite. And are you completely ignoring the resolution of the sensors you're comparing, or is it just convenient not to take that into account? Because that's where you get the relationship between pixel size, pixel density and noise level (at least when you compare similar sensors from a similar generation).
With film it was much easier to say "larger film = smaller grain" - but with digital sensors, generation and technology play a huge part. If you were to take the CCD sensor of a Phase One IQ160 (or P65+) and compare its noise levels to a Sony A7R V, the A7R V would run circles around it even though it's a much smaller sensor. Even the larger 100 and 150 MP IQ backs don't seem to have that great a noise floor when compared to a far smaller GFX/Hasselblad 100 MP sensor.
But generally speaking, you won't be able to tell the difference between an image shot with a GFX100 and one shot with a Sony A7R V when pixel-peeping - as long as the lens is good and the shooting parameters are good, it's really close to a wash.
However, in the macro/micro range the lens selection mostly dictates the format. Some of the best lenses for high magnifications don't even cover full frame completely, let alone larger sensors, so that excludes those options right away. Trying to get those to cover larger sensors with good edge and corner performance is not a great experience. And sure, the required additional magnification isn't great either.