Sensor size: how does it matter?


Epidic
Posts: 137
Joined: Fri Aug 04, 2006 10:06 pm
Location: Maine

Post by Epidic »

Rik, yes, the math works. I read the link, and it appeared that the focal length was scaled with respect to sensor size while magnification was ignored. I could have missed something about magnification, but it seems EI just needs to consider focal length. In that case, as soon as you start to focus, images are not equivalent. But I must have missed where the author talks about that. You, quite rightly, know that magnification changes things.

I guess my beef would be that it only shows where things intersect, but not what happens when they diverge, which is more likely to happen. It seems to imply that there are no differences between systems. Your original post does the same, as it assumes that "equivalent images" is the only proper way to approach the problem. I guess the only conclusion is that imaging systems can be similar and they can be different, but is that news?

It has been an interesting thought problem. I am a little settled in my ways where the final use and conditions for a system are unknown. I have spent the week thinking about using this idea of EI in practical terms. Yes, it can work, but it is cumbersome. It is kind of a one-trick pony.

Sorry about the post above - it is a little incoherent. I was typing quickly as I had an appointment, and I did not want you to think I had forgotten this.
Will

rjlittlefield
Site Admin
Posts: 23597
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Epidic wrote: "Your original post does the same as it assumes that 'equivalent images' is the only proper way to approach the problem."
Hmmm... Don't see that myself. Quite the contrary, it looks to me that I called out a fair number of things you could do differently with one camera versus the other.

Again, the real point of "equivalent images" analysis is that if you don't change the picture by switching to a non-equivalent image, then sensor size doesn't matter. The advantage of a larger sensor is that you have a wider range of options.
Epidic wrote: "Yes, it can work, but it is cumbersome. It is kind of a one-trick pony."
I generally find that it takes a while to get familiar with a new pony and learn what kinds of tricks it's good at. I've been living with "outside the box" analysis techniques for a couple of years now, and I've come to appreciate that they're pretty handy. No offense intended, but when a person has just barely come to accept that a new analysis method is even correct, it might be a bit premature to conclude that it's not good for much.

The key insight behind "outside the box" and "equivalent images" analysis is that frequently it's better to concentrate on the light that forms the image, as opposed to the device that forms the image.

If you image the same field of view, through the same aperture, for the same amount of time, at the same illumination level, then you've captured the same light. Same light, same image -- sensor size doesn't matter (barring secondary issues like capture efficiency).

Likewise, if you fix the field of view and all the viewing parameters, then DOF depends only on the angular diameter of the entrance pupil -- what I often call the "cone angle" because that's how geometry textbooks talk about it. Same cone angle, same DOF; smaller angle, more DOF -- and also more blur from diffraction / spatial filtering. Again, sensor size doesn't matter (ignoring secondary issues like sin(theta) being only approximately equal to tan(theta)).
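That claim is easy to put into numbers. Here is a quick Python sketch; the pupil sizes and distances are my own illustrative assumptions, as is the simple model that DOF at fixed FOV scales inversely with the cone angle:

```python
import math

def cone_angle(pupil_diameter_mm, distance_mm):
    """Full angular diameter (radians) of the entrance pupil as seen from the subject."""
    return 2.0 * math.atan(pupil_diameter_mm / (2.0 * distance_mm))

# Two cameras with different sensors, both framing the same FOV from the
# same spot through the same physical pupil: the cone angle is identical,
# so (to first order) DOF and diffraction blur are identical too.
big_sensor   = cone_angle(pupil_diameter_mm=18.0, distance_mm=500.0)
small_sensor = cone_angle(pupil_diameter_mm=18.0, distance_mm=500.0)
assert math.isclose(big_sensor, small_sensor)  # sensor size never entered

# Halve the pupil: the cone angle (very nearly) halves, so under this
# model the DOF roughly doubles, at the cost of more diffraction blur.
stopped_down = cone_angle(pupil_diameter_mm=9.0, distance_mm=500.0)
print(big_sensor / stopped_down)  # close to 2
```

Note that nothing in the calculation refers to the sensor at all; only the pupil seen from the subject matters.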

Here's an example of a question that's hard to answer with standard analysis based on f-number.

Suppose you take a picture of a spider at 6" distance and f/11. You move away and take a second picture of the same spider at 12" distance, using the same lens, still at f/11, then crop/enlarge the second picture so that it shows the same FOV. What happens to the DOF? Does it get larger, get smaller, or stay the same?

Using standard analysis, you'll just about go nuts trying to answer the question, and very likely get the wrong answer. At least that's what happens to me.

But using outside-the-box analysis, it's easy. The aperture diameter stays the same, the distance doubles, so the cone angle drops by half and the DOF doubles to match. In fact it's beyond easy, it's trivial. And it produces, IMO, a whole lot more insight than trying to follow the myriad of formulas that appear in classical analysis.

One way to think about all these models is that they just represent different parameterizations of the system. Some parameterizations are better than others for certain jobs. The classical parameterization in terms of f-number and sensor speed is great when you're given a sensor and a lens, and asked to calculate shutter speed. But for a lot of other tasks, it's better to think in terms of FOV, entrance pupil location, and cone angle. Different tools for different jobs.

--Rik
