Testing a Sony A7sii for macro

Images of undisturbed subjects in their natural environment. All subject types.

Moderators: rjlittlefield, ChrisR, Chris S., Pau

gardenersassistant
Posts: 190
Joined: Sun May 31, 2009 5:21 am
Location: North Somerset, England

Testing a Sony A7sii for macro

Post by gardenersassistant »

I took delivery of a lightly used Sony A7sii a couple of days ago. Here are a few images from a test session yesterday. (1300-pixel-high versions of the 93 images I kept from the test session are in this album at Flickr: https://www.flickr.com/photos/gardeners ... 207272666/ )

All were hand-held with a Laowa 100mm 2X and a pair of Kenko 2X teleconverters, with a Yongnuo 24EX twin flash.

The nominal aperture was a fixed f/45 (taking account of the 2X teleconverters). At nominal f/45 the effective apertures of the setup range from around f/56 at 1:1 to f/135 at 8:1, although in this test session I did not use that much magnification. (Although the rig gives me infinity focus, 1:1 is about the lowest magnification I can use with flash, because the working distance increases rapidly below 1:1 and the subject soon gets too far away for the flash to be practical.)
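
In case anyone wants to check those effective aperture figures, here is a minimal sketch of the arithmetic. It assumes the textbook approximation f_eff = f_set * (1 + m), ignoring pupil magnification, applied at the bare lens, with the stacked 2X teleconverters multiplying both the set f-number and the lens magnification by 4:

Code:
TC_FACTOR = 4            # two stacked 2X teleconverters
F_SET = 45 / TC_FACTOR   # f-number set on the lens itself (about f/11)

def effective_aperture(total_magnification):
    m_lens = total_magnification / TC_FACTOR   # magnification at the bare lens
    return F_SET * (1 + m_lens) * TC_FACTOR

for m in (1, 2, 4, 8):
    print(f"{m}:1 -> f/{effective_aperture(m):.0f}")
# prints f/56 at 1:1, f/68 at 2:1, f/90 at 4:1, f/135 at 8:1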

Post processing from raw was done with a preset in DXO PhotoLab, image-specific processing in Lightroom, and output processing with two runs of Topaz DeNoise AI using different methods.

#1
[Image: 1892 11 2021_05_16 DSC00164_PLab4 LR 1300h DNAI DNAIc by gardenersassistant, on Flickr]

#2
[Image: 1892 16 2021_05_16 DSC00230_PLab4 LR 1300h DNAI DNAIc by gardenersassistant, on Flickr]

#3
[Image: 1892 33 2021_05_16 DSC00354_PLab4 LR 1300h DNAI DNAIc by gardenersassistant, on Flickr]

#4
[Image: 1892 35 2021_05_16 DSC00372_PLab4 LR 1300h DNAI DNAIc by gardenersassistant, on Flickr]

#5
[Image: 1892 93 2021_05_16 DSC00618_PLab4 LR 1300h DNAI DNAIc by gardenersassistant, on Flickr]
Nick

Flickr
Blog
Journey since 2007

Rework and reposts of my images posted in this forum are always welcome, especially if they come with an explanation of what you did and how you did it.

pawelfoto
Posts: 90
Joined: Tue Mar 09, 2021 2:51 pm
Location: Poland

Re: Testing a Sony A7sii for macro

Post by pawelfoto »

I am delighted with your photos: great colors, sharpness, and a surprisingly large depth of field. I really like the little "secondary heroes". I believe it wasn't easy to catch focus in the field; in my experience, models do not want to stand still. How long does such a photo session take for you? I wonder how many unsuccessful photos go in the bin for one hit. What kind of diffuser do you use on the speedlite? Is all the light from the flash, or is the background partially natural (what shutter speed and ISO)?
== best, Pawel :smt038

gardenersassistant
Posts: 190
Joined: Sun May 31, 2009 5:21 am
Location: North Somerset, England

Re: Testing a Sony A7sii for macro

Post by gardenersassistant »

pawelfoto wrote:
Tue May 18, 2021 3:34 am
I am delighted with your photos: great colors, sharpness, and a surprisingly large depth of field. I really like the little "secondary heroes". I believe it wasn't easy to catch focus in the field; in my experience, models do not want to stand still. How long does such a photo session take for you? I wonder how many unsuccessful photos go in the bin for one hit. What kind of diffuser do you use on the speedlite? Is all the light from the flash, or is the background partially natural (what shutter speed and ISO)?
== best, Pawel :smt038
Thank you. Those are interesting questions. I like that. :)

My approach for this sort of imaging is based on the use of very small apertures. The Laowa 100mm 2X macro lens that I am using goes from f/2.8 to f/22. I get beyond the minimum aperture of f/22 by using a pair of 2X teleconverters. With them in place I get apertures of f/11 to f/90. At the moment I am using f/45 all the time. The effective aperture is smaller than f/45, increasingly so as the magnification increases, as set out in the top post.

These very small apertures give a relatively large depth of field. However, the image quality is severely degraded by the effects of diffraction. Fine detail is erased completely, and larger scale detail is greatly softened. Once fine detail has been lost it cannot be recovered, but with suitable post processing it is possible to make better use of the larger scale but very soft details that remain. The finalised images need to be kept small because there is insufficient detail for large outputs. How large you could go will vary from image to image and will depend on personal taste, but I simply keep my outputs at 1300 pixels high. They are processed for best viewing as a whole, from a normal viewing distance, at that size, i.e. not upsized or downsized, and with no provision for zooming in/pixel peeping. (They are incidentally also prepared for best viewing in subdued light on a calibrated monitor using the sRGB colour space.)

The following illustration shows two real world examples. I have chosen these because they suffer from particularly severe image degradation and this serves to illustrate the issue clearly.

I shoot raw, but there are small JPEGs embedded in the raw files; these show what an out-of-camera JPEG would look like. On the right below we see one of these embedded JPEGs, resized to 1300 pixels high. We are looking at the central portion of that 1300-pixel-high JPEG at 100%. On the left below we see the same area of a 1300-pixel-high processed image. (These were captured with an A7ii rather than the A7sii I have just purchased.) As mentioned in the top post, I used DXO PhotoLab, Lightroom and Topaz DeNoise AI to process these images, but my processing workflow changes quite often and the products I use change from time to time.

[Image: 1867 1 A7ii+2x2+100atF300ish processed vs OOC by gardenersassistant, on Flickr]

This is the diffusion I am currently using (it changes from time to time too), shown here on an A7ii.

[Image: 1887 1 by gardenersassistant, on Flickr]

Because of the very small apertures I am using I need to throw a lot of light on the scene. In fact, what with the light loss from the diffusion and the rapid falloff in light intensity as working distance increases, I can't put enough light on the subjects to use base ISO. I rarely get as low as ISO 800, and it is most often in the range of 2000 to 4000, sometimes higher at low magnification. I am essentially working in a low light situation. This is where the Sony A7sii came into the equation.

Because of the loss of fine detail I don't need many pixels on the sensor. Also, there is a school of thought which says that post processing will work better, especially at low light levels, with larger pixels. This is disputed, and was the cause of an argument, which I couldn't follow tbh, starting with this post in a thread I posted at dpreview.com asking whether there would be any advantage for what I'm doing in using a camera with fewer but larger pixels. Although I couldn't follow the details, my feeling was that the people arguing for the advantage of larger pixels in my particular context had the best of the argument. So I decided to try the A7sii, which has only 12 megapixels and is renowned for being a good low light camera, at least for video. I don't think it is used much for stills.

I haven't used the A7sii much yet, and I may be in a false honeymoon period where I'm seeing what I want to see in order to justify the cost, but I do have the impression that it is producing results that are at least as good as with the A7ii. And I suspect that it is picking up more from dark backgrounds. I don't like the completely black backgrounds that you sometimes get with flash. But as illustrated in these examples, even when background areas are rather dark they often have a little colour to them, and when they don't, they still aren't completely black. I'm liking the look of that. I don't know whether it is flash or natural light that it is picking up; that probably varies from scene to scene.

It is also possible that the subjects are coming out better. I have the impression (but possibly it is imagination or wishful thinking) that they seem to have a bit better clarity, or something. I can't put my finger on it and it isn't practical to do real world like for like comparisons in the field between the A7ii and the A7sii, so I'm never going to be 100% certain about it.

One thing I am fairly certain about after my first two sessions with the camera is that I'm getting stronger focus peaking signals than with the A7ii, and that is very helpful. I think it is probably increasing my success rate as far as focusing is concerned.

You asked about the length of sessions and success rates. That particular session was just under two hours. I captured 529 shots and kept 93 of them. On the face of it that is a "success rate" of around 17%. However, I think you need to be careful about interpreting "success rates".

For example, I often capture a lot of shots of a subject to increase the probability of getting one that turns out ok. Suppose I take 10 shots of a subject and there turn out to be 6 that I would be prepared to use, but I only use one, because the pose is the same in all of them. Is my success rate 10% (using just one out of 10 attempts) or 60% (I could have used 6 of the 10)? And what would your success rate be for the same 10 images? You are probably looking for different things in an image than I am. Perhaps for you none of them would be usable, or perhaps all of them, or anything in between. It also depends on how difficult the shots are. Shots get more difficult the smaller the subject is, the more awkward it is to get at, the more easily it is disturbed, the more it is moving around, the more what it is on is moving around (e.g. on a leaf in a breeze), the more particular you are about exactly where the plane of focus falls, and the more frequently you change the framing/magnification between shots. So if you are shooting large subjects which stay stationary for as long as you need and are difficult to disturb (like crane flies, for example, in my experience), you could reasonably expect a high success rate. Conversely, with a smaller subject that is moving around a lot, and with frequent switching between environmental, full-body and head shots, you could reasonably expect quite a low success rate.

As it happens, I like photographing little animals as they move around. For example, here are three sequences from the test session. How should the fact I capture sequences like these affect the interpretation of that "17% success rate"? I don't know. :)

[Image: 1892 Illustration 1 - Three sequences by gardenersassistant, on Flickr]

As to shutter speed and ISO, I generally use flash sync speed for flash work. At least, I thought I did. I thought it was 1/200 sec for the A7ii and A7sii, but I just looked it up to check and it turns out to be 1/250 sec, so that's what I will use in future. Thanks for the nudge. :) [EDIT: It turns out that it does need to be 1/200 sec, despite what the specification says. Presumably there is a synchronisation issue between the camera and the non-Sony flash that I'm using.]

ISO varied from 640 to 5000 for the images I kept from the test session. I use a manual flash and I keep it running at 1/4 power so I can keep shooting every one to two seconds for extended periods without having to wait for the flash to recharge. I use ISO to control the lightness of the image.
Last edited by gardenersassistant on Tue Jun 01, 2021 4:50 am, edited 1 time in total.
Nick

Flickr
Blog
Journey since 2007

Rework and reposts of my images posted in this forum are always welcome, especially if they come with an explanation of what you did and how you did it.

MarkSturtevant
Posts: 1946
Joined: Sat Nov 21, 2015 6:52 pm
Location: Michigan, U.S.A.
Contact:

Re: Testing a Sony A7sii for macro

Post by MarkSturtevant »

Phenomenally good images.
The fewer but larger pixel argument is one that I know well enough, but meanwhile camera sensor technology has improved so fast that what seemed impossible with smaller pixels a few years ago seems to be more in reach now. I don't have a dog in that fight in any case since I have to make do with what I have with older cameras.
I must say that your pictures hold up very well when I pixel peep. I'm looking for the images to fail and pixelate when enlarged, but it takes a lot of zooming to get there!

One thing I'm exploring lately for post processing is to merge together pictures taken under different settings through some simple layer masks. So some pictures are exposed for the subject and foreground, and then other pictures, taken at the same position, have their settings adjusted to bring up more information from the background. Then these are merged with layer masks. It's also an old trick, of course, to use the flash to expose the subject whilst the shutter speed is dragged out, real slow, to see what ambient light might bring you in the same frame.
Mark Sturtevant
Dept. of Still Waters

gardenersassistant
Posts: 190
Joined: Sun May 31, 2009 5:21 am
Location: North Somerset, England

Re: Testing a Sony A7sii for macro

Post by gardenersassistant »

MarkSturtevant wrote:
Tue May 18, 2021 3:34 pm
Phenomenally good images.
Thanks.
MarkSturtevant wrote:
Tue May 18, 2021 3:34 pm
The fewer but larger pixel argument is one that I know well enough, but meanwhile camera sensor technology has improved so fast that what seemed impossible with smaller pixels a few years ago seems to be more in reach now. I don't have a dog in that fight in any case since I have to make do with what I have with older cameras.
I must say that your pictures hold up very well when I pixel peep. I'm looking for the images to fail and pixelate when enlarged, but it takes a lot of zooming to get there!
Hmmmm. OK. But please note, from my previous response, "I simply keep my outputs at 1300 pixels high. They are processed for best viewing as a whole, from a normal viewing distance, at that size, i.e. not upsized or downsized, and with no provision for zooming in/pixel peeping." If you are going in closer than 1300 pixels high what you are seeing is simply an upsized image, with any pixellation (or lack thereof) arising from the resizing algorithm and the extent of the upsizing.
MarkSturtevant wrote:
Tue May 18, 2021 3:34 pm
One thing I'm exploring lately for post processing is to merge together pictures taken under different settings through some simple layer masks. So some pictures are exposed for the subject and foreground, and then other pictures, taken at the same position, have their settings adjusted to bring up more information from the background. Then these are merged with layer masks.
That is an interesting approach. It wouldn't work for what I do, but I can see the attraction of it for scenes and capture techniques where it would work. The processing sounds like it would take longer than I like to spend on a single image. (I have a somewhat "industrial", fairly high throughput approach, so as much as possible is automated/routinised, and for the aspects that do need my attention, I don't use any techniques that can't be applied quickly.)
MarkSturtevant wrote:
Tue May 18, 2021 3:34 pm
It's also an old trick, of course, to use the flash to expose the subject whilst the shutter speed is dragged out, real slow, to see what ambient light might bring you in the same frame.
Yes. There is a risk of ghosting of the subject though. I rarely do it, and only when the subject has been around long enough for me to get some normal shutter speed shots. That way if there is ghosting I can just use the normal shots. Going straight for shutter dragging risks getting no usable shots at all because of ghosting.

One of the interesting things with the A7sii is that it seems that, without shutter dragging, it is picking up at least a little from dark backgrounds, enough to stop them going completely black. Time will tell whether that will generally be the case, or whether there was something special about the scenes in that session, such as no backgrounds being very far from the subject.
Nick

Flickr
Blog
Journey since 2007

Rework and reposts of my images posted in this forum are always welcome, especially if they come with an explanation of what you did and how you did it.

rjlittlefield
Site Admin
Posts: 23562
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Re: Testing a Sony A7sii for macro

Post by rjlittlefield »

Nick,

Those are very pleasant images, and that's a great illustration of the images pre- and post-processing! It's a beautiful example of what I've tried to explain using MTF curves elsewhere, like HERE.
gardenersassistant wrote:
Tue May 18, 2021 6:38 am
there is a school of thought which says that post processing will work better, especially at low light levels, with larger pixels. This is disputed, and was the cause of an argument, which I couldn't follow tbh, starting with this post in a thread I posted at dpreview.com asking whether there would be any advantage for what I'm doing in using a camera with fewer but larger pixels. Although I couldn't follow the details, my feeling was that the people arguing for the advantage of larger pixels in my particular context had the best of the argument. So I decided to try the A7sii, which has only 12 megapixels and is renowned for being a good low light camera, at least for video. I don't think it is used much for stills.
I get some amusement from reading debates about pixel size, so I took a pass through the one starting at the link you gave. It seems like typical fare -- lots of words without much insight.

The key insights are these:
  • Image noise is dominated by the random arrival of photons, called "shot noise". For N photons, the noise is proportional to sqrt(N), so larger photon counts give relatively less noise in each count. However...
  • Barring differences in capture efficiency, the total number of photons captured for each area of the subject depends only on the optics, the light, and the exposure time, and not on the pixel size. Smaller pixels gather fewer photons per pixel, but that is exactly compensated by the larger number of pixels covering the same area on the subject. So, smaller pixels give more noise per pixel, but not per subject area (a simulation sketch follows this list).
  • When somebody says that large pixels have much less noise than small pixels, it's probably because they're looking at the noise per pixel and not the noise per subject area that they really should be looking at.
  • Smaller pixels likely do have slightly lower capture efficiency, and in addition there are other small sources of noise that do not depend on photon counts. These factors do cause smaller pixels to produce images that have slightly more noise per subject area, but it's not a big effect.
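
A minimal simulation of the second point, assuming idealised sensors whose only noise source is Poisson shot noise (the numbers are illustrative, not from any particular camera):

Code:
import numpy as np

rng = np.random.default_rng(0)
trials = 200_000
photons = 10_000    # expected photons arriving from one patch of the subject

large = rng.poisson(photons, trials)            # one large pixel per patch
small = rng.poisson(photons / 4, (trials, 4))   # four small pixels per patch
binned = small.sum(axis=1)                      # the four small pixels aggregated

def rel_noise(x):
    return x.std() / x.mean()   # noise as a fraction of the signal

print(f"one large pixel:        {rel_noise(large):.4f}")        # ~0.0100 = 1/sqrt(10000)
print(f"one small pixel:        {rel_noise(small[:, 0]):.4f}")  # ~0.0200, 2X noisier
print(f"four small, aggregated: {rel_noise(binned):.4f}")       # ~0.0100, same as the large pixel
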
In your particular case, the optical image that you're sampling really only contains about 3 megapixels of information. So, starting from an original 12 megapixels you're not introducing any very bad effects in the downsampling, and you're still gaining whatever (small) benefits there are from having larger pixels in the first place. If your image was substantially sharper to start with, and you wanted more pixels in the final image to capture that, then the tradeoff would go the other way. Horses for courses...
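
That "about 3 megapixels" can be sanity-checked from the diffraction cutoff. A rough sketch, assuming green light (550 nm), a full-frame sensor, the standard cutoff frequency of 1 / (wavelength * effective f-number), and Nyquist sampling at two pixels per cycle; contrast is already near zero at the cutoff, so these are generous upper bounds:

Code:
WAVELENGTH_MM = 0.00055            # 550 nm expressed in mm
SENSOR_W, SENSOR_H = 36.0, 24.0    # full-frame sensor dimensions, mm

def diffraction_limited_mp(n_eff):
    cutoff = 1.0 / (WAVELENGTH_MM * n_eff)   # cutoff frequency, line pairs per mm
    px_per_mm = 2.0 * cutoff                 # Nyquist: two pixels per line pair
    return (SENSOR_W * px_per_mm) * (SENSOR_H * px_per_mm) / 1e6

print(f"f/56:  {diffraction_limited_mp(56):.1f} MP")    # ~3.6 MP
print(f"f/135: {diffraction_limited_mp(135):.1f} MP")   # ~0.6 MP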

--Rik

gardenersassistant
Posts: 190
Joined: Sun May 31, 2009 5:21 am
Location: North Somerset, England

Re: Testing a Sony A7sii for macro

Post by gardenersassistant »

rjlittlefield wrote:
Tue May 18, 2021 8:34 pm
Nick,

Those are very pleasant images, and that's a great illustration of the images pre- and post-processing! It's a beautiful example of what I've tried to explain using MTF curves elsewhere, like HERE.

Thanks Rik. That is a very illuminating post. I've never really had a firm grasp of what MTF actually is, so when I saw it was central to your post I went off to try and get a better grasp of it. In the event I only read one article, this one from Edmund Optics which came out top in a Google search. Having digested that I felt I got good mileage from your post.
rjlittlefield wrote:
Tue May 18, 2021 8:34 pm
gardenersassistant wrote:
Tue May 18, 2021 6:38 am
there is a school of thought which says that post processing will work better, especially at low light levels, with larger pixels. This is disputed, and was the cause of an argument, which I couldn't follow tbh, starting with this post in a thread I posted at dpreview.com asking whether there would be any advantage for what I'm doing in using a camera with fewer but larger pixels. Although I couldn't follow the details, my feeling was that the people arguing for the advantage of larger pixels in my particular context had the best of the argument. So I decided to try the A7sii, which has only 12 megapixels and is renowned for being a good low light camera, at least for video. I don't think it is used much for stills.
I get some amusement from reading debates about pixel size, so I took a pass through the one starting at the link you gave. It seems like typical fare -- lots of words without much insight.

The key insights are these:
  • Image noise is dominated by the random arrival of photons, called "shot noise". For N photons, the noise is proportional to sqrt(N), so larger photon counts give relatively less noise in each count. However...
  • Barring differences in capture efficiency, the total number of photons captured for each area of the subject depends only on the optics, the light, and the exposure time, and not on the pixel size. Smaller pixels gather fewer photons per pixel, but that is exactly compensated by the larger number of pixels covering the same area on the subject. So, smaller pixels give more noise per pixel, but not per subject area.
  • When somebody says that large pixels have much less noise than small pixels, it's probably because they're looking at the noise per pixel and not the noise per subject area that they really should be looking at.
  • Smaller pixels likely do have slightly lower capture efficiency, and in addition there are other small sources of noise that do not depend on photon counts. These factors do cause smaller pixels to produce images that have slightly more noise per subject area, but it's not a big effect.
In your particular case, the optical image that you're sampling really only contains about 3 megapixels of information. So, starting from an original 12 megapixels you're not introducing any very bad effects in the downsampling, and you're still gaining whatever (small) benefits there are from having larger pixels in the first place. If your image was substantially sharper to start with, and you wanted more pixels in the final image to capture that, then the tradeoff would go the other way. Horses for courses...

--Rik
As you will gather I don't understand the underlying theory of any of this; I do it all from a practical angle, with experiments, trial and error, hunches, guesswork, heuristics etc. So please forgive me if I've misunderstood this, but as I understand it you are referring to whole-image characteristics. I had been convinced by earlier discussions that at the whole-image level the size of the pixels was unlikely to make much difference (for the same generation of sensor etc).

The (for me) key argument at dpreview had to do with the accumulation of errors during the course of sequences of floating point operations during post processing. The assertion was that the calculations during post processing are done at the pixel level and if you start with a (terminology?) "less noisy" captured pixel value then you may end up with a "better" (nearer to the ideal produced by error-free operations) pixel value at the end of a chain of operations.

If true, this seemed potentially significant for my particular use case, because I operate in what is effectively a low light situation and apply strong post processing involving a number of processing functions. (My experience has led me to using the combination of a number of relatively modest adjustments, drawn from separate applications in the pipeline, trying to pick the best (in the overall context) application for each type of adjustment, rather than using any functions at high strength and/or using a single processing application.)

That is what led me to try the A7sii. I can't prove it, because sufficiently like for like real world comparison examples seem impractical to me, but given the results so far (compared in a rough and ready, intuitive sort of a way to what I've been getting with an A7ii) I have a feeling there may be something in the calculation error hypothesis. In any case, I'm comfortable with continuing to use the A7sii in place of the A7ii for this type of imaging.
Nick

Flickr
Blog
Journey since 2007

Rework and reposts of my images posted in this forum are always welcome, especially if they come with an explanation of what you did and how you did it.

Dalantech
Posts: 694
Joined: Sun Aug 03, 2008 6:57 am

Re: Testing a Sony A7sii for macro

Post by Dalantech »

rjlittlefield wrote:
Tue May 18, 2021 8:34 pm
  • Image noise is dominated by the random arrival of photons, called "shot noise". For N photons, the noise is proportional to sqrt(N), so larger photon counts give relatively less noise in each count. However...
  • Barring differences in capture efficiency, the total number of photons captured for each area of the subject depends only on the optics, the light, and the exposure time, and not on the pixel size. Smaller pixels gather fewer photons per pixel, but that is exactly compensated by the larger number of pixels covering the same area on the subject. So, smaller pixels give more noise per pixel, but not per subject area.

    --Rik
There's another issue that I'd like your .02 on, and one that I think is overlooked. On the hardware level it seems to me that larger pixels should be able to gather more pixels, or, so to speak, gather them "faster", than a smaller pixel, simply due to the increase in surface area of a larger pixel. Also, since more pixels can be gathered during a given exposure time, the signal created by them doesn't have to be amplified as much, therefore reducing image noise.

I've seen several discussions about digital sensors where the overall light gathering ability of a sensor was attributed to just its surface area, with full frame (or larger) sensors supposedly being better at gathering light. Logically it doesn't make sense to me to view the overall surface area of a digital sensor when discussing a sensor's ability to capture light, since a camera's sensor does not behave like a solar cell. It makes more sense to me to only take into account the per-pixel light gathering ability of a sensor's pixels. Of course I could be completely out to lunch, and it wouldn't be the first time. But my background is in electronics (network engineering pays the rent), and based on what I know about digital sensors, thinking of them as a single light gathering device doesn't make sense to me, but viewing them as millions of light sensitive cells (pixels) does. More pixels equals more resolution (barring diffraction), but I don't see how it equals more light gathering. What am I missing?

rjlittlefield
Site Admin
Posts: 23562
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Re: Testing a Sony A7sii for macro

Post by rjlittlefield »

gardenersassistant wrote:
Wed May 19, 2021 3:16 am
In the event I only read one article, this one from Edmund Optics which came out top in a Google search.
Google served you well. That's an excellent article. (Most of Edmund's are.)
So please forgive me if I've misunderstood this, but as I understand it you are referring to whole-image characteristics. I had been convinced by earlier discussions that at the whole-image level the size of the pixels was unlikely to make much difference (for the same generation of sensor etc).
I have no idea what you mean by "whole-image characteristics". Noise is always measured as uncertainty or variability across some small area of the image. My point was that it's important to talk about the same small area if you want to make meaningful comparisons. If you compare the noise in one pixel of a 12MP sensor, with the noise in one pixel of a 48MP sensor with the same frame size, then you'll find that the smaller pixels will have about 2X more noise because each pixel counted 4X fewer photons. However, when aggregated to pixels of the same size for presentation, say your 1946x1300 size, the total photon counts and noise levels will (ideally) be the same from both sensors.
The (for me) key argument at dpreview had to do with the accumulation of errors during the course of sequences of floating point operations during post processing. The assertion was that the calculations during post processing are done at the pixel level and if you start with a (terminology?) "less noisy" captured pixel value then you may end up with a "better" (nearer to the ideal produced by error-free operations) pixel value at the end of a chain of operations.
Let me put that argument in different words. There are some sequences of operations that play nicely with noise, such as aggregation by summing raw data. There are other sequences of operations that do not play nicely with noise; I won't distract us now with examples of those. The analysis that says pixel size does not matter in the end implicitly assumes that only play-nice operations will be done. But in the universe of post-processing tools there are likely to be at least some operations you'd like to use that do not play nicely with noise. That being the case, it's arguably more robust to start with numbers that have as little noise as possible, instead of casually believing that the larger noise with smaller pixels will disappear with aggregation.

By the way, this issue has very little to do with floating point operations. It has a lot more to do with non-linearity like when operating with gamma=2 pixel values without taking the gamma into account. For your amusement sometime, play with the gamma resizing bug discussed at http://www.ericbrasseur.org/gamma.html?i=1, in which all image content may mysteriously disappear when you simply reduce an image to half size. I just now checked, and sure enough Photoshop still screws this up. (GIMP gets it right.) So, if your favorite tools happen to not all be gamma-aware, then I would not be surprised if noise did not reduce as nicely with aggregation as the simple model assumes.
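
For a feel of the mechanism without opening an image editor, here is a minimal numeric sketch, assuming a simple power-law gamma of 2.2 rather than the exact sRGB curve. Downsizing a one-pixel black/white checkerboard to half size amounts to averaging pairs of values:

Code:
GAMMA = 2.2

def to_linear(v):     # encoded value 0..255 -> linear light 0..1
    return (v / 255.0) ** GAMMA

def to_encoded(v):    # linear light 0..1 -> encoded value 0..255
    return 255.0 * v ** (1.0 / GAMMA)

black, white = 0, 255

naive = (black + white) / 2   # gamma-unaware: average the code values
aware = to_encoded((to_linear(black) + to_linear(white)) / 2)

print(f"gamma-unaware: {naive:.0f}")   # 128 -- displays much darker than the original
print(f"gamma-aware:   {aware:.0f}")   # 186 -- matches the average light level

The checkerboard emits half the light of a white field, but the gamma-unaware average lands at code 128, which on a gamma 2.2 display represents only about 22% of full brightness.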

--Rik

rjlittlefield
Site Admin
Posts: 23562
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Re: Testing a Sony A7sii for macro

Post by rjlittlefield »

Dalantech wrote:
Wed May 19, 2021 1:50 pm
There's another issue that I'd like your .02 on, and one that I think is overlooked. On the hardware level it seems to me that larger pixels should be able to gather more pixels, or, so to speak, gather them "faster", than a smaller pixel, simply due to the increase in surface area of a larger pixel. Also, since more pixels can be gathered during a given exposure time, the signal created by them doesn't have to be amplified as much, therefore reducing image noise.
I assume those things that were gathered should have been "photons", not "pixels". With that correction, then yes, sort of.

The total noise is shot noise plus everything else. With modern sensors, shot noise handily dominates everything else except in very dark regions. So the main issue does not have to do with how much a signal gets amplified; it's just that the original signal may not reflect the true average light level, because photons arrive randomly. If on average there are 10,000 photons captured per pixel, then the actual numbers will follow a roughly Gaussian distribution with mean 10,000 and standard deviation 100. In other words, that might be described as a signal-to-noise ratio of 100:1, or 1% noise.

I've seen several discussions about digital sensors where the overall light gathering ability of a sensor was attributed to just its surface area, with full frame (or larger) sensors supposedly being better at gathering light. Logically it doesn't make sense to me to view the overall surface area of a digital sensor when discussing a sensor's ability to capture light, since a camera's sensor does not behave like a solar cell. It makes more sense to me to only take into account the per-pixel light gathering ability of a sensor's pixels. Of course I could be completely out to lunch, and it wouldn't be the first time. But my background is in electronics (network engineering pays the rent), and based on what I know about digital sensors, thinking of them as a single light gathering device doesn't make sense to me, but viewing them as millions of light sensitive cells (pixels) does. More pixels equals more resolution (barring diffraction), but I don't see how it equals more light gathering. What am I missing?
I can't tell whether you're missing anything, so let me just offer an essay and maybe from that you can spot any disconnect.

Large sensors have the capacity to store more electrons, so large sensors can count more photons. Given the same pixel counts, and filled to capacity, the larger sensor will give less pixel noise, in proportion to its linear dimension (that is, the square root of its area). So, if you fill to capacity a full-frame sensor, and a Micro Four-Thirds sensor at half the edge length and the same pixel count, then you'll find that the full frame pixels are half as noisy.

But note that I've emphasized "filled to capacity". The only way that you can fill the larger sensor to capacity is to either use brighter light, or a longer exposure, or a wider entrance cone. There is a way of thinking called "equivalent image analysis" that starts by assuming that you don't do any of those things. That is, with both sensors you agree to use the same brightness, the same exposure time, the same framing, and the same entrance cone. The only thing that you're allowed to change is the sensor size and optics as needed to keep everything else the same. Under these circumstances, the images that you get will be "equivalent" in the sense that if they're displayed at the same final size, then they'll have the same DOF, the same diffraction blur, and the same motion blur. In other words, the two images will look the same. If, in addition, both final images have the same pixel counts, then lo and behold, those pixels also have the same amount of shot noise, despite the differences in physical sizes of the sensors. The reason for this is very simple: both sensors are counting the same numbers of photons. I summarize it as "same light, same image".

So the moral of that part is that if you want to take advantage of the potential higher quality of a full frame sensor, you have to do something different that lets you capture more light.

This adds an interesting wrinkle to Nick's situation. Note that ISO 800 and f/45 on a full frame sensor is equivalent to ISO 200 and f/22 on an MFT sensor. (The ratio of aperture sizes makes the Airy disk scale in proportion to the sensor size, and the ISO setting tracks the aperture size at same light intensity and exposure time.) So, in principle, Nick could replace that 12 megapixel full frame camera with a 12 megapixel MFT camera, remove one of his 2X teleconverters, and get the same images in a smaller and lighter package.
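
The bookkeeping behind that equivalence fits in a few lines. A sketch, assuming nominal crop factors (real sensors vary slightly): the equivalent f-number scales with the crop factor, and the equivalent ISO scales with its square:

Code:
CROP = {"full frame": 1.0, "APS-C": 1.5, "MFT": 2.0, '1/2.3"': 5.6}

def equivalent(f_number_ff, iso_ff, sensor):
    crop = CROP[sensor]
    # Same entrance cone and same total light imply these scalings.
    return f_number_ff / crop, iso_ff / crop ** 2

for sensor in CROP:
    f, iso = equivalent(45, 800, sensor)
    print(f"{sensor:11} f/{f:.0f} at ISO {iso:.0f}")
# full frame  f/45 at ISO 800
# APS-C       f/30 at ISO 356
# MFT         f/22 at ISO 200
# 1/2.3"      f/8 at ISO 26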

Many people find this sort of equivalence to be quite counterintuitive, in part because it involves setting their cameras to unusual values. Going from full frame down to MFT, the changes may not seem so odd. But when I tell people that a full frame camera can grab the same image at f/45 and ISO 800 that an MFT does at f/22 and ISO 200, their first reaction is usually to think that's either crazy or just plain wrong.

So, does anything in there help?

--Rik

Dalantech
Posts: 694
Joined: Sun Aug 03, 2008 6:57 am

Re: Testing a Sony A7sii for macro

Post by Dalantech »

rjlittlefield wrote:
Thu May 20, 2021 7:42 pm
Dalantech wrote:
Wed May 19, 2021 1:50 pm
There's another issue that I'd like your .02 on, and one that I think is overlooked. On the hardware level it seems to me that larger pixels should be able to gather more pixels, or, so to speak, gather them "faster", than a smaller pixel, simply due to the increase in surface area of a larger pixel. Also, since more pixels can be gathered during a given exposure time, the signal created by them doesn't have to be amplified as much, therefore reducing image noise.
I assume those things that were gathered should have been "photons", not "pixels". With that correction, then yes, sort of.
Yup, that's what I meant.
rjlittlefield wrote:
Thu May 20, 2021 7:42 pm

But note that I've emphasized "filled to capacity". The only way that you can fill the larger sensor to capacity is to either use brighter light, or a longer exposure, or a wider entrance cone. There is a way of thinking called "equivalent image analysis" that starts by assuming that you don't do any of those things. That is, with both sensors you agree to use the same brightness, the same exposure time, the same framing, and the same entrance cone. The only thing that you're allowed to change is the sensor size and optics as needed to keep everything else the same.
The part I've bolded is the part that I don't agree with in that analysis. A better "apples to apples" comparison would be to use the exact same lens and then crop the full frame image to the same size (field of view) as the crop factor sensor. A crop factor sensor does not change the focal length of a "full frame" lens; it only changes the field of view, due to the image circle being cropped by the crop factor sensor. There is no functional difference between cropping a full frame image in post and using a crop factor sensor. There's also no difference in the light projected onto the image plane, just a difference in how much of that circle is being captured.
rjlittlefield wrote:
Thu May 20, 2021 7:42 pm

This adds an interesting wrinkle to Nick's situation. Note that ISO 800 and f/45 on a full frame sensor is equivalent to ISO 200 and f/22 on an MFT sensor. (The ratio of aperture sizes makes the Airy disk scale in proportion to the sensor size, and the ISO setting tracks the aperture size at same light intensity and exposure time.) So, in principle, Nick could replace that 12 megapixel full frame camera with a 12 megapixel MFT camera, remove one of his 2X teleconverters, and get the same images in a smaller and lighter package.

--Rik
I get how the Airy disk will have a different effect on different sensors due to pixel density and size, but I don't understand how that corresponds to a difference in ISO and aperture, unless you're saying that in order to keep diffraction the same you have to change the ISO and f-stop for the crop factor sensor. The fact that the sensor is smaller than full frame doesn't inherently change those values; if that were true then ISO and f-stop would change if you cropped a full frame image in post...

ISO is a tough one to gauge between sensors. If it is a set standard that can be measured and calibrated across different sensors, then a sensor with smaller pixels will create per-pixel signals that have to be amplified more than the per-pixel signals created by larger pixels. But what's stopping a manufacturer from making their crop factor sensors less noisy by setting the amplification to a lower than standard value?

As with all crop factor to full frame comparisons, there has been a serious effort to make crop factor sensors seem as if they can defy physics just because they're smaller than full frame. But, like I said earlier, there is no functional difference between shooting full frame and cropping images in post down to the same field of view as a crop factor sensor. Shoot with a full frame camera like Canon's 5Ds (51MP) and crop it down to a 1.6x field of view (roughly 20MP) and the pixel density will be exactly the same. Mask off that full frame sensor with tape to a 1.6x crop and the images will look just like the cropped-in-post full frame shots. The only difference is in how the tests between full frame and crop factor sensors are rigged to prove the tester's point of view...

gardenersassistant
Posts: 190
Joined: Sun May 31, 2009 5:21 am
Location: North Somerset, England

Re: Testing a Sony A7sii for macro

Post by gardenersassistant »

rjlittlefield wrote:
Thu May 20, 2021 6:52 pm
gardenersassistant wrote:
Wed May 19, 2021 3:16 am
So please forgive me if I've misunderstood this, but as I understand it you are referring to whole-image characteristics. I had been convinced by earlier discussions that at the whole-image level the size of the pixels was unlikely to make much difference (for the same generation of sensor etc).
I have no idea what you mean by "whole-image characteristics". Noise is always measured as uncertainty or variability across some small area of the image. My point was that it's important to talk about the same small area if you want to make meaningful comparisons. If you compare the noise in one pixel of a 12MP sensor, with the noise in one pixel of a 48MP sensor with the same frame size, then you'll find that the smaller pixels will have about 2X more noise because each pixel counted 4X fewer photons. However, when aggregated to pixels of the same size for presentation, say your 1946x1300 size, the total photon counts and noise levels will (ideally) be the same from both sensors.
Yes, that was what I was getting at.
rjlittlefield wrote:
Thu May 20, 2021 6:52 pm
The (for me) key argument at dpreview had to do with the accumulation of errors during the course of sequences of floating point operations during post processing. The assertion was that the calculations during post processing are done at the pixel level and if you start with a (terminology?) "less noisy" captured pixel value then you may end up with a "better" (nearer to the ideal produced by error-free operations) pixel value at the end of a chain of operations.
Let me put that argument in different words. There are some sequences of operations that play nicely with noise, such as aggregation by summing raw data. There are other sequences of operations that do not play nicely with noise; I won't distract us now with examples of those. The analysis that says pixel size does not matter in the end implicitly assumes that only play-nice operations will be done. But in the universe of post-processing tools there are likely to be at least some operations you'd like to use that do not play nicely with noise. That being the case, it's arguably more robust to start with numbers that have as little noise as possible, instead of casually believing that the larger noise with smaller pixels will disappear with aggregation.

By the way, this issue has very little to do with floating point operations. It has a lot more to do with non-linearity like when operating with gamma=2 pixel values without taking the gamma into account.
That is very interesting and informative. Thank you.

btw I have looked at how each stage in the processing affects the look of the images for those two examples. I omitted image-specific adjustments, which could confuse the issue. I just used my (current) standard presets (exactly as I do for a first run through all the images from a session, to get them into a state where I can see enough of their potential to be able to produce an initial longlist).

Here for the first image is Original (raw file embedded JPEG), After PhotoLab, After Lightroom.

[Image: 1894 Illustration 1 - Example 1 - Original, PhotoLab, Lightroom by gardenersassistant, on Flickr]

And then, at the 1300-pixel-high final size, After Lightroom (as above, but as it is actually used, downsized to a 1300-pixel-high TIFF), After DeNoise AI, After AI Clear.

[Image: 1894 Illustration 2 - Example 1 - Lightroom, DeNoise AI, AI Clear by gardenersassistant, on Flickr]

Direct comparison, Original to After AI Clear.

[Image: 1894 Illustration 5 - Example 1 - original, final by gardenersassistant, on Flickr]

And the same for the second image.

[Image: 1894 Illustration 3 - Example 2 - Original, PhotoLab, Lightroom by gardenersassistant, on Flickr]

[Image: 1894 Illustration 4 - Example 2 - Lightroom, DeNoise AI, AI Clear by gardenersassistant, on Flickr]

[Image: 1894 Illustration 6 - Example 2 - original, final by gardenersassistant, on Flickr]
Nick

Flickr
Blog
Journey since 2007

Rework and reposts of my images posted in this forum are always welcome, especially if they come with an explanation of what you did and how you did it.

gardenersassistant
Posts: 190
Joined: Sun May 31, 2009 5:21 am
Location: North Somerset, England

Re: Testing a Sony A7sii for macro

Post by gardenersassistant »

rjlittlefield wrote:
Thu May 20, 2021 7:42 pm
So the moral of that part is that if you want to take advantage of the potential higher quality of a full frame sensor, you have to do something different that lets you capture more light.

This adds an interesting wrinkle to Nick's situation. Note that ISO 800 and f/45 on a full frame sensor is equivalent to ISO 200 and f/22 on an MFT sensor. (The ratio of aperture sizes makes the Airy disk scale in proportion to the sensor size, and the ISO setting tracks the aperture size at same light intensity and exposure time.) So, in principle, Nick could replace that 12 megapixel full frame camera with a 12 megapixel MFT camera, remove one of his 2X teleconverters, and get the same images in a smaller and lighter package.
Yes indeed, in principle. I have tried the same approach with (amongst other combinations) MFT and one 2X TC, and APS-C with a 2X and a 1.4X TC.

I was using the same macro lens and the setups were of a similar size and weight. (My MFT Panasonic G9 is larger and heavier than either the A7ii or A7sii, and my APS-C Canon 70D is heavier than any of them.)

I use the full frame because it handles better. I am using manual focus: my 70D doesn't have focus peaking. My G9 does have focus peaking, but in this setup it rarely produces any signal. The A7ii provided a modest focus peaking signal, some of the time. The A7sii provides a good focus peaking signal a lot of the time. A usable focus peaking signal makes a huge difference, for me.

Also, I'm not a great fan of tilting LCDs (I almost always use the rear screen rather than a viewfinder). Most of my cameras have fully articulated LCDs. However, for this application a tilting screen is actually better. At higher magnifications, getting the subject framed can be problematic, time-consuming and frustrating. Even if the camera is pointing directly at the subject you may not see even a hint of it until you get the distance to the subject right, and with a big lens and a subject that may be as small as 1mm long it can be difficult to tell if you are pointing directly at it, especially if the local environment is complicated distance-wise. The tilting screen, being aligned with the lens barrel, makes it easier to get the direction right compared to an offset articulated screen. (The big disadvantage of tilting screens, working in portrait aspect ratio, doesn't apply in this use case as I am always working in landscape aspect ratio.)
rjlittlefield wrote:
Thu May 20, 2021 7:42 pm
Many people find this sort of equivalence to be quite counterintuitive, in part because it involves setting their cameras to unusual values. Going from full frame down to MFT, the changes may not seem so odd. But when I tell people that a full frame camera can grab the same image at f/45 and ISO 800 that an MFT does at f/22 and ISO 200, their first reaction is usually to think that's either crazy or just plain wrong.
And also, f/8 on 1/2.3". One of the things I do occasionally (happy to do it here if anyone is interested) is to present images from four different sensor sizes (1/2.3", MFT, APS-C and FF) which were captured using equivalent f-numbers, with EXIF data removed, and invite viewers to identify which images came from which sensor sizes. They are real world images captured in the field, so they are obviously not like for like, and I present them at my usual 1300 pixels high which can mask differences, but people often express surprise that they don't have any idea which is which.
Nick

Flickr
Blog
Journey since 2007

Rework and reposts of my images posted in this forum are always welcome, especially if they come with an explanation of what you did and how you did it.

rjlittlefield
Site Admin
Posts: 23562
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Re: Testing a Sony A7sii for macro

Post by rjlittlefield »

gardenersassistant wrote:
Fri May 21, 2021 1:20 am
Direct comparison, Original to After AI Clear.
These are very impressive transformations!


Interestingly, just today I was reading in one of my IEEE publications about some applications of "artificial intelligence". Quoting one snippet of the article:
DOI: 10.1109/MITP.2020.2985492 wrote:
GANs can also be used to create superresolution imagery from low resolution inputs. Though the creation of high-resolution imagery from lower resolution input is not new, the technology can still struggle to remove noise and compression artifacts. GANs can optimize this process by creating a higher quality image than one that ever existed -- "fantasizing" details onto the low resolution image.
I doubt there's anything as powerful as a Generative Adversarial Network in your chain of tools.

But I'm curious, have you ever shot the same subject using your highly post-processed single-shot workflow and using a high resolution stacked workflow, and studied them to see how the details compare?

One of the things I do occasionally (happy to do it here if anyone is interested) is to present images from four different sensor sizes (1/2.3", MFT, APS-C and FF) which were captured using equivalent f-numbers, with EXIF data removed, and invite viewers to identify which images came from which sensor sizes. They are real world images captured in the field, so they are obviously not like for like, and I present them at my usual 1300 pixels high which can mask differences, but people often express surprise that they don't have any idea which is which.
I'm not surprised that people are surprised. I am also not surprised that they can't tell the difference, since that is exactly what properly applied theory predicts.

--Rik

rjlittlefield
Site Admin
Posts: 23562
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Re: Testing a Sony A7sii for macro

Post by rjlittlefield »

Dalantech wrote:
Fri May 21, 2021 1:09 am
rjlittlefield wrote:
Thu May 20, 2021 7:42 pm

But note that I've emphasized "filled to capacity". The only way that you can fill the larger sensor to capacity is to either use brighter light, or a longer exposure, or a wider entrance cone. There is a way of thinking called "equivalent image analysis" that starts by assuming that you don't do any of those things. That is, with both sensors you agree to use the same brightness, the same exposure time, the same framing, and the same entrance cone. The only thing that you're allowed to change is the sensor size and optics as needed to keep everything else the same.
The part I've bolded is the part that I don't agree with in that analysis. A better "apples to apples" comparison would be to use the exact same lens and then crop the full frame image to the same size (field of view) as the crop factor sensor.
In what sense is this a better comparison?

It seems to me that what you've described is like taking a four-burner cookstove, masking off three burners, and then observing that the result is no better than a one-burner stove.

If you want to get the benefits of four burners, you have to actually use four burners. That also requires having more pans and maybe scaling up the size of the recipe.

Arguments about sensor size are like that too. If you want to get the best result from any sensor, you have to use that sensor appropriately.

On the other hand, maybe you're arguing that there's no advantage to a one-burner stove because you can always use just one burner on a four-burner stove. If that's the point, then I totally agree -- cropping a full frame sensor gives you something that acts just like a crop sensor.
I get how the Airy disk will have a different effect on different sensors due to pixel density and size, but I don't understand how that corresponds to a difference in ISO and aperture.
This is all gone over, in rather gory detail, in the discussion from 13 years ago that I linked earlier.

However, for the sake of discussion here, I'll try providing a quick synopsis.

The essence of equivalent images analysis is to look at what happens when you set out to capture the same image with different size sensors. "Same image" means same lighting, same field of view, same perspective, same DOF, same exposure time. All those "sames", taken together, imply that the aperture diameter also stays the same, and that images formed on sensor will vary only in size and corresponding light intensity. If one sensor has 2X the dimensions of another, then the image on the larger sensor must be 2X larger on each axis, which makes it 1/4 as bright because you're spreading the same light over a 4X larger area. Because the image is 2X larger it is also 2X farther from the lens, and since the aperture diameter doesn't change, the effective f-number = distance/diameter is also 2X larger. Finally, because the image is 1/4 as bright, but the exposure time is fixed, the ISO is 4X larger.

In the words of another fellow who finally caught on, all of this is "just similar triangles".
ISO is a tough one to gauge between sensors. If it is a set standard
ISO certainly is a set standard, or more precisely it is a set of 5 standards depending on exactly which method a manufacturer wants to use. See https://en.wikipedia.org/wiki/Film_spee ... 9_standard for discussion. All 5 methods are based on the same concept: ISO speed is determined by the intensity of light on sensor that is needed to provide a well exposed image in a fixed exposure time, or equivalently, the exposure time needed to produce a well exposed image at a fixed intensity of light on sensor. The old "sunny 16" rule for film still applies to digital: at an ISO value of N, use 1/N seconds at f/16 on a sunny day to get a well exposed image.
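
As a toy illustration of the end-to-end idea, here is the sunny 16 rule in code form, generalised to other apertures at the same scene luminance (the required exposure time scales with the square of the f-number):

Code:
def sunny_day_shutter(iso, f_number=16.0):
    # At ISO N, a sunny scene is well exposed at f/16 for 1/N seconds;
    # changing the aperture scales the time by (f_number / 16)^2.
    return (f_number / 16.0) ** 2 / iso   # seconds

print(sunny_day_shutter(100))         # 0.01   -> 1/100 s at f/16, ISO 100
print(sunny_day_shutter(100, 8.0))    # 0.0025 -> 1/400 s at f/8, ISO 100
print(sunny_day_shutter(800, 45.0))   # ~0.0099 -> about 1/100 s at f/45, ISO 800
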
a sensor with smaller pixels will create per-pixel signals that have to be amplified more than the per-pixel signals created by larger pixels
A counterpoint: smaller pixels accumulate less charge, but they also have smaller capacitance. My guess is that small and large pixels have quite similar voltages. Not that it matters for discussion here, because...
what's stopping a manufacturer from making their crop factor sensors less noisy by setting the amplification to a lower than standard value?
The ISO standard doesn't say anything about amplification values per se. It's an end-to-end standard that relates light intensity and exposure time, for a well exposed image. Whatever leeway there is in the standards, comes in defining what "well exposed" means.

Manufacturers are completely free to play around with amplification, within limits. The key limit occurs when the brightest pixels capture so much light that they can't hold any more charge and clamp at "white". Manufacturers use this regime to set their "base ISO". If a sunny day at f/16 reaches the limit in 1/N seconds, then they have a base ISO value someplace around N.
As with all crop factor to full frame comparisons, there has been a serious effort to make crop factor sensors seem as if they can defy physics just because they're smaller than full frame.
I'm curious where you've seen such efforts. Around PMN, my impression is that physics-defying claims don't last very long before being squashed. But maybe I'm wrong about that. Can you point to some claims that defy physics?

--Rik
