Yes. I'm surprised at what is possible. Some years ago I concluded (from a single example, a bad move) that f/128 was unusable, and so I didn't go down that path. It was only recently, last summer, that out of desperation, trying to get some half-decent images of some very small (and hyperactive) flies, I tried tiny apertures again, even though I knew it wouldn't work. Well, it turned out I knew wrong. I got this image in particular, which was far better than anything I had achieved before, and off I went down this rabbit hole.
1645 16 2020_06_01-05 1642 13 2020_06_02 DSC02305_PLab3 SP9LR 1300h-DNAIc-DNAI-PS-AISh by gardenersassistant, on Flickr
I think the big difference between that exercise and the earlier single-shot exercise was developments in post processing products coupled with a lot of post processing trial and error.
rjlittlefield wrote: ↑Fri May 21, 2021 6:35 pm
Interestingly, just today I was reading in one of my IEEE publications about some applications of "artificial intelligence". Quoting one snippet of the article:
DOI: 10.1109/MITP.2020.2985492 wrote:
GANs can also be used to create superresolution imagery from low resolution inputs. Though the creation of high-resolution imagery from lower resolution input is not new, the technology can still struggle to remove noise and compression artifacts. GANs can optimize this process by creating a higher quality image than one that ever existed -- "fantasizing" details onto the low resolution image.
I doubt there's anything as powerful as a Generative Adversarial Network in your chain of tools.

I doubt it too. And really, I hope not. As far as I can tell it doesn't seem to invent new details, as for example I think Gigapixel AI does when resizing (and it can sometimes make a real mess of it). However, what I think my workflow does tend to do is overcook (if that's the right term) edge contrasts on plant surfaces. An example of this, to my eye, is the in-focus area around the subject in the first example, which looks too "busy" to me.
Because of this effect I use a more moderate output sharpening approach for botanical subjects, for which I use more normal apertures anyway, and much lower magnification.
However, where the subject is an invertebrate, which needs stronger sharpening to look the way I like it to, there can be a conflict with what happens to the in-focus non-subject areas. When I originally processed that image I noticed the effect, and would have done better to use some masking to tone down the non-subject in-focus areas (which can conveniently be done in DeNoise AI and, the main culprit I think, AI Clear). For whatever reason I didn't, and for this exercise it was not appropriate. I've been waiting for a while for someone to pick up on this effect, but so far no one has. People are too polite to mention it, perhaps.
rjlittlefield wrote: ↑Fri May 21, 2021 6:35 pm
But I'm curious, have you ever shot the same subject using your highly post-processed single-shot workflow and using a high resolution stacked workflow, and studied them to see how the details compare?

No. I use focus stacking as one of my two main methods for botanical subjects, but I've not managed to get it to work to my satisfaction for invertebrates. There are practical issues out in the field, which is where all my photography is done, and there is an aspect of invertebrate stacks for my type of (typically full body) framing that I don't much like: an unnaturally sudden transition between the in-focus and out-of-focus areas of the background.
As to practicality, my subjects tend often to be in motion, or on foliage or a web that is moving in a breeze, or engaged in an activity that involves movement such as grooming, wrapping prey or blowing bubbles. I like shooting sequences of subject movement through the environment and subject activity, and with and without movement and activity I like to zoom in and out on the subject. And my subjects may turn up suddenly and disappear quickly.
All in all, focus stacking for invertebrates isn't a good fit for me and I've never put the effort into getting to grips with it. I don't know that I'm even capable of doing a good controlled environment comparison between tiny aperture single shot and focus stacked sweet spot aperture.
So, a single-shot approach is a better fit for me, hence my excursion into the land of tiny apertures to try to make the best I can (as someone who prefers to have a lot of the subject in focus) of a single-shot approach.
rjlittlefield wrote: ↑Fri May 21, 2021 6:35 pm
I'm not surprised that people are surprised. I am also not surprised that they can't tell the difference, since that is exactly what properly applied theory predicts.
--Rik

Exactly so. With another factor thrown in. All my invertebrate images, and so all of the images I use in these exercises, use small apertures (in this case around effective f/45 full frame equivalent), and dominant diffraction is a great leveller. That said, I have got similarly (almost) indistinguishable results as between MFT and full frame using ordinary apertures, for botanical comparisons which were as nearly like for like as I could make them and which I examined rather carefully. (I'll look up links to the two write-ups for that exercise if anyone is interested.)
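As a back-of-envelope check of the "dominant diffraction is a great leveller" point: at equivalent apertures, the Airy disk covers roughly the same fraction of the frame on both formats, so neither format can out-resolve the other. A minimal Python sketch using the standard Airy-disk formula; the 550 nm wavelength, the sensor widths, and the crop factor of 2 are my assumed round numbers, not figures from the post:

```python
# Diffraction blur at "equivalent" apertures on two formats.
# Assumptions (mine): green light at 550 nm, full-frame sensor
# width 36 mm, MFT sensor width 17.3 mm, MFT crop factor taken as 2,
# so full-frame f/45 corresponds to MFT f/22.5.

WAVELENGTH_MM = 550e-6  # 550 nm expressed in millimetres

def airy_diameter_mm(f_number):
    """Airy disk diameter (to the first minimum) for a given f-number."""
    return 2.44 * WAVELENGTH_MM * f_number

# Blur spot as a fraction of the sensor width on each format:
ff = airy_diameter_mm(45) / 36.0     # full frame at effective f/45
mft = airy_diameter_mm(22.5) / 17.3  # MFT at the equivalent f/22.5

print(f"Full frame blur fraction: {ff:.5f}")
print(f"MFT blur fraction:        {mft:.5f}")
# The two fractions come out within a few percent of each other
# (the small gap is because 36/17.3 is slightly more than 2),
# which is why diffraction-dominated images from the two formats
# are so hard to tell apart.
```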