Can deconvolution counter diffraction softening?

TheLostVertex
Posts: 318
Joined: Thu Sep 22, 2011 9:55 am
Location: Florida

Post by TheLostVertex »

Pardon in advance for the image-heavy post.

Several weeks ago I was looking at deconvolution for heavily diffraction-limited images, so I have some test samples sitting about here.

Deconvolution is very effective for astrophotography and microscopy. In astrophotography there is an array of point sources (the stars) from which the PSF can be inferred, which makes life easy. In addition, the subjects have essentially no depth.

For microscopy we can divide the options into 2D and 3D deconvolution. 3D deconvolution generally uses a pre-measured 3D PSF and applies it to a stack of images taken at different depths, to produce a more detailed and higher-contrast image. 2D deconvolution applies either a pre-measured PSF, a synthetic PSF, or some form of blind deconvolution to a single image.
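To make the 2D case concrete, here is a rough sketch of what single-image deconvolution with a synthetic PSF looks like in Python, using scikit-image's Richardson-Lucy routine. The filename, the Gaussian shape, and its width are just illustrative guesses, not anything measured:

```python
import numpy as np
from skimage import io, img_as_float
from skimage.restoration import richardson_lucy

# Load the blurred image as float in [0, 1]. The filename is hypothetical.
image = img_as_float(io.imread("stacked_no_sharpening.tif", as_gray=True))

# Synthetic Gaussian PSF -- a guess, since we have no measured PSF.
# (A true diffraction PSF is an Airy pattern, not a Gaussian.)
sigma = 2.0                                   # guessed blur width in pixels
r = np.arange(-8, 9)
xx, yy = np.meshgrid(r, r)
psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
psf /= psf.sum()                              # a PSF must integrate to 1

# More iterations recover more detail but amplify noise and ringing.
restored = richardson_lucy(image, psf, num_iter=30)
```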

It would be interesting if in the future we could apply aspects of 3D deconvolution to image stacking, since the two have something in common. I suspect there would be quite a few challenges, though: processing speed, vastly different subjects (especially in transparency and depth), not having pre-measured PSFs for our optics, etc.

But for right now, we are interested in 2D deconvolution, where, unlike in astrophotography, the image has few or no good areas from which to measure the PSF.

There is also the question of when to sharpen: before you stack, or after? I suspect that sharpening first will have an impact on how some details are selected when stacking with DMap. Sharpening before stacking with PMax should obviously affect the amount of image noise as well. While I haven't gotten around to testing this directly, we will see a bonus effect of sharpening before stacking in just a second!

All images here were stacked in Zerene Stacker using PMax. The sensor was 22.5 mm wide (and quite dusty at the time :oops: ), full-sized images are 6000 pixels wide, and the shots were made at roughly f/59 effective aperture, so we are looking at extreme diffraction. As a side note, I don't think deconvolution will be noticeably better on images that are barely diffraction-limited, but I have not investigated that yet.

The first set tests sharpening and deconvolution before stacking. Deconvolution was done using a program called piccure+, since it can batch-process images.

No Sharpening:
Image

Sharpening before stacking with Lightroom:
Image

Sharpening before stacking with Piccure+:
Image

Fullsized images of this set are here: http://www.thelostvertex.com/uploads/pm ... ngFull.zip

If you took the time to flash between these images in new tabs, or by downloading them, you will have noticed something interesting: they don't line up! I think the no-sharpening image looks worst and the piccure+ image looks best. If you download the full-sized images and flash between them, you will notice that the alignment seems to change at the same rate as image sharpness/contrast, with NoSharpening > Lightroom > Piccure+. Hopefully Rik will weigh in with his thoughts on this, since I didn't expect to see that.

The next series of images looks at various deconvolution software applied after stacking, starting from the previous "no sharpening" image. I will also include a couple of images sharpened with an unsharp mask (USM) matched to the corresponding deconvolution result.*

No sharpening:
Image

Deconvolution with Focus Magic:
Image

Unsharp mask matched to Focus Magic:
Image

Deconvolution with Piccure+:
Image

Unsharp mask matched to Piccure+:
Image

Deconvolution with RawTherapee (from the no-sharpening JPEG to JPEG):
Image

Fullsized images of this set are here: http://www.thelostvertex.com/uploads/pm ... ngFull.zip

I've done other tests and this seems pretty representative of what I have seen. RawTherapee does not do very well and offers limited control (the maximum radius is 2.5 px, and this is the best I could do to show some change in detail while limiting artifacts on such a heavily diffraction-limited image). Focus Magic does OK here; on some images I have tested it has done a little better than this, and on some a bit worse. The unsharp mask paired with Focus Magic is certainly a lot noisier. Piccure+ seemed the best of this set to me. In every test I have run so far between piccure+ and Focus Magic, piccure+ has done as well or slightly better. Again, the unsharp mask matched against it is a fair bit noisier and doesn't look quite as good.

I have run several other images through the previously mentioned software (and a couple of others), and in blind testing my brother, he always chose the deconvolved image as looking better to him. So at least for very heavy diffraction, I see some potential.

I'll stop here and not rant about how much I dislike all of the deconvolution software that is out there, and how poorly designed it all is... I'll stop ;)

*Comparisons were made by opening the deconvolved image as a layer, with the unsharpened image on top. The unsharpened image's blend mode was set to Difference, and an unsharp mask live filter was applied to it, then adjusted until it matched the deconvolved image as closely as possible. The hope here is to get as close to an apples-to-apples comparison as I can.
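For anyone who wants to automate that matching, the same idea can be sketched in Python: search over USM radius and amount for the settings that minimize the residual against the deconvolved reference. The filenames and the search grid below are made up for illustration:

```python
import itertools
import numpy as np
from skimage import io, img_as_float
from skimage.filters import unsharp_mask

# Hypothetical filenames standing in for the real images.
reference = img_as_float(io.imread("deconvolved.tif", as_gray=True))
original = img_as_float(io.imread("no_sharpening.tif", as_gray=True))

best = None
for radius, amount in itertools.product(np.linspace(0.5, 5.0, 10),
                                        np.linspace(0.5, 3.0, 11)):
    candidate = unsharp_mask(original, radius=radius, amount=amount)
    # Same criterion as eyeballing the Difference blend mode,
    # just computed numerically: minimize the mean squared residual.
    err = np.mean((candidate - reference) ** 2)
    if best is None or err < best[0]:
        best = (err, radius, amount)

print("closest USM match: radius=%.2f, amount=%.2f" % (best[1], best[2]))
```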

rjlittlefield
Site Admin
Posts: 23626
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

enricosavazzi wrote:Rik,

at least in principle, with geometric optics you can ray-trace back from an image point through a refractor/reflector system to the corresponding source point. With diffracted light, you cannot.
Enrico, I'm still not following the argument. Yes, with geometric optics I can trace an individual ray back to a corresponding source point. But what the sensor records is the sum of all such rays, integrated over the entire aperture. In the special case of an aberration free lens, at perfect focus, all rays trace to the same source point. But as soon as the lens has aberrations or is not perfectly focused, then each point on the sensor is illuminated by rays from some blurred area on the source. Equivalently, each point on the source contributes to some blurred area on the sensor. That's what the point spread function is all about. The shape of the PSF for aberrations and defocus is different in its details than the PSF for diffraction, but I'm not seeing any great conceptual difference. Try again, please -- what am I missing?
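To put the same point in code: defocus, aberrations, and diffraction all enter the recorded image through the same linear model, a convolution of the source with some PSF; only the PSF's shape differs. A toy sketch, where both PSFs are simplified stand-ins (a uniform disc for geometric defocus, a Gaussian in place of a true Airy pattern):

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
source = rng.random((256, 256))       # stand-in for the scene

r = np.arange(-10, 11)
xx, yy = np.meshgrid(r, r)

# Geometric defocus: a uniform blur disc.
defocus_psf = (xx**2 + yy**2 <= 6**2).astype(float)
defocus_psf /= defocus_psf.sum()

# Diffraction stand-in: a Gaussian (a real Airy pattern has rings).
diffraction_psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
diffraction_psf /= diffraction_psf.sum()

# Either way the sensor records source convolved with PSF -- the same
# kind of blur, and the same kind of model that deconvolution inverts.
img_defocus = fftconvolve(source, defocus_psf, mode="same")
img_diffraction = fftconvolve(source, diffraction_psf, mode="same")
```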

--Rik

rjlittlefield
Site Admin
Posts: 23626
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

TheLostVertex wrote:If you took the time to flash between these images in new tabs, or by downloading them, you will have noticed something interesting: they don't line up! I think the no-sharpening image looks worst and the piccure+ image looks best. If you download the full-sized images and flash between them, you will notice that the alignment seems to change at the same rate as image sharpness/contrast, with NoSharpening > Lightroom > Piccure+. Hopefully Rik will weigh in with his thoughts on this, since I didn't expect to see that.
My experience is that every time you do anything to the source images in a focus stack, you will slightly alter the alignment. Likewise for running the same source images through an assortment of programs. Zerene Stacker, Helicon Focus, Photoshop, and CombineZ will all produce slightly different alignments. I'm used to seeing this, since the issue has turned up every time I've done a comparison over the last 13 years or so.

Unfortunately I cannot provide a satisfying explanation for why this happens.

I can explain that in Zerene Stacker the alignment procedure works by minimizing the sum squared deviation in luminance values, searching whatever parameter space is allowed by the Shift, Rotate, and Scale options. This is done in a multiresolution framework that lets it run tolerably fast, but the end result is essentially the same as if you just spent a while stretching and scrubbing each image until it matches the previous image as well as possible, where "matches" means that criterion about sum squared deviation in luminance values.
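As a toy sketch of that criterion (emphatically not Zerene's actual code, and without the multiresolution machinery or the Scale search), the idea is roughly this:

```python
import numpy as np
from scipy import ndimage, optimize

def ssd(params, reference, moving):
    """Sum squared deviation in luminance after a trial shift + rotation.
    (The real search also covers scale; omitted to keep the toy short.)"""
    dx, dy, angle = params
    candidate = ndimage.rotate(moving, angle, reshape=False, order=1)
    candidate = ndimage.shift(candidate, (dy, dx), order=1)
    return np.sum((reference - candidate) ** 2)

def align(reference, moving):
    # "Stretch and scrub" the moving frame until it best matches
    # the previous frame, by this criterion.
    result = optimize.minimize(ssd, x0=[0.0, 0.0, 0.0],
                               args=(reference, moving),
                               method="Nelder-Mead")
    return result.x   # (dx, dy, angle) of the best match found
```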

Unfortunately, that explanation about "how it works" fails to give any good insight about why altering the sources might alter the alignment in some systematic way, as it seems to be doing here.

One possibility is that the overall composition is asymmetric, say bolder and brighter on one side than the other, and that this interacts with the filtering process in such a way as to produce a slight shift, according to Zerene Stacker's criterion. That would be another variant of the same effect that often causes problems with high mag images, where the "blooming" of objects as they go in and out of focus manages to mislead the alignment process.

But I expect there are other possibilities. Identifying and teasing them apart has always promised to be a long and difficult process, with no clear value, so I've never taken time to do it for even a single stack.

--Rik

Lou Jost
Posts: 5991
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Post by Lou Jost »

"in Zerene Stacker the alignment procedure works by minimizing the sum squared deviation in luminance values"

I've often wondered what the alignment criterion was. Is this the most common choice? Might a lower exponent give better results, since often the highlights and reflections/flares move around from frame to frame, and the squaring makes these highlights dominate the results?

rjlittlefield
Site Admin
Posts: 23626
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

An assortment of thoughts, roughly in order of appearance in the thread...
Beatsy wrote:The problem is that for deconvolution to work effectively it needs to be supplied with an accurate point spread function. Blind deconvolution (no PSF) doesn't cut it because a good general purpose sharpening algorithm will do pretty much as well, in less time.

Having said that, blind deconvolution is a very good sharpening algorithm in its own right. Try RawTherapee (free).
The issues may be more clear if we tighten up the terminology a bit.

In standard usage, the phrase "blind deconvolution" means that you only have the blurred result, and you have to infer both the original image and the PSF.

Strictly speaking, that's not what RL Deconvolution in RawTherapee does.

Instead, you, the user, have to specify the PSF -- that's what the radius slider does -- and then the RL deconvolution algorithm figures out the most likely source image to have produced the blurred image it's working with, under the assumption that the noise follows a Poisson distribution, such as shot noise. See https://en.wikipedia.org/wiki/Richardson-Lucy_deconvolution for a bit more detail.

So, RawTherapee by itself does not do blind deconvolution. However, it's commonly used in a workflow that does do blind deconvolution, with the human user providing both the search method and the quality metric, by adjusting the sliders until the image looks good. It's another example of "sharpening to taste".
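For the record, the RL iteration itself is quite compact. A minimal single-channel sketch (ignoring boundary handling and the refinements a real implementation would add):

```python
import numpy as np
from scipy.signal import fftconvolve

def rl_deconvolve(observed, psf, num_iter=30):
    # Find the source image most likely to have produced `observed`
    # under Poisson noise, given a known, normalized PSF.
    # Assumes `observed` is a float image.
    estimate = np.full_like(observed, observed.mean())
    psf_flipped = psf[::-1, ::-1]        # adjoint of convolution
    for _ in range(num_iter):
        predicted = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(predicted, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_flipped, mode="same")
    return estimate
```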

In comparison, from a quick scan of the piccure documentation it looks to me like piccure does blind deconvolution all by itself. I say this because I don't see anything in the user interface about setting "radius" or anything else I recognize as being a specification of PSF, especially for such complicated and spatially variable PSFs as coma, which it specifically claims to handle. The user interface looks to be a matter of giving piccure some guidance about your criteria, and then it figures out the details from there.

Pau wrote:Comparing the images posted side by side by Rik, I still find the RL deconvolution version better than the USM: clearer detail and less chromatic aberration. Detail fake or not, it seems an excellent algorithm.
I agree. My comparison against USM was not intended to argue that conventional sharpening is just as good as RL deconvolution --- just that it's a lot closer than might seem to be implied by the comparison that Beatsy posted.

billjanes1 wrote:Roger Clark, an astrophysicist with extensive digital image processing experience, has published a three-part treatise on image restoration and sharpening, and he discusses Richardson-Lucy deconvolution and the unsharp mask (deconvolution methods are often described as image restoration rather than sharpening, since the former can actually restore image detail). His methods are more complex than many would likely apply, but the articles are well worth reading.
Clark's articles are a good read. I do have some quibbles with Clark's discussion. For example, in one sequence he starts with an image that he says is very sharp, then blurs it, then restores it, and finally writes that "the restored image has slightly more detail than the original. This process could go even further: the original image could be up sampled and then deconvolved to reveal even more detail." That last sentence strikes me as an extraordinary claim that requires extraordinary evidence; none is provided. However, the provided illustrations do make a convincing case for the benefits of deconvolution over unsharp masking at the same pixel size.

Recording another reference...

I tend to be interested at the level of the math. For my purposes there's a series of blog posts by Jack/AlmaPhoto that I have found very helpful. He is most interested in "capture sharpening", by which he means restoring an image that has been degraded by some combination of lens characteristics, diffraction, and sensor characteristics. At http://www.strollswithmydog.com/deconvo ... -aperture/, middle of the page, he makes what seems to me a very important point, that
as the f-number is increased the shape of the modeled Total MTF curve starts to look less and less like a Gaussian because the increasingly strong diffraction component becomes more and more dominant in shaping the overall curve. Diffraction does not look like a Gaussian MTF. This makes a Gaussian PSF alone less and less suitable for deconvolution.
...
It looks like deconvolution by a Gaussian PSF alone is no longer suitable at f/16 (and earlier).
That last comment is especially worrisome because a) he's talking about effective aperture, b) his example problem uses a full-frame sensor, and c) programs like RawTherapee seem to provide only a Gaussian PSF. The equivalent on APS-C would be around f/11, and that's versus, say, the f/20 that we get when we use a 10X NA 0.25 microscope objective. I have no solid idea how much using a Gaussian PSF degrades the result of deconvolution when the original image was mainly degraded by diffraction, but the comments and illustrations in this author's discussion suggest it could be significant.
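That claim is easy to check numerically, because the diffraction-limited MTF of a circular aperture in incoherent light has a closed form. A small sketch (the wavelength and f-number are arbitrary illustrative choices):

```python
import numpy as np

wavelength = 550e-9                   # green light, metres
N = 16                                # effective f-number
f_cutoff = 1 / (wavelength * N)       # diffraction cutoff, cycles/metre

s = np.linspace(0, 1, 200)            # frequency as a fraction of the cutoff

# Exact diffraction-limited MTF for a circular aperture, incoherent light:
mtf_diffraction = (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s**2))

# Gaussian MTF forced to agree with it at half the cutoff frequency:
k = 100                               # index where s is about 0.5
sigma = s[k] / np.sqrt(-2 * np.log(mtf_diffraction[k]))
mtf_gaussian = np.exp(-s**2 / (2 * sigma**2))

# The curves diverge badly toward the cutoff: the diffraction MTF falls
# almost linearly and hits zero exactly at f_cutoff, while the Gaussian
# tail never reaches zero.
```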

Other relevant posts by Jack/AlmaPhoto can be found by using his blog's search box for "deconvolution".

TheLostVertex wrote:Piccure+ seemed the best of this set to me. In every test I have run so far between piccure+ and Focus Magic, piccure+ has done as well or slightly better.
Thanks for letting me know about piccure. That one is new to me.

Lou Jost wrote:"in Zerene Stacker the alignment procedure works by minimizing the sum squared deviation in luminance values"

I've often wondered what the alignment criterion was. Is this the most common choice? Might a lower exponent give better results, since often the highlights and reflections/flares move around from frame to frame, and the squaring makes these highlights dominate the results?
I think of the core problem as being a question of what image aspects to look at, rather than exactly how to look at them. In concept we want to ignore misleading features like specular reflections that move, while giving higher weight to well behaved and nearly focused features that don't.

Altering the exponent on a look-at-all-pixels metric may have some effect on alignment, but the overall effect could go either way. Reducing the exponent gives less weight to specular reflections, but it also gives less weight to other focused detail, while giving more weight to small variations in unfocused areas. I vaguely recall playing around with this aspect back during initial development, and concluding that it wasn't worth the cost of providing a user control which would then have to be documented and supported.
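A quick numeric illustration of the tradeoff, using a made-up residual vector with one specular-highlight-sized outlier:

```python
import numpy as np

# Hypothetical per-pixel luminance differences: many small misalignment
# residuals plus one large outlier from a specular highlight that moved.
residuals = np.array([0.02] * 99 + [0.8])

for p in (1.0, 1.5, 2.0):
    contributions = np.abs(residuals) ** p
    share = contributions[-1] / contributions.sum()
    print(f"exponent {p}: outlier contributes {share:.0%} of the metric")
```

With the exponent at 2 the single outlier dominates the metric; dropping to 1 tames it, but also flattens the weight given to legitimately focused detail.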

To my mind a better use of development time would be to allow scale adjustment per step to be explicitly specified, and/or to be determined automatically from frames near the middle of the stack instead of the ends (which are often less well behaved, especially for microscopy), and/or to allow restricting the area of comparison so as to include just the subject of interest and not for example the surrounding background areas in a hand-held stack, or the crisp circular boundary of the field stop in some microscopy stacks. All those things are on the "to be considered" list for future enhancement, but they're not near the top and realistically they're not going to get done any time soon.

--Rik

rjlittlefield
Site Admin
Posts: 23626
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

One more detail probably worth recording...

When I first tried RawTherapee, I could not figure out how to interactively see the effect of the sliders. The trick (thanks, Beatsy) was to realize that I had to view at 100%. When viewing at less than 100%, the on-screen image does not change as I adjust the sliders.

I also had some trouble figuring out that it's necessary to explicitly enable sharpening. By default sharpening is disabled, although the sliders that control it are still active. The control for enabling sharpening is that little dark gray "power" button at the tip of the pointer in this screengrab:

Image

When the feature is enabled, the power button turns light gray and is easily seen. When it's off (the default, as shown here), it's a lot less visible.

The button is also a handy way to turn sharpening on/off for comparison purposes.

--Rik

Lou Jost
Posts: 5991
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Post by Lou Jost »

Rik, that "Strolls with my dog" blog is interesting. The author answers another question that has come up before on this forum, regarding the performance of lenses designed for different sensor sizes. He tested wavefront error for good prime lenses from different systems, from Phase One down to tiny 1/2.3" Panasonic sensors. The smallest relative wavefront errors were found in the lenses made for the smallest sensors. Lenses for FF or medium format were worse than lenses for MFT or smaller formats; the latter have to be better, since they are imaging onto much smaller pixels. It is interesting to see that the FF Nikon 85mm lens was actually worse than the medium format lenses (all analysis done only at the centers):

http://www.strollswithmydog.com/testing ... -model-ii/

This suggests that when we use a reversed lens for macro photography, all else being equal, we shouldn't be using medium format or FF lenses, but MFT or smaller-format lenses. This should be true even on medium format cameras. I've already seen that my reversed Olympus 60mm MFT macro lens is better than my reversed 60mm Nikkor D macro lens, when both are tested on an APS-C sensor.

In the same article the author shows that sensor size does correlate well with image resolution per picture height, though there is quite a bit of variation. The APS-C system he tested (Sony A6300, 24 MP) had only a very slight (10%) resolution advantage per picture height relative to the MFT system (PEN-F, 20 MP). On the other hand, the Nikon J5 with its 1" sensor has about the same resolution per picture height as the Oly and the Sony.

TheLostVertex
Posts: 318
Joined: Thu Sep 22, 2011 9:55 am
Location: Florida

Post by TheLostVertex »

rjlittlefield wrote: In comparison, from a quick scan of the piccure documentation it looks to me like piccure does blind deconvolution all by itself. I say this because I don't see anything in the user interface about setting "radius" or anything else I recognize as being a specification of PSF ...
From my testing, it appears to be a blind deconvolution method, but it's hard to say for sure. The "rendering" setting behaves a little like setting a radius, but it may well just be narrowing the search criteria, or providing a guess at the size of the PSF it is looking for (see the grid-size setting in Photiosity2, next paragraph).

Some other blind methods I have looked at: SmartDeblur, whose open-source version you can find at http://yuzhikov.com/projects.html (the author has since abandoned that project in favor of a commercial version, which I haven't tried: http://smartdeblur.net). For Mac, I have tried Photiosity2 (http://www.photiosity.com/geprol/Photiosity2.php), which is a blind deconvolution application; it also lets you save the PSF it generates, which is nice. I didn't think the results from either of these applications worked well in a photographic/artistic sense. They certainly can recover detail, but not in a way that looks "good" to me as a photographer.

MATLAB can also do blind deconvolution to recover an image and generate a PSF; there is a brief tutorial here: https://www.mathworks.com/help/images/r ... blind.html There are also a few non-blind methods MATLAB supports.
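For the curious, the classic alternating scheme behind many of these blind tools is simple to state, even though production versions add a lot of regularization and safeguards. A toy sketch of alternating Richardson-Lucy updates (the seed size and iteration counts are arbitrary guesses; assumes `observed` is a float image, and do not expect production quality from this):

```python
import numpy as np
from scipy.signal import fftconvolve

def rl_update(estimate, fixed, observed):
    # One multiplicative RL update of `estimate`, holding the other
    # convolution factor (`fixed`) constant.
    predicted = fftconvolve(estimate, fixed, mode="same")
    ratio = observed / np.maximum(predicted, 1e-12)
    return estimate * fftconvolve(ratio, fixed[::-1, ::-1], mode="same")

def blind_rl(observed, seed_radius=5, n_outer=20):
    """Alternate RL updates on the image and the PSF.
    The PSF lives on a full-size centered canvas to keep shapes simple."""
    image = np.full_like(observed, observed.mean())
    psf = np.zeros_like(observed)
    cy, cx = psf.shape[0] // 2, psf.shape[1] // 2
    r = seed_radius
    psf[cy - r:cy + r + 1, cx - r:cx + r + 1] = 1.0   # flat seed guess
    psf /= psf.sum()
    for _ in range(n_outer):
        image = rl_update(image, psf, observed)
        psf = rl_update(psf, image, observed)
        psf /= psf.sum()            # keep total energy fixed
    return image, psf
```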

So far I haven't had very good luck with any of the ImageJ plugins. Some of the plugins I have tried to get have turned out to be dead links as well :/

I am sure I am forgetting a couple that I have tried. The implementations and results of most of the software I have tried are not really suitable for me. So far the best is piccure+, and even that has some annoying interface quirks.

As a small side note, deconvolution is slow, especially for decent results. Batch-processing 77 images in piccure+ at its highest quality setting took somewhere close to an hour; that is far longer than stacking the images would take. Even a single image took minutes in most of the software I tried, compared to the near-instant results of a USM. I do think the results of decent deconvolution software are better than a USM, but there is a trade-off in the time spent setting it up and processing.
rjlittlefield wrote: I could not figure out how to interactively see the effect of the sliders. The trick (thanks, Beatsy) was to realize that I had to view at 100%. When viewing at less than 100%, the on-screen image does not change as I adjust the sliders.
A side note: if you look at the image you posted, to the right of where it says "Sharpening" you will see "1:1". Anything in RawTherapee with that icon next to it shows its effect on screen only when viewed at 100% (or when exported). I initially thought it was a bug when I was testing the RL deconvolution some time ago, but found documentation explaining the behavior while double-checking in order to write a bug report. Quite an annoying "feature", I thought :lol:
