Deconvolution of diffraction / Capture One Pro 10

A forum to ask questions, post setups, and generally discuss anything having to do with photomacrography and photomicroscopy.

Moderators: rjlittlefield, ChrisR, Chris S., Pau

austrokiwi1
Posts: 350
Joined: Sun Sep 14, 2014 10:53 am

Deconvolution of diffraction / Capture One Pro 10

Post by austrokiwi1 »

{I was stuck for a suitable title for this thread. My in-depth knowledge and algebra skills are not as good as they should be for understanding the technical details, so I hope those more experienced will be patient with me and give me a constructive steer when it is clear I am not correct.}
Over the last year I have used Capture One Pro 9 (for Sony). Capture One is the raw conversion engine produced by Phase One. My initial interest in the software was for tethering my Sony A7RII; to my knowledge, Capture One is the only software that allows tethering of the A7RII. Before Capture One I had either just produced JPEGs or used Sony's own software. I have had no significant experience with Lightroom or other raw processing engines, so I cannot compare Capture One to other raw engines.

In December Phase One brought out version 10 of the software, and I was really intrigued by two of the new features: diffraction control and halo control. I have been using the new version (just the Sony version) for a couple of weeks now. I watched several Phase One tutorials on the upgrade (on YouTube), and in one of those it was noted that the new diffraction correction tool operates by deconvolution. I had never heard of the term before, so I have spent some time trying to understand it. Much of what I read on the net (including this forum) related to microscopy and telescopy. (As a related aside: I was fascinated that, until corrective optics were added to the Hubble, astronomers relied on deconvolution to "restore" the images they obtained from that flawed telescope.)
What I have, rightly or wrongly, distilled is that diffraction is a convolution. That convolution (probably among others) is described mathematically by the point spread function (PSF), which describes how a point source is blurred by the optical system the PSF applies to. In simple (figurative) terms, if the PSF is accurately known, or can be closely estimated, it is possible to reverse the blurring caused by diffraction. Of course, there are catches: the other causes of blurring, the processing power required, the type of sensor, and the information requirements of the deconvolution algorithm. All those factors (and likely others) place limits on how successful the deconvolution of diffraction will be.
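To make the idea concrete, here is a minimal sketch (my own illustration, not Capture One's actual method): blur a test image with a known PSF, then try to undo the blur with Richardson-Lucy deconvolution from scikit-image. The Gaussian PSF is a crude stand-in for a real diffraction pattern.

```python
# A minimal sketch, not Capture One's algorithm: blur with a known PSF,
# then try to undo the blur with Richardson-Lucy deconvolution.
import numpy as np
from scipy.ndimage import convolve
from skimage import data, restoration

image = data.camera() / 255.0            # standard test image, scaled to 0..1

# Small Gaussian kernel as a crude stand-in for a diffraction PSF.
x = np.arange(-7, 8)
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()                         # a PSF should sum to 1

blurred = convolve(image, psf)           # the "convolution" (forward blur)
restored = restoration.richardson_lucy(blurred, psf, num_iter=30)
```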

I understand that to get the best out of the deconvolution algorithm I would need to ensure the following:
• Very even lighting of the subject, minimising extremes between shadows and highlights.
• That the higher the bit depth the better. So, for me, uncompressed 14-bit raw must be used. Am I correct in the assumption that 32 (or even 48) bit images (from digital scan backs) would be better suited to diffraction deconvolution?
• That lens data would be necessary to get the best possible results (so limiting the utility of non-native objectives; in practice I found this to be incorrect!)
I assume that the deconvolution “formula” used in the software is an interpolation (is that the right word?) of a much more complex algorithm.
Regarding even lighting: when I first installed the software, I took some photos (at apertures above and below the DLA) of a US penny. I used a lighting system that highlighted the lustre and the results on applying the Diffraction control were horrible. That's when I decided to try to understand what the diffraction control was doing.
After some research I tried very diffuse, indirect lighting. I used the native FE 90mm f/2.8 macro, based on the assumption that the lens data would make for better results. I started out with 1:1 shots and just kept on getting bad results. Finally, I achieved good, repeatable results (with the diffraction control) when I imaged an American Silver Eagle at 0.6x magnification.
My A7RII sees diffraction starting at around f/6.9, and full DLA is around f/8, so I took comparison shots at f/5.6, f/9, f/11 and f/16. As I expected, applying the deconvolution at f/5.6 saw no change in the image (it was probably pointless to try, and I suspected that if there was any change I might see a deterioration in the image). At f/11 there was a slight improvement in the image. At f/16 I could see no significant difference between the original and deconvolved images. At f/9 there was a significant improvement in the deconvolved image, which almost matched the one taken at f/5.6. The software was using the manufacturer's profile for the lens, which it read from the metadata. There was also the option of using a generic lens setting; I tried it and there was no noticeable difference.
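As a rough sanity check of those numbers (my own aside; the 4.5 micron pixel pitch for the A7RII and 550 nm green light are assumptions), one common rule of thumb puts the onset of visible diffraction where the Airy disk diameter reaches about two pixel pitches:

```python
# Back-of-envelope DLA estimate: the f-number at which the Airy disk
# diameter (2.44 * wavelength * N) spans about two pixel pitches.
# Assumed: 4.5 um pixel pitch (A7RII), 550 nm green light.
pixel_pitch_um = 4.5
wavelength_um = 0.55
dla = 2 * pixel_pitch_um / (2.44 * wavelength_um)
print(f"approximate DLA: f/{dla:.1f}")   # ~f/6.7, close to the quoted f/6.9
```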
Here's a comparison set of photos. The images on the left are the originals, while those on the right are the same images with the diffraction correction applied. The images here are reduced JPEGs: I started with 14-bit raw files (82 MB each) and applied only the diffraction control, with no other processing. The images were then exported as 8-bit TIFFs, which were then converted to JPEGs of a size suitable for posting here. The difference is much more noticeable in the full-sized TIFFs. Note: these are crops.


Image

After playing around to that point, I had gained the impression that the diffraction control was useful, but within quite narrow (I expected this) parameters. I had at this stage assumed that an adapted objective would not see any improvement and, based on my experiences with the 90mm, that higher magnification and even smaller apertures would be out of the question. I was really surprised when I tried out the diffraction control on some images produced by a reversed Schneider Kreuznach APO-Componon 40mm f/2.8 HM enlarger lens, used at f/2.8 and at 5.14x magnification, which equates (am I correct?) to an effective aperture of f/17.19. The diffraction correction tool unexpectedly produced a very usable improvement in the observable diffraction. Note the target is a 10 Euro note, and the letters and figures in the 100% crop are only 0.3mm high. The note did not sit fully flat, so the right-hand half of the image is out of focus. This was a single-shot image, not a stack.
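For reference, the effective-aperture arithmetic above as a minimal sketch. N_eff = N x (1 + m) assumes a pupil magnification of 1; a reversed asymmetric enlarger lens need not satisfy that, so treat the result as approximate.

```python
# Effective aperture for a symmetric lens: N_eff = N * (1 + m).
# A reversed asymmetric lens has a pupil factor that shifts this.
def effective_aperture(f_number: float, magnification: float) -> float:
    return f_number * (1 + magnification)

print(f"{effective_aperture(2.8, 5.14):.2f}")   # 17.19, matching the figure above
```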


Image


I have been able to repeat these results several times; however, it is reasonable to expect that technique and lens issues are responsible for what I have observed. I am starting to be very suspicious about my example of the FE 90mm f/2.8. It is one of the earlier-produced examples, and there are reports of considerable manufacturing variance between examples of this lens. Assuming for now that the lens is OK, it is possible that at 1:1 the lens is performing right at the edge of its optimal magnification range, and this explains my experience. However, this is only one of several possibilities. The SK 40mm is performing well within its optimal magnification range (reversed), and it may well be the better performer of the two lenses I have tried this diffraction control option on.
I would appreciate any constructive comments, and particularly further enlightenment on deconvolution. I certainly don't believe Phase One has produced a means of editing away all diffraction blur, but it is certainly pointing towards future possibilities. I would like to read the experiences of other Capture One users, particularly those who can use this diffraction tool on 16-bit images.
Last edited by austrokiwi1 on Sun Jan 01, 2017 5:00 am, edited 3 times in total.
Still learning,
Cameras: Sony A7RII, Olympus OMD EM10 II
Macro lenses: Printing Nikkor 105mm, Sony FE 90mm F2.8 Macro G, Schneider Kreuznach Makro Iris 50mm F2.8, Schneider Kreuznach APO Componon HM 40mm F2.8, Mamiya 645 120mm F4 Macro (used with Mirex tilt-shift adapter), Olympus 135mm F4.5 bellows lens, Olympus 80mm bellows lens, Olympus 60mm F2.8

Beatsy
Posts: 2105
Joined: Fri Jul 05, 2013 3:10 am
Location: Malvern, UK

Post by Beatsy »

Great post covering a topic that has frequently crossed my mind. I did similar experiments with RawTherapee, free software that offers Richardson-Lucy blind deconvolution (the same method as used for Hubble, except that wasn't blind, as they knew the convolution parameters). I concluded it wasn't increasing resolution per se (removing diffraction), but it did a great job sharpening diffracted images with fewer artefacts compared to other methods (wavelets, Photoshop, Lightroom etc). I may be wrong in that conclusion (that there is no increase in resolution), but I really don't think there's a free lunch on offer here :) I'll leave it to "those who know" to expand on that. I kept RawTherapee in the toolbox and generally use it when I want to sharpen severely cropped, high-mag images.

enricosavazzi
Posts: 1474
Joined: Sat Nov 21, 2009 2:41 pm
Location: Västerås, Sweden

Post by enricosavazzi »

From a theoretical point of view, information lost through diffraction is truly gone, and cannot be restored as true information by any post-processing algorithms. However, with the right assumptions and right parameters, images post-processed by deconvolution are indeed perceived as visually improved.

Image degradation produced by certain optical aberrations can instead be partly restored by proper algorithms and parameters, because at least some of the original information is still present in the image, although in a different form not immediately obvious to our vision.
--ES

austrokiwi1
Posts: 350
Joined: Sun Sep 14, 2014 10:53 am

Post by austrokiwi1 »

Just to ensure no one has misunderstood: when I referred to reversing/correcting diffraction, I was writing figuratively, not literally.

Thanks for your comments, Enrico. Am I correct in saying the information loss due to diffraction is less at f/9 than it is at f/11? Is it also correct that the very design of the Bayer sensor places considerable constraints on how effective deconvolution can be? I am assuming a high-resolution sensor at or just above the DLA will still produce an image with more information than a lower-resolution sensor at the same aperture.
Last edited by austrokiwi1 on Mon Jan 02, 2017 8:46 am, edited 1 time in total.
Still learning,
Cameras: Sony A7RII, Olympus OMD EM10 II
Macro lenses: Printing Nikkor 105mm, Sony FE 90mm F2.8 Macro G, Schneider Kreuznach Makro Iris 50mm F2.8, Schneider Kreuznach APO Componon HM 40mm F2.8, Mamiya 645 120mm F4 Macro (used with Mirex tilt-shift adapter), Olympus 135mm F4.5 bellows lens, Olympus 80mm bellows lens, Olympus 60mm F2.8

enricosavazzi
Posts: 1474
Joined: Sat Nov 21, 2009 2:41 pm
Location: Västerås, Sweden

Post by enricosavazzi »

austrokiwi1 wrote:Am I correct in saying the information loss due to diffraction is less at f/9 than it is at f/11?
All other factors being equal, yes. Diffraction is not the whole story, though. Real lenses tend to display lower amounts of some aberrations when the aperture is stopped down; so, although diffraction has a smaller effect at a larger aperture, spherical and other aberrations are usually higher at a larger aperture in a non-ideal lens. These aberrations may even have a greater effect on perceived image degradation than diffraction.

To put this in semi-quantitative terms, diffraction in an ideal lens takes place around the edge of the aperture (or the edge of one or more of the optical elements, wherever an edge happens to limit the cone of light that enters the lens). The length of the aperture perimeter is proportional to the aperture diameter, while the open area of the aperture is proportional to the square of the diameter. So the aperture area (unaffected by diffraction) grows much faster than the perimeter (affected by diffraction) as the aperture is opened, and the effects of diffraction become less visible.

The Bayer filters effectively remove some of the information from the image formed by the lens in the focal plane. The optical anti-aliasing filter and demosaicking algorithms (effectively a form of post-processing) are designed to provide a visually acceptable image, but there is some unavoidable loss of real information in a Bayer sensor. I suspect that demosaicking (which can be regarded as a form of blurring/averaging/interpolation) combined with deconvolution (a form of sharpening) might not behave well.

Pixel/sensel count and size have an effect, of course. You could regard the surface of a sensel as an area where all the information collected by different points on the sensel is averaged and blurred down to a single output value. The fewer the sensels in a given image area (with sensels larger than the diffraction limit), the less image information remains.

Diffraction works differently from the classical circle of confusion. The former, at a single wavelength, images a point light source as concentric light and dark fringes around a central peak, while the latter (as computed by geometric optics) looks more like a Gaussian bell curve. The general effect on perceived image sharpness in a typical image, however, is roughly the same.
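To illustrate that difference in profile (an editorial sketch; the wavelength and f-number are arbitrary choices), the single-wavelength Airy pattern can be computed from the standard formula and compared with a smooth Gaussian stand-in for the geometric circle of confusion:

```python
# Airy pattern (ringed) vs. a Gaussian-like blur (smooth), radial profiles.
import numpy as np
from scipy.special import j1

wavelength_mm = 550e-6
f_number = 8.0
r = np.linspace(1e-9, 0.02, 500)          # radius on the sensor, in mm
x = np.pi * r / (wavelength_mm * f_number)
airy = (2 * j1(x) / x) ** 2               # fringes: first dark ring at r = 1.22*lambda*N
gauss = np.exp(-(r / 0.004) ** 2)         # bell curve: no rings
```

Both fall off from a central peak, which is why their effect on perceived sharpness is similar; only the Airy profile has the alternating dark and light fringes.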
--ES

billjanes1
Posts: 91
Joined: Fri Dec 30, 2016 1:59 pm
Location: Lake Forest, IL, USA

Post by billjanes1 »

enricosavazzi wrote:From a theoretical point of view, information lost through diffraction is truly gone, and cannot be restored as true information by any post-processing algorithms. However, with the right assumptions and right parameters, images post-processed by deconvolution are indeed perceived as visually improved.

Image degradation produced by certain optical aberrations can instead be partly restored by proper algorithms and parameters, because at least some of the original information is still present in the image, although in a different form not immediately obvious to our vision.
This is my first post on the forum, and I hope the information will be helpful in demonstrating what deconvolution can do with diffraction. Bart van der Wolf has published a resolution target using a sinusoidal Siemens star, where the resolution can be determined by measuring the diameter of the circle at which extinction of the target occurs. See his post for details. Here is a picture of the target taken with the Nikon D800e and the Zeiss 135mm f/2 Apo lens. The raw file was processed with ACR and no sharpening. A circle was drawn over the image to indicate the Nyquist limit of the sensor. The system resolves down to the Nyquist limit, and there is aliasing beyond Nyquist.

Image

Contrast can be improved with deconvolution using Focus Magic, which is a well-regarded and easy-to-use deconvolution program, but with some increase in aliasing.

Image

Here is the target taken at f/22. Resolution falls well short of the Nyquist limit, as indicated by the white circle, which was drawn at the extinction point. Measuring the diameter of this circle and calculating resolution as outlined in Bart's post gives a value of 72 cycles/mm, which is close to the Rayleigh limit of 75 cycles/mm for green light. Contrast is reduced at all frequencies. At least aliasing is eliminated by diffraction.

Image

Deconvolution with Focus Magic improves the contrast considerably, but there is no change in resolution, as indicated by the unchanged extinction diameter.

Image

Regards,

Bill

Charles Krebs
Posts: 5865
Joined: Tue Aug 01, 2006 8:02 pm
Location: Issaquah, WA USA

Post by Charles Krebs »

Bill,

I've used Focus Magic for many years and it is an interesting program. IMO you do need to be careful not to overdo it (I guess this is always the case with programs like this).

Another "consumer" program that utilizes deconvolution and you might like to play with in Topaz labs "InFocus". It is not one of their better known plug-ins, and they have a separate sharpening program("Detail 3"). I think people think it is an overall sharpening tool will be disappointed... as things can get ugly fast if you overdo it. Nearly all Microscope images are always suffering from diffraction to some degree, and I find that if "InFocus" is used very judiciously before a "normal" sharpening routine is seems to work nicely on certain images.

https://www.topazlabs.com/infocus

austrokiwi1
Posts: 350
Joined: Sun Sep 14, 2014 10:53 am

Post by austrokiwi1 »

Charles Krebs wrote:
............................ IMO you do need to be careful not to overdo it (I guess this is always the case with programs like this).

............................................ I think people think it is an overall sharpening tool will be disappointed... as things can get ugly fast if you overdo it.
This is the first time I have really tried to understand how a particular software feature operates. I am most likely stating the obvious: it seems that to get the best out of deconvolution and other sharpening tools, it's best to understand the parameters the feature performs best with. The problem is that the people selling these software packages highlight the strengths and minimize the weaknesses, making finding the optimal parameters guesswork for people like me. I suspect if I had known about deconvolution previously, I would have gone for a program like Focus Magic. In Capture One, the diffraction correction option is off by default. When activated, the change on screen is fast; however, when the edited raw file is processed to another format, the deconvolution adds a substantial amount of processing time. I assume the same applies to Focus Magic and InFocus?
Still learning,
Cameras: Sony A7RII, Olympus OMD EM10 II
Macro lenses: Printing Nikkor 105mm, Sony FE 90mm F2.8 Macro G, Schneider Kreuznach Makro Iris 50mm F2.8, Schneider Kreuznach APO Componon HM 40mm F2.8, Mamiya 645 120mm F4 Macro (used with Mirex tilt-shift adapter), Olympus 135mm F4.5 bellows lens, Olympus 80mm bellows lens, Olympus 60mm F2.8

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA

Post by rjlittlefield »

Bill Janes, welcome to the party! Thanks very much for linking to the test chart from Bart van der Wolf. His works are often masterpieces of clarity, and I think this one is no exception.

I apologize for my own late contribution to this thread. I was struggling with what to say and how to say it. I'm still not sure this is quite what I want, but I hope it will be helpful anyway.

First, to quickly review...

In this context, the term "convolution" just means a weighted sum of shifted copies. "Deconvolution" just means trying to figure out what the original image was, given the sums and possibly the weights.
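To make that definition concrete, here is a tiny numeric sketch (my own illustration) showing that a weighted sum of shifted copies matches what a library convolution routine computes:

```python
import numpy as np

signal = np.array([0.0, 0.0, 1.0, 0.0, 0.0])    # a "point source"
weights = np.array([0.25, 0.5, 0.25])           # blur kernel

# A weighted sum of shifted copies of the signal...
shifted_sum = (0.25 * np.roll(signal, -1)
               + 0.50 * signal
               + 0.25 * np.roll(signal, 1))

# ...is exactly what convolution computes.
assert np.allclose(shifted_sum, np.convolve(signal, weights, mode="same"))
```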

It turns out that MTF (Modulation Transfer Function) is a very helpful tool for thinking about convolution, deconvolution, and related concepts.

Here's a graph to get us started. The lower red line shows the theoretical MTF of a lens with no aberrations and a circular aperture, suffering only from diffraction. That curve starts at MTF=1 for coarse detail (on the left), falls smoothly to a cutoff where MTF=0 at some particular frequency of fine detail (nu_0), and stays 0 for all higher frequencies. Meanwhile the green line shows the MTF of what I've called a "Fantasy ideal" lens that is not troubled by such worldly restrictions as diffraction. It has MTF=1, period. If only we could get there, life would be very good indeed!

Image
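For anyone who wants to reproduce the red curve, this is the standard MTF formula for an aberration-free lens with a circular aperture (a sketch; the 550 nm wavelength and f/8 are illustrative choices):

```python
import numpy as np

def diffraction_mtf(nu, wavelength_mm=550e-6, f_number=8.0):
    """MTF of a perfect circular-aperture lens; cutoff at nu_0 = 1/(lambda*N)."""
    nu0 = 1.0 / (wavelength_mm * f_number)            # ~227 cycles/mm here
    s = np.clip(nu / nu0, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(s) - s * np.sqrt(1.0 - s * s))

nu = np.linspace(0.0, 250.0, 6)     # spatial frequencies, cycles/mm
print(diffraction_mtf(nu))          # 1.0 at nu=0, falling smoothly to 0 at cutoff
```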

Any operation that takes one image and turns it into another image has an MTF. Combining any two operations is equivalent to some more complex operation, which also has an MTF. In many circumstances, the MTF of the combined operation is simple to compute:

MTF(op1 followed by op2) = MTF(op1) * MTF(op2)

Now we can see what's needed in order to "undo" the effects of diffraction. If the result of undoing diffraction is supposed to be something like our fantasy ideal, then it must be true that:

MTF(diffraction followed by undoing diffraction) = MTF(diffraction) * MTF(undoing diffraction) = 1

and therefore

MTF(undoing diffraction) = 1 / MTF(diffraction)

Graphically, that looks like this. I'm carefully showing you only the area where things are well behaved.

Image

Of course things are not so well behaved everywhere. At any frequencies where the MTF of an operation is greater than 1, random noise in the image may be increased. The amount of the increase is proportional to the MTF value, so for 1/Diffraction, the increase can be very large indeed:

Image

Of course such large increase in noise is not desirable, so steps are taken to keep it in check.

Regardless of the mechanism for doing that, the effect on the MTF has to be very much the same:

Image
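One common way of keeping the gain in check is a Wiener-style regularized inverse, sketched below. Capture One's actual mechanism is not published; the regularization constant k here is an illustrative assumption.

```python
import numpy as np

def regularized_inverse(mtf, k=0.01):
    """Approximately 1/MTF where the MTF is large; rolls off toward 0
    near cutoff, so noise is not amplified without bound."""
    mtf = np.asarray(mtf, dtype=float)
    return mtf / (mtf**2 + k)

mtf = np.array([1.0, 0.8, 0.5, 0.2, 0.05, 0.0])
print(regularized_inverse(mtf))   # close to 1/MTF at first, tamed near zero
```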

Skipping all the middle steps, that leads us to this summary graph, to which I've added a question:

Is this "deconvolution" or "sharpening"?
Image

Of course the question is rhetorical, but I think it's important.

Under assumptions that probably apply here (e.g. linear, positive, spatially invariant, radially symmetric), most everything there is to know about an image operation is contained in its MTF.

So the answer to the question is that, from the outside, you cannot tell whether the adjusted image has been treated with deconvolution or just with a well-chosen sharpening.

In fact if you search the web for a while, you'll find articles that talk about using aggressive sharpening filters as a way of recovering contrast that had been lost through diffraction. I haven't written any of those articles, but I do use that technique on virtually every high mag image I make, and I think of it just as diagrammed above: apply aggressive sharpening so as to push the MTF curve up to roughly flat until very close to cutoff.

This is not to say that the two approaches are equivalent.

The big advantage with deconvolution is that it provides a much larger framework, one that may facilitate more accurate corrections in simple cases and definitely allows feasible corrections in more complex cases such as motion blur.

On the other hand, traditional sharpening has some advantages too: it's commonly available, very fast, and comes packaged with highly interactive interfaces that make it easy for the user to tune the settings for best visual appearance.

Which one is better? Beats me. I assume it depends on both the circumstances and the user. Personally I like the concept of deconvolution because I'm a math guy, but mostly I use aggressive sharpening because I like how it's packaged.

Now, to address some of austrokiwi1's specific questions...
I assume that the deconvolution “formula” used in the software is an interpolation (is that the right word?) of a much more complex algorithm.
I think the word you're looking for would be "approximation", the idea of doing something that's not exact but is close enough to be useful.

Most computed deconvolutions are going to be the result of some "iterative approximation" procedure that essentially takes a current estimate, evaluates how good it is, tries to improve it, and repeats the process until either the estimate is "good enough" according to some error threshold, or no further improvements can be made, or time runs out. There's probably not a "formula" at all, in the sense of a fixed sequence of arithmetic into which you plug inputs, and out of which you get an answer. Each step in the iteration certainly involves formulas, but the unspecified number of iterations provides a more powerful framework. (As an example of the increased power: solutions to polynomials of arbitrarily high order can be computed quickly using iterative approximation, but by formulas only up to order 4.)
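As a concrete illustration of that estimate/evaluate/improve loop (a sketch only; landweber_deconvolve is a hypothetical helper written for this post, not any product's implementation), here is Landweber iteration, a simple relative of Richardson-Lucy:

```python
import numpy as np

def landweber_deconvolve(observed, psf, steps=200, rate=1.0):
    """Iteratively nudge the estimate to shrink the mismatch between
    (estimate convolved with psf) and the observed 1-D signal."""
    estimate = observed.astype(float).copy()
    psf_flipped = psf[::-1]                       # adjoint of convolution
    for _ in range(steps):
        predicted = np.convolve(estimate, psf, mode="same")
        residual = observed - predicted           # how wrong is the estimate?
        estimate += rate * np.convolve(residual, psf_flipped, mode="same")
    return estimate

# A point source blurred by a simple kernel, then partially restored.
psf = np.array([0.25, 0.5, 0.25])
truth = np.zeros(21)
truth[10] = 1.0
observed = np.convolve(truth, psf, mode="same")
print(landweber_deconvolve(observed, psf)[8:13])  # peak climbs back toward 1
```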
At f/16 I could see no significant difference between the original and deconvolved images.
This result is disappointing and suggests that either the tool is unnecessarily limited or there is some control that you had set incorrectly.

Certainly there's nothing that deconvolution could do to move the cutoff frequency.

But below that frequency, a proper deconvolution should still be able to push the MTF up close to 1 until not far below cutoff. The result of that change would look substantially sharper than the original.

I note that f/16 at m=0.6 is not much different from the effective f/24 that I would get with a 20X NA 0.42 microscope objective. I'm definitely disappointed to think that such a straightforward problem may be outside the range of what Capture One can handle. (BTW, it's definitely not outside the range of what can be treated effectively with a step or two of aggressive unsharp masking. Failing to aggressively sharpen any high mag image most likely leaves useful detail unseen because of its low contrast.)
diffraction control was useful but within quite narrow (I expected this) parameters.
The way I think about this, diffraction control is especially useful on some "middle half" of the MTF curve. Near the extreme left side, not enough contrast is lost to diffraction to be a problem. Near the right side, just before cutoff, you cannot recover to MTF=1 because of the noise problem. But in the "middle half", say for 0.8>MTF>0.2, you can restore to MTF=1 without getting hit too bad by noise. So, that's the area where I think that diffraction control should be most useful.
I used a lighting system that highlighted the lustre and the results on applying the Diffraction control were horrible.
I can't be sure what's going on here, but let me offer one possibility.

As background, note that diffraction is spatially invariant and depends only on the aperture size. Those properties depend on every point of the aperture seeing the same view of the subject.

But at 1:1 and f/8 (so f/16 effective), the entrance cone is about 3.6 degrees wide. With illumination like you've described, it's a safe bet that lots of small details on the coin produced bright specular reflections for some parts of the aperture, and dark reflections for other parts of the aperture. The "utilized aperture" will not be the same for all points in the image, and probably won't even be round for most of them.
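That 3.6-degree figure is easy to check (a quick sketch, taking NA = 1/(2*N_eff) for an effective f/16 in air):

```python
import math

n_eff = 16.0
na = 1.0 / (2.0 * n_eff)                          # numerical aperture in air
full_angle_deg = math.degrees(2.0 * math.asin(na))
print(f"{full_angle_deg:.1f} degrees")            # ~3.6
```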

So, even the most basic assumptions of the method are violated. It would be nice if it ended up working well anyway, but certainly no surprise when it doesn't.

BTW, most users of deconvolution methods have problems that are far simpler than macro photography. If you're viewing the night sky, or a distant landscape, or a brightfield slide in a microscope, then it's a pretty good approximation that every point on the subject is seen equally well by all parts of the aperture. That "utilized aperture" problem is something that kicks in with reflected illumination at higher magnifications. It is the source of many strange effects with microscope objectives at 10X and above.

--Rik

austrokiwi1
Posts: 350
Joined: Sun Sep 14, 2014 10:53 am

Post by austrokiwi1 »

Rik: thanks for your addition to the conversation. This is just a quick acknowledgement of the info you have provided. I am still processing all the information contained in your post, so I have no substantive questions at the moment.
Still learning,
Cameras: Sony A7RII, Olympus OMD EM10 II
Macro lenses: Printing Nikkor 105mm, Sony FE 90mm F2.8 Macro G, Schneider Kreuznach Makro Iris 50mm F2.8, Schneider Kreuznach APO Componon HM 40mm F2.8, Mamiya 645 120mm F4 Macro (used with Mirex tilt-shift adapter), Olympus 135mm F4.5 bellows lens, Olympus 80mm bellows lens, Olympus 60mm F2.8

Ultima_Gaina
Posts: 108
Joined: Sat Jan 28, 2017 11:19 pm

Post by Ultima_Gaina »

Here is the way I see deconvolution when we deal with diffraction.

Each pixel receives a "certain amount" of unwanted information (light) from the rays destined for its adjacent pixels. An even smaller amount of information is received from its second tier of neighbors, and so on.

So the information carried by each pixel is a well-defined weighted average of information coming from itself and from the cluster of pixels around it.

If we had access to that "certain amount" of information each pixel radiates to and receives from its neighbors, we could apply the reverse function and, for each pixel, clean up the pollution coming from its neighbors.

The problem is that the "certain amount" is very specific to the lens-camera combo.
One needs access to detailed camera and lens design data to calculate the transfer function specific to each lens/camera combo, at all apertures.
I also suppose that, through extensive lab tests, the same transfer function, specific to each camera-lens combo, could be determined empirically.

Once we know the transfer function, a deconvolution algorithm can apply the inverse function and subtract, for each pixel, the exact amount of pollution coming from each of its surrounding pixels.

The result is real information recovery! And that's a major difference compared with the traditional sharpening, where information is actually lost.

I believe that this is how the Canon DPP software works (since they know exactly how their lenses are designed, and they also have access to massive amounts of test results).

Conceptually: in a very noisy restaurant where a different language is spoken at each table (read: pixel), the conversation at each table can be carried on without difficulty, even when a high level of foreign-language "noise" (read: diffraction) is received from the adjacent tables.

Edit: I checked this PetaPixel article, which I found today:

https://petapixel.com/2013/07/29/hack-t ... perscopes/

The method described there is certainly based on deconvolution, trying to discover empirically the transfer function for each objective. It is most probably very similar to Canon DPP's deconvolution algorithm.
Last edited by Ultima_Gaina on Sat Feb 08, 2020 4:32 pm, edited 1 time in total.

elimoss
Posts: 41
Joined: Wed Sep 12, 2018 11:31 am

Post by elimoss »

Well, this has been informative. It's pretty clear that deconvolution sharpening is just another form of contrast enhancement, albeit perhaps a smart one, making informed decisions about the desired manipulations in the frequency domain.

So we are left with guessing as the best (only?) way to 'recover' information above sufficiently high frequencies. I suppose we already do some of the same with noise, color, and the like.
