Sample density needed to resolve Rayleigh features


Lou Jost
Posts: 5943
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Post by Lou Jost »

...diffraction is routinely interpreted as a loss even though the underlying issue is that the information is still present but has been redistributed in a way that both makes it cumbersome to access and impedes the use of other information.
Can you explain that or give a reference? Information is not the same thing as energy. If a diffracted photon can end up anywhere, it ceases to convey information about the location of its point of origin. This seems to me like a genuine loss.

Ultima_Gaina
Posts: 108
Joined: Sat Jan 28, 2017 11:19 pm

Post by Ultima_Gaina »

Lou Jost wrote:
...diffraction is routinely interpreted as a loss even though the underlying issue is that the information is still present but has been redistributed in a way that both makes it cumbersome to access and impedes the use of other information.
Can you explain that or give a reference? Information is not the same thing as energy. If a diffracted photon can end up anywhere, it ceases to convey information about the location of its point of origin. This seems to me like a genuine loss.
That photon's trajectory is not random. It's very much determined by the transfer function of the medium (i.e., the lens, focal length, aperture, and camera combination). If you know that transfer function, the original information can be recovered by applying the inverse function. Canon's DPP deconvolution algorithm is a great practical example.
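
For illustration, here is a minimal numpy sketch of the principle, using Wiener deconvolution with a known point-spread function. This is only a sketch of the general technique, not Canon's actual DPP algorithm (whose internals are not public), and the function name and noise-to-signal parameter are my own:

Code:
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Estimate the original image from a blurred one, given the
    point-spread function (the spatial-domain transfer function).
    The psf must be the same shape as the image and centered."""
    # Blurring is multiplication by the optical transfer function (OTF)
    # in the frequency domain, so undo it there.
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    # Wiener filter: a regularized inverse. The noise-to-signal term nsr
    # keeps frequencies where the OTF is tiny from being amplified into noise.
    wiener = np.conj(otf) / (np.abs(otf) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * wiener))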

Lou Jost
Posts: 5943
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Post by Lou Jost »

We must be talking about different time scales. That photon's trajectory is indeed random, and it cannot be reconstructed. If you have lots of photons and average them, then yes, you can reconstruct the source. If that's what you meant, I am with you.

Ultima_Gaina
Posts: 108
Joined: Sat Jan 28, 2017 11:19 pm

Post by Ultima_Gaina »

Lou Jost wrote: We must be talking about different time scales. That photon's trajectory is indeed random, and it cannot be reconstructed. If you have lots of photons and average them, then yes, you can reconstruct the source. If that's what you meant, I am with you.
Yes. While the behavior of individual photons is unpredictable, diffraction is a statistical effect. A certain amount of the information destined for a specific pixel is scattered over its neighboring pixels according to a predetermined transfer function, specific to the medium the light travels through. This is why the original information destined for that pixel can largely be recovered when the transfer function is known. But that involves heavy processing, which is still too much for current cameras (DPP does the deconvolution externally).
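
And a toy sketch of the forward process that such a deconvolution undoes, using a Gaussian blur as a stand-in for the real diffraction PSF (which would be an Airy pattern):

Code:
import numpy as np
from scipy.ndimage import gaussian_filter

# All the light "destined" for the center pixel of a 9x9 patch...
scene = np.zeros((9, 9))
scene[4, 4] = 1.0

# ...gets spread over the neighboring pixels by the system's blur.
blurred = gaussian_filter(scene, sigma=1.0)

print(blurred[4, 4])   # the center pixel keeps only part of the energy (~0.16)
print(blurred.sum())   # but the total is ~1.0: redistributed, not destroyed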

Lou Jost
Posts: 5943
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Post by Lou Jost »

A certain amount of the information destined for a specific pixel is scattered over its neighboring pixels
What I am having trouble with is the use of "information" where most physicists would say "energy".

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Lou Jost wrote:
...diffraction is routinely interpreted as a loss even though the underlying issue is that the information is still present but has been redistributed in a way that both makes it cumbersome to access and impedes the use of other information.
Can you explain that or give a reference?
I took palea's comment to refer to the reduction in contrast due to diffraction, below the cutoff frequency. In that regime, the "loss of MTF" due to diffraction blurring really is just a matter of information having been spread out, not lost forever.

Above the cutoff frequency, things are completely different. MTF=0 at and above the cutoff implies that any infinite grating with a sinusoidal intensity profile will be imaged the same as uniform gray. That's definitely a loss. It's also a lot of words, which implies a lot of preconditions ("assumptions"), so there's always some potential for tricks that work around those. Multiple exposures with different structured illumination come to mind.
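
To put a number on "cutoff": for incoherent light the MTF reaches zero at a spatial frequency of 1/(wavelength x effective f-number). A quick back-of-the-envelope calculation, with example values only:

Code:
# Incoherent diffraction cutoff: nu_c = 1 / (wavelength * effective f-number)
wavelength_mm = 550e-6      # 550 nm green light, expressed in mm
f_number = 8.0              # example effective aperture

cutoff_cycles_per_mm = 1.0 / (wavelength_mm * f_number)
print(round(cutoff_cycles_per_mm))   # ~227 cycles/mm at f/8; finer sinusoids image as flat gray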

The issue that a photon may end up anywhere (and it always can!) is a matter of noisy sampling. Always a problem, but a different one.
palea wrote: I don't feel it's so much about right and wrong as it is completeness of understanding.
I like what I think I'm hearing here. I agree, more complete understanding is definitely a good thing.

I vividly recall how the forum's understanding of DOF has evolved from standard formulas and often misapplied rules of thumb like "smaller sensors give more DOF", to our current understanding which is surely an order of magnitude more complete and correct.

If we can make similar improvements in understanding sampling and image reconstruction, that would be superb!

--Rik

Ultima_Gaina
Posts: 108
Joined: Sat Jan 28, 2017 11:19 pm

Post by Ultima_Gaina »

Lou Jost wrote:
A certain amount of the information destined for a specific pixel is scattered over its neighboring pixels
What I am having trouble with is the use of "information" where most physicists would say "energy".
The energy in itself is not information, but it can carry information.
Usually, the information is modulated on top of the energy carried between a transmitter and a receiver.

A high amount of energy destined for a pixel can carry the "white" information, while a lower amount of energy can carry the "black" information.
"Black" and "white" are information in that context because the amount of light transmitted is not random: the transmitter intended to transmit "black" or "white", using the energy of the light as a carrier.
Information theory studies how well that "black" and "white" information is transferred between the transmitter and the receiver, and what methods can minimize transfer errors while maximizing the channel capacity (e.g., compression, encoding/decoding, redundancy, channel adaptation).
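
Here is a toy sketch of that transmitter/receiver picture, with made-up photon-count levels and Poisson shot noise standing in for the channel:

Code:
import numpy as np

rng = np.random.default_rng(1)

# Transmitter: encode "white" as a high mean photon count, "black" as a low one.
levels = {"black": 20, "white": 200}              # made-up mean counts
message = ["white", "black", "white", "white", "black"]

# Channel: photon arrivals are Poisson-distributed (shot noise), so the counts jitter.
received = [rng.poisson(levels[s]) for s in message]

# Receiver: it knows the encoding rule, so it decides with a threshold between the levels.
threshold = (levels["black"] + levels["white"]) / 2
decoded = ["white" if n > threshold else "black" for n in received]

print(received)
print(decoded == message)   # True as long as the noise stays inside the margin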

mawyatt
Posts: 2497
Joined: Thu Aug 22, 2013 6:54 pm
Location: Clearwater, Florida

Post by mawyatt »

Ultima_Gaina wrote:
Lou Jost wrote:
A certain amount of the information destined for a specific pixel is scattered over its neighboring pixels
What I am having trouble with is the use of "information" where most physicists would say "energy".
The energy in itself is not information, but it can carry information.
Usually, the information is modulated on top of the energy carried between a transmitter and a receiver.

A high amount of energy destined for a pixel can carry the "white" information, while a lower amount of energy can carry the "black" information.
"Black" and "white" are information in that context because the amount of light transmitted is not random: the transmitter intended to transmit "black" or "white", using the energy of the light as a carrier.
Information theory studies how well that "black" and "white" information is transferred between the transmitter and the receiver, and what methods can minimize transfer errors while maximizing the channel capacity (e.g., compression, encoding/decoding, redundancy, channel adaptation).
In our discussions here I would think of black as a lack of photons, and pure black as no detectable photons.

All this also raises the question: does "information" have mass, and if so, is it bounded in speed?

I recall from very long ago in grad school (yes, very long ago!) something called phase velocity in wave theory, which can exceed light speed. An analogy the prof used was an ocean wave hitting a sea wall: the point where the crest meets the wall has an unbounded speed, going from +infinity to -infinity as the wave passes through normal incidence to the wall. If one could somehow "encode" information on that crest, it could travel along the wall, parallel to it, at unlimited speed.

I'm not a communications expert, but I thought this was intriguing, and I recall from way back that researchers claimed to have exceeded light-speed communication with a setup that used this orthogonal concept.

Best,
Research is like a treasure hunt, you don't know where to look or what you'll find!
~Mike

Ultima_Gaina
Posts: 108
Joined: Sat Jan 28, 2017 11:19 pm

Post by Ultima_Gaina »

mawyatt wrote:
In our discussions here I would think of black as a lack of photons, and pure black as no detectable photons.
Not necessarily. Any energy levels will do, as long as the receiver can differentiate them and knows the encoding/modulation rule used by the transmitter.

Moreover, it can even be the other way around. The transmitter can encode "white" as low energy and "black" as high energy (think of the good ol' negatives). If the receiver knows the rule and has enough sensitivity to differentiate between the two energy levels (even after the transport channel has altered them), then pure "black" and pure "white" can be replicated by the receiver.

The speed at which information is transmitted depends on the channel conditions. Think bitrate, for example. If the channel is noisy, the bitrate must be reduced to minimize transmission errors and ensure proper information recovery at the other end.

Based on the channel conditions (e.g., bandwidth, thermal noise, and interference), information theory defines an upper limit on the amount of information the channel can carry (at a given error rate).
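
For the classic case of a band-limited channel with Gaussian noise, that limit is the Shannon-Hartley capacity, C = B * log2(1 + S/N). A quick example with arbitrary numbers:

Code:
import math

bandwidth_hz = 1e6                      # example: a 1 MHz channel
snr_db = 20                             # example: 20 dB signal-to-noise ratio
snr = 10 ** (snr_db / 10)

capacity = bandwidth_hz * math.log2(1 + snr)
print(f"{capacity / 1e6:.2f} Mbit/s")   # ~6.66 Mbit/s; no error-free scheme can exceed this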

mawyatt
Posts: 2497
Joined: Thu Aug 22, 2013 6:54 pm
Location: Clearwater, Florida

Post by mawyatt »

Ultima_Gaina wrote:
mawyatt wrote:
In our discussions here I would think of black as a lack of photons, and pure black as no detectable photons.
Not necessarily. Any energy levels will do, as long as the receiver can differentiate them and knows the encoding/modulation rule used by the transmitter.

Moreover, it can even be the other way around. The transmitter can encode "white" as low energy and "black" as high energy (think of the good ol' negatives). If the receiver knows the rule and has enough sensitivity to differentiate between the two energy levels (even after the transport channel has altered them), then pure "black" and pure "white" can be replicated by the receiver.

The speed at which information is transmitted depends on the channel conditions. Think bitrate, for example. If the channel is noisy, the bitrate must be reduced to minimize transmission errors and ensure proper information recovery at the other end.

Based on the channel conditions (e.g., bandwidth, thermal noise, and interference), information theory defines an upper limit on the amount of information the channel can carry (at a given error rate).
When I said "In our discussions here" I'm referring to an image captured on a pixel. Pixels simply record number of photons captured in a given exposure period with a given efficiency. So my understanding is an image capture on a pixel, a blacker subject represents a lack of photons and a whiter subject represents an abundance of photons, in between is grey. At the pixel level a pure black would be no detectable photons and a pure white would be full of photons. In post processing after the pixel has captured the photons, these can be defined as black and white (even inverted as a negative) as required, which depends conversion resolution, curve fitting and so on.

Agreed, in communications simple ON/OFF (ASK) coding can have either polarity, and the same goes for FSK, PSK, BPSK, and so on. Actually, linear coding (AM, PM, FM) can have either polarity too, provided, as you say, the receiver knows this or knows something about what's being received.

When I mentioned communication speed, I meant how fast a bit of information could be sent. I recall some research outlining a form of orthogonal communication, similar to the wave-crest analogy above, that was claimed to communicate a bit at greater than light speed in the medium. Which brings up quantum-entangled photons and the amazing behavior they possess, but I don't want to divert this thread any farther than I already have :roll:

Best,
Research is like a treasure hunt, you don't know where to look or what you'll find!
~Mike

Ultima_Gaina
Posts: 108
Joined: Sat Jan 28, 2017 11:19 pm

Post by Ultima_Gaina »

mawyatt wrote:
When I said "In our discussions here" I'm referring to an image captured on a pixel. Pixels simply record number of photons captured in a given exposure period with a given efficiency. So my understanding is an image capture on a pixel, a blacker subject represents a lack of photons and a whiter subject represents an abundance of photons, in between is grey. At the pixel level a pure black would be no detectable photons and a pure white would be full of photons. In post processing after the pixel has captured the photons, these can be defined as black and white (even inverted as a negative) as required, which depends conversion resolution, curve fitting and so on.
OK. In the context of information theory, sampling rates, etc., we should distinguish between absolute black, which is the total absence of photons, and pure black as defined in photography, which is everything below a certain energy threshold. In the same way, pure white is not an infinite amount of energy, but everything above a certain upper level. Gray is everything in between, at various levels, except in the trivial case, when we only transmit 1 bit of information, meaning only pure white or pure black.

Lou Jost
Posts: 5943
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Post by Lou Jost »

when we only transmit 1 bit of information, meaning only pure white or pure black.
In a photographic system, one pixel is not one bit of information. Values in between also convey real information. In most monochrome cameras there are 256 meaningful levels for each pixel. So eight bits of information.

Ultima_Gaina
Posts: 108
Joined: Sat Jan 28, 2017 11:19 pm

Post by Ultima_Gaina »

Lou Jost wrote:
when we only transmit 1 bit of information, meaning only pure white or pure black.
In a photographic system, one pixel is not one bit of information. Values in between also convey real information. In most monochrome cameras there are 256 meaningful levels for each pixel. So eight bits of information.
Sure. That was only a trivial example of the most elementary theoretical sensor, meant to highlight that pure black is not necessarily the total absence of photons/energy, but it is defined by a threshold agreed between the transmitter and the receiver, based on the channel conditions.

When it comes to bit depth, 256 levels is a limitation coming from the 8-bit JPEG standard.
But modern sensors can sample the light using 10, 12, or even 16 bits.

Even so, when the transmitter has only 1 bit per pixel of information to transmit, that is, when the original image is made only of pure black and pure white with no in-between gray levels (think of the very first "pixelated" digital images from decades ago), the real information transmitted is still 1 bit/pixel, no matter how many bits are used to encode it.
In this situation, the other bits used to encode that 1 bit's worth of information are simply redundant and do nothing but waste channel capacity. Compression can help in such situations, by adapting the transmitted bitrate to the real information bitrate.
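
A little sketch of that point: the entropy of a purely black-and-white image is about 1 bit/pixel even when it is stored at 8 bits/pixel, and a lossless compressor can squeeze out most of the rest:

Code:
import numpy as np

rng = np.random.default_rng(2)

# A "pixelated" image containing only pure black (0) and pure white (255),
# but stored at 8 bits per pixel.
image = rng.choice([0, 255], size=(64, 64)).astype(np.uint8)

# Shannon entropy of the pixel values = the real information content per pixel.
_, counts = np.unique(image, return_counts=True)
p = counts / counts.sum()
entropy = -(p * np.log2(p)).sum()

print(f"{entropy:.3f} bits/pixel of real information")   # ~1.0
print("8 bits/pixel stored, so ~7 bits/pixel are redundant")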

wpl
Posts: 22
Joined: Thu Jun 28, 2012 9:43 am
Location: New Mexico, USA

Post by wpl »

To tie up the loose end concerning sampling vs. binning, I found a relevant article: http://personal.sron.nl/~jellep/spex/ma ... lse95.html. The sampling theorem applies to binning as well. If the original image is band-limited, then it can be reconstructed from the bin values, provided the bin frequency is greater than twice the bandwidth.
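
To see the claim in action, here is a small numpy sketch (my own construction, not taken from the article): a band-limited 1D "image" is reduced to bin averages, and because those averages are a linear function of the unknown Fourier coefficients, the original is recovered exactly whenever the bin frequency exceeds twice the bandwidth:

Code:
import numpy as np

rng = np.random.default_rng(0)

B = 7                                       # bandwidth: highest frequency present in the signal
coeffs = rng.normal(size=2 * B + 1)         # unknown DC / cosine / sine coefficients

def basis(t):
    # Columns: DC, then cos(2*pi*k*t) and sin(2*pi*k*t) for k = 1..B.
    cols = [np.ones_like(t)]
    for k in range(1, B + 1):
        cols += [np.cos(2 * np.pi * k * t), np.sin(2 * np.pi * k * t)]
    return np.column_stack(cols)

M = 32                                      # bins ("pixels"); 32 > 2*B, so the condition holds
sub = 100                                   # sub-samples used to approximate each bin's average
t = (np.arange(M * sub) + 0.5) / (M * sub)

signal = basis(t) @ coeffs                  # the band-limited original
binned = signal.reshape(M, sub).mean(axis=1)          # each bin reports only its average

# The bin averages are linear in the coefficients, so invert that linear map.
bin_model = basis(t).reshape(M, sub, -1).mean(axis=1)
recovered = np.linalg.lstsq(bin_model, binned, rcond=None)[0]

print(np.max(np.abs(recovered - coeffs)))   # ~1e-13: the bins fully determine the original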

mawyatt
Posts: 2497
Joined: Thu Aug 22, 2013 6:54 pm
Location: Clearwater, Florida

Post by mawyatt »

wpl,

Thanks for the link.

Just took a quick look at this.

What first struck me is that I believe the first equation, 7.1, is incorrect: the Fourier transform has omitted the time-domain function f(t) within the integral!
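
For reference, the usual form is F(omega) = integral over all t of f(t) * e^(-i*omega*t) dt (up to whatever sign and normalization convention the article uses); without the f(t) inside the integral, the right-hand side no longer depends on the function being transformed.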

I'll spend more time on it later.

Best,
Research is like a treasure hunt, you don't know where to look or what you'll find!
~Mike
