Pixel size and image resolution

Have questions about the equipment used for macro- or micro- photography? Post those questions in this forum.

Moderators: rjlittlefield, ChrisR, Chris S., Pau

ray_parkhurst
Posts: 3416
Joined: Sat Nov 20, 2010 10:40 am
Location: Santa Clara, CA, USA
Contact:

Post by ray_parkhurst »

Beatsy wrote:Isn't pixel shift more about improving colour than resolution? Each pixel in the final image gets a full set of (true) R, G and B values in the inputs - none interpolated from the Bayer matrix. This may have the effect of making the image look sharper (clearer), but resolution is not increased (AFAIK).
Well, in my limited and probably incorrect understanding, in the Bayer matrix each output pixel has 50% resolution on G, and 25% resolution on R and B. So if you do a 4-pixel composite, as is done with the Sony, I believe you get full G resolution and 50% resolution on R and B. Since most luminance info is in G, this gives you good measurable resolution, and better but still not full color.
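For concreteness, here is a toy sketch (plain Python, an RGGB tile assumed) of the counting argument: what fraction of photosites measures each color in a single exposure, and which color samples a four-shot one-pixel-shift sequence collects at a single pixel position. It is an illustration of the counting only, not any manufacturer's actual pipeline.

```python
# Sketch: color sampling in an RGGB Bayer mosaic, and what a 4-shot
# one-pixel-shift sequence collects at one pixel position.
# Illustrative only; demosaicing and pipeline details are assumed away.

BAYER = [["R", "G"],
         ["G", "B"]]  # one 2x2 RGGB tile, repeated across the sensor

def color_at(row, col):
    """Color filter over the photosite at (row, col)."""
    return BAYER[row % 2][col % 2]

# Single exposure: fraction of photosites measuring each color
counts = {"R": 0, "G": 0, "B": 0}
for r in range(2):
    for c in range(2):
        counts[color_at(r, c)] += 1
print(counts)  # {'R': 1, 'G': 2, 'B': 1} -> G 50%, R 25%, B 25%

# Four exposures shifted by (0,0), (0,1), (1,0), (1,1) whole pixels:
# a fixed scene point is visited by every cell of the 2x2 tile,
# so it gets sampled through G twice and through R and B once each.
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
samples_at_origin = sorted(color_at(dr, dc) for dr, dc in shifts)
print(samples_at_origin)  # ['B', 'G', 'G', 'R']
```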

Lou Jost
Posts: 5945
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Post by Lou Jost »

Steve, that's true for cameras that shift the pixels in integer increments (Sony, Pentax) but not those that shift by fractional pixels (Olympus).

Even the whole-pixel shifting can increase resolution over the unshifted image. But as I said, all it does is make up for the deficiencies of the Bayer filter in the unshifted image.

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

My guess is these pixel shifting techs are all about super resolution. I have not kept up with the latest technologies in this area and, to be honest, this stuff is often muddied by manufacturers to confuse people. I did some work on SR back in 2008, creating high-resolution images from a set of images.

My method of acquisition was to drink a lot of coffee so my hands shake like crazy but are still "steady" (hint: subtle movement of the camera), then take these images and feed them through a computer algorithm to get an image with better resolution, less noise, and less moire. Best of all, all I needed was coffee :D

I cannot disclose too much as I am under a non-disclosure agreement, but the key is all about subpixel alignment of the images plus some fairly simple manipulation.

With pixel shift tech the shift is known, which could speed up the algorithm and improve the final image, but I think these manufacturers are creating a lot of confusion in an effort to make it look like black magic.
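The known-shift case can be sketched with a minimal 1-D shift-and-add: simulate low-resolution captures at known sub-pixel offsets, then place each coarse pixel back on a fine grid at its offset and average. The helper names (`downsample`, `shift_and_add`) and the step-edge scene are invented for this illustration; real implementations work in 2-D with proper resampling and weighting.

```python
# Minimal 1-D shift-and-add super-resolution sketch (pure Python).
# Assumes each frame's sub-pixel shift is already known, as in the
# camera pixel-shift case; shaky hands would require estimating it.

def downsample(signal, factor, offset):
    """Simulate a low-res capture: shift by `offset` fine samples,
    then average each block of `factor` fine samples into one pixel."""
    shifted = signal[offset:]
    n = len(shifted) // factor
    return [sum(shifted[i*factor:(i+1)*factor]) / factor for i in range(n)]

def shift_and_add(frames, factor, offsets, out_len):
    """Place each low-res pixel back on the fine grid at its known
    offset and average the overlapping contributions."""
    acc = [0.0] * out_len
    cnt = [0] * out_len
    for frame, off in zip(frames, offsets):
        for i, v in enumerate(frame):
            for k in range(factor):  # this pixel covered these fine cells
                j = off + i * factor + k
                if j < out_len:
                    acc[j] += v
                    cnt[j] += 1
    return [a / c if c else 0.0 for a, c in zip(acc, cnt)]

# High-res "scene": a step edge on a fine grid
scene = [0.0] * 8 + [1.0] * 8
factor = 2
offsets = [0, 1]  # the two frames differ by half a coarse pixel
frames = [downsample(scene, factor, o) for o in offsets]
hi = shift_and_add(frames, factor, offsets, len(scene))
print(frames[0])  # one frame can only place the edge to coarse precision
print(hi)         # combined result localizes the edge on the fine grid
```

Combining the two half-pixel-shifted frames places the edge transition within a single fine cell pair, which one coarse frame alone cannot encode.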

Lou Jost
Posts: 5945
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Post by Lou Jost »

The algorithms are not really very similar to each other. Sony's and Pentax's algorithms are not "super-resolution" algorithms and are not analogous to your shaking hands. They are only intended to remove the resolution limits imposed by the Bayer filter. They require very precise one-pixel-width shifts. The Olympus version on the other hand is indeed analogous to your shaky hand method, in that the shifts are not integer multiples of the pixel size, and so the resolution can actually exceed the nominal resolution of the sensor. The controlled shifting means that they can achieve a good resolution increase with a minimal number of shots, whereas the shaky-hand method requires double or triple that number of shots, and loss of many pixels on the picture edges, to increase the resolution by the same amount.

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

Lou Jost wrote:The algorithms are not really very similar to each other. Sony's and Pentax's algorithms are not "super-resolution" algorithms and are not analogous to your shaking hands. They are only intended to remove the resolution limits imposed by the Bayer filter.
Do you know that for a fact? How do they remove the resolution limits imposed by the Bayer filter? Do you know the underlying algorithm, and why it can do that?

My guess is that the underlying algorithm aligns the images and applies some calculation to them, and even if they shift by an integer number of pixels, that only speeds up the algorithm. Sure, shaky hands need a lot more images, as it is hard to control shaky hands.
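The "align, then calculate" step can be illustrated with a minimal 1-D version: estimate the shift between two frames by cross-correlation, then refine the peak to sub-pixel precision with a parabolic fit. The function names are invented for this sketch; production registration code typically uses 2-D FFT-based phase correlation instead.

```python
# Sketch of the alignment step: integer shift by cross-correlation,
# refined to sub-pixel precision with a parabolic fit around the peak.
# 1-D pure-Python illustration of the idea only.

def cross_correlate(a, b, max_lag):
    """Correlation score of b against a for lags -max_lag..max_lag."""
    scores = {}
    for lag in range(-max_lag, max_lag + 1):
        s = 0.0
        for i in range(len(a)):
            j = i + lag
            if 0 <= j < len(b):
                s += a[i] * b[j]
        scores[lag] = s
    return scores

def estimate_shift(a, b, max_lag=4):
    scores = cross_correlate(a, b, max_lag)
    best = max(scores, key=scores.get)  # integer-pixel estimate
    # Parabolic refinement using the two neighbors of the peak
    if best - 1 in scores and best + 1 in scores:
        y0, y1, y2 = scores[best - 1], scores[best], scores[best + 1]
        denom = y0 - 2 * y1 + y2
        if denom != 0:
            return best + 0.5 * (y0 - y2) / denom
    return float(best)

a = [0, 0, 1, 3, 7, 3, 1, 0, 0, 0]
b = [0, 0, 0, 1, 3, 7, 3, 1, 0, 0]  # same feature, shifted by one pixel
print(estimate_shift(a, b))          # close to 1.0
```

Once each frame's shift is estimated this way, the combining step is the same whether the shifts came from a sensor actuator or from shaky hands.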

Sometimes, things get called different names, but the underlying method is the same. This is why I think these manufacturers are creating confusion.

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

Or let me put it this way: if I were given such a task (get a better image from multiple ones, whether shifted by a whole pixel or half a pixel, or taken with shaky hands), the approach is the same: align them and do some calculation. Be it removing the Bayer limit, improving resolution, or improving quality, it is all the same; it is about aligning and extracting more detailed info from multiple images.

In fact, I think I might just write a program like this :D for my own amusement.

rjlittlefield
Site Admin
Posts: 23562
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

the key is all about subpixel alignment of images and some trivial manipulation
This is sounding more than vaguely similar to https://petapixel.com/2015/02/21/a-prac ... photoshop/ .

We discussed this a couple of years ago, at http://www.photomacrography.net/forum/v ... 967#183967 and in the surrounding thread, and in the followup thread at http://www.photomacrography.net/forum/v ... hp?t=29822 .

--Rik

Lou Jost
Posts: 5945
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Post by Lou Jost »

I feel the Sony, Pentax, and Olympus literature is fairly clear. Sony and Pentax take four shots per shifted image, using integer pixel shifts to measure each color at each "point" on the image. So they don't have to interpolate, and this can improve resolution and color accuracy. This method can't make an image with more megapixels than the sensor has, and Sony and Pentax don't claim this; the Sony shifted-pixel files are the same size in megapixels as the unshifted ones. That's very clear. Olympus (with 8 shots per shifted image) is very different and is claiming (and achieving) an actual increase in megapixels, as well as measuring R, G, and B color info at all points.

Edited after reading mjkzz's last entry: Sure, all these approaches improve picture quality by taking multiple images. However, you can't increase image resolution above the sensor resolution unless you have non-integer shifts, so that is a fairly fundamental difference.

Edit after reading Rik's note: Yes, I remember that "super-resolution" thread very well, and did some programming myself along those lines. The importance of non-integer shifting quickly becomes clear when you try it.

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

rjlittlefield wrote:
the key is all about subpixel alignment of images and some trivial manipulation
This is sounding more than vaguely similar to https://petapixel.com/2015/02/21/a-prac ... photoshop/ .

We discussed this a couple of years ago, at http://www.photomacrography.net/forum/v ... 967#183967 and in the surrounding thread, and in the followup thread at http://www.photomacrography.net/forum/v ... hp?t=29822 .

--Rik
That is a lot to read, but I will read them.

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

Lou Jost wrote:I feel the Sony, Pentax, and Olympus literature is fairly clear. Sony and Pentax take four shots per shifted image, using integer pixel shifts to measure each color at each "point" on the image. So they don't have to interpolate, and this can improve resolution and color accuracy. This method can't make an image with more megapixels than the sensor has, and Sony and Pentax don't claim this; the Sony shifted-pixel files are the same size in megapixels as the unshifted ones. That's very clear. Olympus (with 8 shots per shifted image) is very different and is claiming (and achieving) an actual increase in megapixels, as well as measuring R, G, and B color info at all points.

Edited after reading mjkzz's last entry: Sure, all these approaches improve picture quality by taking multiple images. However, you can't increase image resolution above the sensor resolution unless you have non-integer shifts, so that is a fairly fundamental difference.

Edit after reading Rik's note: Yes, I remember that "super-resolution" thread very well, and did some programming myself along those lines. The importance of non-integer shifting quickly becomes clear when you try it.
To create a robust solution that can cope with camera shake, subject motion, etc. during image acquisition, it is better not to treat the images as shifting by whole pixels, but rather to use that as a bounding constraint when estimating the shifts during alignment, thus obtaining subpixel alignment and better results.

At least that is what I would do.

Lou Jost
Posts: 5945
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Post by Lou Jost »

If the goal is super-resolution (resolving details which cannot be resolved at the nominal sensor resolution), methods based on randomness require many more shots than methods based on known non-integer pixel shifts. The time required to make the sequence is often the most important limiting factor on the usability of this technique, because subjects tend to move sooner or later. So precise methods can actually be more robust than random methods.

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

Rik wrote in another thread:
However, the technique of superresolution by random shifting is a different matter entirely -- not nearly as predictable or reliable, and remarkably easy to misinterpret.
I am not so sure about this; maybe it depends on the application. And the term "super resolution", to me, implies not just a higher pixel count but improved final image quality.

From what I did, and for that specific application, the shaky method worked remarkably well, and it was in real-time video (about 8 fps during testing).

Suppose, hypothetically, we have an infrared imaging system on a "shaky" platform (say a moving vehicle). The nature of IR at night makes it very noisy, but an SR algorithm can reduce that noise and provide far better, stabilized video.

I did read that thread back then, now it has been 10 years since 2008, so I guess I can participate a little on this topic.
Last edited by mjkzz on Thu May 10, 2018 12:27 am, edited 1 time in total.

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

Lou Jost wrote:If the goal is super-resolution (resolving details which cannot be resolved at the nominal sensor resolution), methods based on randomness require many more shots than methods based on known non-integer pixel shifts. The time required to make the sequence is often the most important limiting factor on the usability of this technique, because subjects tend to move sooner or later. So precise methods can actually be more robust than random methods.
OK, let me ask you this: with the "precise" pixel shifts implemented by these camera manufacturers, only 4 images are acquired, while for shaky hands usually more than 10 images are acquired. Say we are in a low-light environment: which method do you think will produce a better final image in terms of noise?

Lou Jost
Posts: 5945
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Post by Lou Jost »

We need to separate three different things going on here. Noise reduction is one aspect, reducing or eliminating the bad effects of the Bayer filter is another, and producing sub-pixel resolution is a third thing.

Lots of random shots are a great way to reduce noise and to get more complete RGB data for the image.

Four shots with precision integer pixel shifts are a very efficient way to get complete RGB data for the image, but they won't reduce noise as much as your ten random shots.

The Olympus algorithm makes eight shots precisely shifted by half a pixel. This gives complete RGB data, reduces noise by about the same amount as your ten shots, and produces significant sub-pixel resolution. I do not know how many shots you would have to take at random to achieve the same sub-pixel resolution, but it would surely have to be many more than eight.

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

Lou Jost wrote:We need to separate three different things going on here. Noise reduction is one aspect, reducing or eliminating the bad effects of the Bayer filter is another, and producing sub-pixel resolution is a third thing.

Lots of random shots are a great way to reduce noise and to get more complete RGB data for the image.

Four shots with precision integer pixel shifts are a very efficient way to get complete RGB data for the image, but they won't reduce noise as much as your ten random shots.
I think we are viewing things differently. From my point of view, if I were asked to perform these tasks, I would approach them all with the same algorithm.
The Olympus algorithm makes eight shots precisely shifted by half a pixel. This gives complete RGB data, reduces noise by about the same amount as your ten shots, and produces significant sub-pixel resolution. I do not know how many shots you would have to take at random to achieve the same sub-pixel resolution, but it would surely have to be many more than eight.
Ah, this is a perfect example. I can take a camera locked on a tripod, take, say, 16 images, and possibly produce a better image (in noise level) than Olympus's 8. Why? Noise reduction really does not need pixel shifting; however, it does depend on image alignment, which is part of the SR algorithm. You need to align them for a robust implementation. Our recent discussion on vibration caused by the shutter is an excellent example.
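The noise side of this argument can be checked with a quick Monte Carlo sketch: averaging N aligned frames of a static scene cuts random noise by roughly sqrt(N), with no pixel shifting involved. The scene value, noise level, and trial count below are arbitrary choices for the simulation.

```python
# Monte Carlo sketch: averaging N noisy frames of the same static
# scene value reduces the noise of the result by about sqrt(N).

import random
import statistics

random.seed(42)
TRUE_VALUE = 100.0   # the "scene" at one pixel (arbitrary)
NOISE_SIGMA = 10.0   # per-frame random noise (arbitrary)

def capture():
    """One noisy measurement of the pixel."""
    return TRUE_VALUE + random.gauss(0, NOISE_SIGMA)

def stack_std(n_frames, trials=2000):
    """Std-dev of the mean of n_frames captures, over many trials."""
    means = [statistics.fmean(capture() for _ in range(n_frames))
             for _ in range(trials)]
    return statistics.stdev(means)

s8, s16 = stack_std(8), stack_std(16)
print(s8, s16)  # roughly 10/sqrt(8) = 3.54 and 10/sqrt(16) = 2.5
```

So, for pure noise reduction, 16 static tripod frames do beat 8 frames, shifted or not; the shifting only matters for the Bayer and resolution aspects.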
