1:20 to 1:5 (0.05x to 0.2x)

Have questions about the equipment used for macro- or micro- photography? Post those questions in this forum.

Moderators: rjlittlefield, ChrisR, Chris S., Pau

Lou Jost
Posts: 5948
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Re: 1:20 to 1:5 (0.05x to 0.2x)

Post by Lou Jost »

Don't despair, there are always unexpected bargains out there. The 1980s photolithography lenses, which originally cost hundreds of thousands of dollars according to the graph, are available on eBay now for a few hundred dollars...

blekenbleu
Posts: 146
Joined: Sat May 10, 2008 5:37 pm
Location: U.S.
Contact:

Re: 1:20 to 1:5 (0.05x to 0.2x)

Post by blekenbleu »

patta wrote:
Wed Jan 25, 2023 7:10 am
--> can't you go monochromatic?? No more CA! Either get illumination with blue LEDs;
I tried blue LEDs, but green gave better results.
  • with a standard Bayer filter, only 1/4 of the photosites have decent signal/noise under blue illumination.
  • Perhaps because my optics are old and dirty, blue also seems to scatter noticeably.
Metaphot, Optiphot 1, 66; AO 10, 120, and EPIStar 2571
https://blekenbleu.github.io/microscope

chris_ma
Posts: 570
Joined: Fri Mar 22, 2019 2:23 pm
Location: Germany

Re: 1:20 to 1:5 (0.05x to 0.2x)

Post by chris_ma »

In Photoshop you can choose the resampling filter in the Image Size dialog box.
I usually choose the normal bicubic filter and add a little sharpening at the end.
For this test you actually want the double filter hit from downscaling and then upscaling, so you can see at which percentage you start to lose resolution.
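For anyone who prefers to script this rather than click through Photoshop, below is a minimal sketch of the same round-trip test. It assumes Python with Pillow; the filenames and percentages are placeholders, not anything from the thread.

```python
# Round-trip resampling test: downscale by a factor X with a Lanczos filter,
# then upscale back to the original size in a second, separate step, and save
# the result for side-by-side comparison with the original capture.
# Requires Pillow >= 9.1 for Image.Resampling; filenames are placeholders.
from PIL import Image

def round_trip(path, factor, out_path):
    img = Image.open(path)
    w, h = img.size
    small = img.resize(
        (max(1, round(w * factor)), max(1, round(h * factor))),
        resample=Image.Resampling.LANCZOS,
    )
    restored = small.resize((w, h), resample=Image.Resampling.LANCZOS)
    restored.save(out_path)

# Sweep a few percentages; the point where the round-tripped file stops looking
# identical to the original (ignoring noise and debayer artefacts) is roughly
# how much real resolution the capture holds.
for pct in (90, 75, 60, 50, 40, 33, 25):
    round_trip("capture.tif", pct / 100.0, f"roundtrip_{pct}.tif")
```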

jvanhuys wrote:
Thu Jan 26, 2023 12:46 am
chris_ma wrote:
Thu Jan 26, 2023 12:17 am
jvanhuys wrote:
Wed Jan 25, 2023 5:21 pm
Could you elaborate at which point the oversampling cancels out the gain in sharpness? Like, at which point does it average out? Is there a rule or a formula to follow? I can of course Google this, but I always prefer speaking to an informed person first.
that depends on the resolution of the full optical imaging system.

let's say, for the sake of argument, you're using a 40MP camera but a lens that can only resolve 10MP.

in this case you need at least 4 captures (more with overlap) for a 160MP image (containing 40MP of real resolution) and then downsample to 25%. you'll also want to account for the debayer resolution loss but in this example this is so small that it will be a vanishing factor.

it's simple to do a test:
capture a sample image with your setup, downsample by a factor of X with a lanczos filter, break concatenation of the scaling, upscale by a factor of 1/X.
see at which point the result still looks identical to the original image (ignoring debayer artefacts and noise)
Hey! How do I access the filtering algorithm in Photoshop? I can only access it in Nuke, which is totally overkill for still images. Also, Nuke is procedural, so it's not exactly fun to work with layers. I've attached the filtering algorithms available to me in Nuke. Interesting that you mentioned Lanczos, as it's generally what we use in high-end VFX, Lanczos6 to be specific.

Why would you break concatenation though? That incurs a filtering hit... that is not good. The point of concatenation is to give a net transform with a single filter hit instead of multiple ones. Or are you saying you need to break concatenation to enable the effect of oversampling? If so, how would that work?
chris

rjlittlefield
Site Admin
Posts: 23564
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Re: 1:20 to 1:5 (0.05x to 0.2x)

Post by rjlittlefield »

jvanhuys wrote:
Wed Jan 25, 2023 5:21 pm
Could you elaborate at which point the oversampling cancels out the gain in sharpness? Like, at which point does it average out? Is there a rule or a formula to follow? I can of course Google this, but I always prefer speaking to an informed person first.
Sorry, I do not know any rule or formula for this.

The one case that I have measured carefully was a long time ago, and the question I was addressing was slightly different. I started off wanting to understand how the rendering of fine detail degraded as it approached pixel size, then got interested in looking at how well a particular camera's digital sensor really captured a high resolution optical image. The answer to the first question was complicated. The answer to the second question turned out to be that compared to a simulated perfect sensor, the camera saw about sqrt(2) per axis less resolution than perfect. That is, on a USAF 1951 resolution chart, the simulated perfect sensor resolved about 3 more elements than the actual sensor did. This is all discussed near the beginning of the 3-page thread at viewtopic.php?t=2439 , "On the resolution and sharpness of digital images..."

I find myself often repeating one snippet from that discussion:
In order for our digital image to "look sharp", we have to shoot it or render it at a resolution that virtually guarantees some of the detail in the optical image will be lost. If you see some tiny hairs just barely separated at one place in the digital image, it's a safe bet that there are quite similar tiny hairs at other places that did not get separated, just because they happened to line up differently with the pixels.

Conversely, in order to guarantee that all the detail in the optical image gets captured in the digital image, we have to shoot and render at a resolution that completely guarantees the digital image won't look sharp.

So, there's "sharp" and there's "detailed" -- pick one or the other 'cuz you can't have both. What a bummer!
That snippet was written from a standpoint of pixel-peeping, where the viewer can clearly see one pixel versus its neighbor. If we change the situation to be say "2-square-meter prints examined by eye", then of course you can have arbitrarily sharp AND detailed prints, at only the cost of using lots of tiny pixels (about 1.1 gigapixels, at 600 pixels per inch = 24 pixels/mm.)
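Just to spell out the arithmetic behind that pixel count (my own check, assuming a square print):

```latex
\sqrt{2\,\mathrm{m}^{2}} \approx 1.414\ \mathrm{m}\ \text{per side}, \qquad
1.414\ \mathrm{m} \times 24\ \mathrm{px/mm} \approx 33{,}900\ \mathrm{px}, \qquad
33{,}900^{2} \approx 1.15\times10^{9}\ \mathrm{px} \approx 1.1\ \text{gigapixels.}
```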

Everything I've seen suggests that in practice the advantage of oversampling tends to increase smoothly with the amount of oversampling you do.

I don't know any way to determine an optimum point, except by running some experiments and evaluating the results in light of your own available resources and criteria for image quality. Tradeoffs abound.

By the way, how many of these things do you need to scan? There's a big difference between heroic effort to make one or two "best possible" scans for personal interest, and doing it day after day to make money.

--Rik

Lou Jost
Posts: 5948
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Re: 1:20 to 1:5 (0.05x to 0.2x)

Post by Lou Jost »

In my experience there is no good reason to avoid over-sampling; certainly no theoretical reason. There is only the practical issue of storage space. I've never understood why people worry about this as if over-sampling is something bad. It's not. For best image quality you should over-sample as much as is practical given your storage capacity. Along with that, we should go beyond the practice of judging an image at "100%" enlargement, which gives a false premium to sensors with low pixel densities. We should be measuring our zoom ratio as a fraction of the image width. This would be a much more useful measure; it would tell you how big your print could be, for example.
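To put illustrative numbers on that (the monitor and sensor figures below are my own assumptions, not Lou's): at "100%", a high-pixel-count sensor is effectively being judged as a much larger print than a low-pixel-count one, which is exactly the false premium described above, while a zoom expressed as a fraction of the image width treats both sensors the same. A rough Python sketch:

```python
# Compare what a "100%" view really means for two sensors on the same monitor,
# versus a zoom defined as a fraction of the image width. Numbers are
# illustrative assumptions only.
MONITOR_PPI = 96  # a typical desktop monitor

def equivalent_print_width_cm(image_width_px, view_ppi=MONITOR_PPI):
    """Width the full image would span if printed at the monitor's pixel density."""
    return image_width_px / view_ppi * 2.54

sensors = [("24 MP sensor", 6000), ("61 MP sensor", 9504)]

for name, width_px in sensors:
    print(f"{name}: viewing at 100% is like inspecting a "
          f"{equivalent_print_width_cm(width_px):.0f} cm wide print up close")

# Fraction-of-width zoom: show the same 25% slice of the frame on a 2560 px
# display, regardless of how many pixels the sensor has.
SCREEN_PX = 2560
for name, width_px in sensors:
    zoom = SCREEN_PX / (0.25 * width_px)
    print(f"{name}: showing 25% of the frame width needs a {zoom:.2f}x screen zoom")
```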

jvanhuys
Posts: 49
Joined: Sun Jan 22, 2023 9:24 pm
Location: Seoul, South Korea

Re: 1:20 to 1:5 (0.05x to 0.2x)

Post by jvanhuys »

Lou Jost wrote:
Thu Jan 26, 2023 6:15 am
Don't despair, there are always unexpected bargains out there. The 1980s photolithography lenses, which originally cost hundreds of thousands of dollars according to the graph, are available on eBay now for a few hundred dollars...
Hey Chris, if you don't mind... can you share a lens model/name example? I'm not that knowledgeable on lens histories. Are you referring to stuff like Printing Nikkors or the old Apo/Repro Nikkors that created 600mm image circles and larger? If it's the latter, I had an old Repro Nikkor (Apo 360mm f/9) which I used quite a lot, and it was rubbish. You could get away with calling it an APO in terms of CA, but it was almost offensive in how soft the image was. Maybe it has to be that way, because at 600mm you're bound to have massive circles of confusion.

I picked up the lens for next to nothing, then paid SK Grimes about $700 USD to convert the barrel to something mountable on a lens board. It still hurts thinking about that wasted money. I guess we learn most by the mistakes we make... or something.

I did find this lens on YouTube, regarding your topic:
https://www.youtube.com/watch?v=O2GslOf_D6w
Last edited by jvanhuys on Thu Jan 26, 2023 6:39 pm, edited 1 time in total.

jvanhuys
Posts: 49
Joined: Sun Jan 22, 2023 9:24 pm
Location: Seoul, South Korea

Re: 1:20 to 1:5 (0.05x to 0.2x)

Post by jvanhuys »

Lou Jost wrote:
Thu Jan 26, 2023 6:07 pm
In my experience there is no good reason to avoid over-sampling; certainly no theoretical reason. There is only the practical issue of storage space. I've never understood why people worry about this as if over-sampling is something bad. It's not. For best image quality you should over-sample as much as is practical given your storage capacity. Along with that, we should go beyond the practice of judging an image at "100%" enlargement, which gives a false premium to sensors with low pixel densities. We should be measuring our zoom ratio as a fraction of the image width. This would be a much more useful measure; it would tell you how big your print could be, for example.
I need to study and internalize your last sentence... I feel there's knowledge to be gained from it...

jvanhuys
Posts: 49
Joined: Sun Jan 22, 2023 9:24 pm
Location: Seoul, South Korea

Re: 1:20 to 1:5 (0.05x to 0.2x)

Post by jvanhuys »

rjlittlefield wrote:
Thu Jan 26, 2023 5:39 pm
jvanhuys wrote:
Wed Jan 25, 2023 5:21 pm
Could you elaborate at which point the oversampling cancels out the gain in sharpness? Like, at which point does it average out? Is there a rule or a formula to follow? I can of course Google this, but I always prefer speaking to an informed person first.
Sorry, I do not know any rule or formula for this.

The one case that I have measured carefully was a long time ago, and the question I was addressing was slightly different. I started off wanting to understand how the rendering of fine detail degraded as it approached pixel size, then got interested in looking at how well a particular camera's digital sensor really captured a high resolution optical image. The answer to the first question was complicated. The answer to the second question turned out to be that compared to a simulated perfect sensor, the camera saw about sqrt(2) per axis less resolution than perfect. That is, on a USAF 1951 resolution chart, the simulated perfect sensor resolved about 3 more elements than the actual sensor did. This is all discussed near the beginning of the 3-page thread at viewtopic.php?t=2439 , "On the resolution and sharpness of digital images..."

I find myself often repeating one snippet from that discussion:
In order for our digital image to "look sharp", we have to shoot it or render it at a resolution that virtually guarantees some of the detail in the optical image will be lost. If you see some tiny hairs just barely separated at one place in the digital image, it's a safe bet that there are quite similar tiny hairs at other places that did not get separated, just because they happened to line up differently with the pixels.

Conversely, in order to guarantee that all the detail in the optical image gets captured in the digital image, we have to shoot and render at a resolution that completely guarantees the digital image won't look sharp.

So, there's "sharp" and there's "detailed" -- pick one or the other 'cuz you can't have both. What a bummer!
That snippet was written from a standpoint of pixel-peeping, where the viewer can clearly see one pixel versus its neighbor. If we change the situation to be say "2-square-meter prints examined by eye", then of course you can have arbitrarily sharp AND detailed prints, at only the cost of using lots of tiny pixels (about 1.1 gigapixels, at 600 pixels per inch = 24 pixels/mm.)

Everything I've seen suggests that in practice the advantage of oversampling tends to increase smoothly with the amount of oversampling you do.

I don't know any way to determine an optimum point, except by running some experiments and evaluating the results in light of your own available resources and criteria for image quality. Tradeoffs abound.

By the way, how many of these things do you need to scan? There's a big difference between heroic effort to make one or two "best possible" scans for personal interest, and doing it day after day to make money.

--Rik
Hi Rik,

Happy Friday and thanks for this. I'd just like to comment on a few of your observations which match what I see on screen versus in a print. When I stitch using my 180mm Symmar-S, and thus oversample, the images look softer on screen than shots taken with my Micro Nikkor 105mm, which is definitely a sharper lens. That holds regardless of whether I pixel-shift or not (by the way, from my testing there is 0% resolution gain from a 16-image shift versus a 4-image shift).

This is where my tests reaffirm what you mention. On screen, the Micro-Nikkor 105mm is almost screen-tearingly sharp at 100%, but the 'softer' yet oversampled (stitched) Symmar-S image prints significantly sharper at 17x22".

Additionally, I'd like to point out that I'm unable to add any sharpening when printing that large, as it just creates artefacts and tiny halos that are almost invisible on screen but really can't be avoided on the print. It's almost like the print is the final word in what constitutes a sharp and artefact-free image. Even something as subtle as a 3-pixel high-pass filter over that massive image introduces what I would call digital grain around high-contrast areas... it's bizarre. Again, the artefacts are almost invisible on screen.
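For context on where those halos come from: a high-pass sharpen is usually built as "image plus (image minus a blurred copy)", and that difference overshoots on both sides of a strong edge. Here is a rough sketch of that construction using numpy and Pillow; it is only my assumption of the method, not necessarily what Photoshop's filter does internally, and the radius/amount values and filenames are arbitrary.

```python
# High-pass sharpening sketch: the "high pass" is the image minus a Gaussian
# blurred copy; adding it back overshoots around strong edges, which is what
# shows up as halos / digital grain in a large print.
import numpy as np
from PIL import Image, ImageFilter

def high_pass_sharpen(path, radius=3.0, amount=1.0):
    img = Image.open(path).convert("L")      # grayscale keeps the sketch short
    blurred = img.filter(ImageFilter.GaussianBlur(radius))
    base = np.asarray(img, dtype=np.float32)
    low = np.asarray(blurred, dtype=np.float32)
    high_pass = base - low                   # ~0 in flat areas, spikes at edges
    sharpened = np.clip(base + amount * high_pass, 0, 255).astype(np.uint8)
    return Image.fromarray(sharpened)

# The overshoot is only a few pixels wide, so it hides at screen zoom levels
# but becomes visible around high-contrast edges in a big print.
high_pass_sharpen("stitch_master.tif", radius=3.0, amount=0.8).save("sharpened.tif")
```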

I try to do a painting every week or two, then print what I've documented and file it accordingly. I find it quite rewarding... it's almost like I'm creating the paintings/sculptures for the camera nowadays. There's a performance aspect to it.

jvanhuys
Posts: 49
Joined: Sun Jan 22, 2023 9:24 pm
Location: Seoul, South Korea

Re: 1:20 to 1:5 (0.05x to 0.2x)

Post by jvanhuys »

Hi everyone,

In order for this thread to move closer to a resolution I've decided to make a little announcement.

I didn't hear back from Schneider regarding a lens quote; they probably didn't want to scare me. I ended up ordering a Super Symmar HM 120mm f/5.6 (late serial number) from Japan. I'm hoping this is 'the one'.

My decision was based on:
  • Schneider didn't get back to me regarding some of their newer lenses, which is maybe a good thing
  • The Makro Symmar 120mm HM f/5.6 having the best CA suppression I've ever seen in a lens, other than my Printing Nikkor. It's definitely not as sharp, but I'm quite strongly biased against CA
  • Despite the official docs not revealing the magnification range, it seems the SS HM is optimized for infinity to 1:3, which would suggest Schneider designed it to tie in seamlessly with the Makro Symmar HM 120mm f/5.6. I'll put the link to the article below
Once the lens arrives in the next few weeks, I will report back here with my results, possibly a sample pic. If it's not up to scratch, I'll sell it, but I'm hoping it's the one that sort-of ticks 80% of my boxes, especially regarding large stitches in favour of over-sampling as opposed to a single-frame image from a Laowa or Sigma.

Have a lovely weekend, I won't be posting for a while. Here's the link where a guy discusses how the Makro Symmar HM and Super Symmar tie in as a lens-package covering the most used magnifications:
https://www.largeformatphotography.info ... r-Close-Up
Last edited by jvanhuys on Fri Jan 27, 2023 6:04 am, edited 2 times in total.

Lou Jost
Posts: 5948
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Re: 1:20 to 1:5 (0.05x to 0.2x)

Post by Lou Jost »

Hey Chris, if you don't mind... can you share a lens model/ name example?
I think you mean me? Yes, the one you found on YouTube is an example. Zeiss S-Planars are the most common ones on eBay but there are others. They are among the best lenses for their magnification, with insanely high resolution, but they are corrected for only one color of light.
I had an old Repro Nikkor (Apo 360mm f/9) which I used quite a lot, and it was rubbish.
Nikon's Repro-Nikkors are a different class than the Apo-Nikkors. Nowadays the Repro-Nikkors (such as the 85mm f/1.0) are much more valuable than the Apo-Nikkors, which are cheap on eBay. But as you noticed, they really are apochromatic. They used to cost as much as a small car. They can be pretty good when you consider their large image circles. Were you using them on film or on a sensor?

The Apo-Symmar 120 comes in several different varieties. Their designed magnifications are written on their rear lens flange.

lothman
Posts: 959
Joined: Sat Feb 14, 2009 7:00 am
Location: Stuttgart/Germany

Re: 1:20 to 1:5 (0.05x to 0.2x)

Post by lothman »

jvanhuys wrote:
Thu Jan 26, 2023 6:40 pm
Happy Friday and thanks for this. I'd just like to comment on a few of your observations which match what I see on screen versus in a print. When I stitch using my 180mm Symmar-S, and thus oversample, the images look softer on screen than shots taken with my Micro Nikkor 105mm, which is definitely a sharper lens. That holds regardless of whether I pixel-shift or not (by the way, from my testing there is 0% resolution gain from a 16-image shift versus a 4-image shift).
I found that as long as the lens can out-resolve the 60 MP sensor, I can achieve better results with 16-shot pixel shift. For example, the Laowa 2.5-5x at 2.5x shows an improvement with pixel shift, but no longer at 5x, as shown here. So probably you had a shaky setup, or the Micro Nikkor was already at the limit of a 60 MP sensor.

JKT
Posts: 420
Joined: Fri Oct 28, 2011 9:29 am
Location: Finland
Contact:

Re: 1:20 to 1:5 (0.05x to 0.2x)

Post by JKT »

rjlittlefield wrote:
Thu Jan 26, 2023 5:39 pm
Sorry, I do not know any rule or formula for this.
Well, maybe not exactly that, but I ran into an approximation formula for combining optical and sensor resolution.
Pic1.png
It is pretty heavily simplified and I don't know how much it should be believed. In order to give it some other degrees of freedom, I thought it could be modified as
Pic2.png
So that defines the sensor resolution as the pixel pitch times a constant k, and questions the use of the square root by changing it into a parameter as well.

If that is deemed to have any promise, the next job would be to estimate the two parameters. Unfortunately that is beyond me. If I read the other thread correctly, there seems to be a number of potential values for at least the factor k. If this could be chosen and n defined even reasonably, it would allow some further development like...
Pic3.png
which would be a useful formula IMHO. It gives a reduction factor for estimating how much of the sensor resolution is left after it is combined with the optical resolution.
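Since the attached images do not come through here, the following is only a guess at what Pic1-Pic3 might contain, reconstructed from the descriptions above as a quadrature-style combination of optical and sensor blur; every symbol below is an assumption.

```latex
% Pic1 (guess): optical and sensor contributions combined by a square root
b_{sys} \approx \sqrt{R^{2} + R_{sensor}^{2}}

% Pic2 (guess): sensor term written as k times the pixel width e,
% and the square root generalized to an exponent n
b_{sys} \approx \left( R^{\,n} + (k\,e)^{\,n} \right)^{1/n}

% Pic3 (guess): fraction Y of the sensor's resolution that survives the combination
Y = \frac{k\,e}{b_{sys}} = \left( 1 + \left( \frac{R}{k\,e} \right)^{\! n} \right)^{-1/n}
```

With n = 2, the second form collapses back to the square-root combination, and Y gives the percentage of the sensor's horizontal pixels that effectively survives.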

With it, one could estimate that when you have optical resolution R and a sensor with pixel width e, your image resolution is similar to what you would get with a sensor having Y% of the horizontal pixels of your original sensor.

IF it were possible to get this to work at least reasonably, it would give a simple function for estimating, with a single number, the potential resolution of different combinations of front optics and sensors. Sure, it would be an estimate, but would it be accurate enough to be of use, or would it just make false predictions? That is a question I sure can't answer. In any case, it assumes the same resolution over the entire sensor. It would be simple to apply it to only part of the sensor area by using the pixels in that part as the base pixel count, but that's the only such correction I can think of.

What do you think? Does that formula have any potential?

rjlittlefield
Site Admin
Posts: 23564
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Re: 1:20 to 1:5 (0.05x to 0.2x)

Post by rjlittlefield »

JKT wrote:
Fri Jan 27, 2023 1:09 pm
rjlittlefield wrote:
Thu Jan 26, 2023 5:39 pm
Sorry, I do not know any rule or formula for this.
<assorted math>

What do you think? Does that formula have any potential?
Short answer: no, or at best not yet.

Longer explanation follows...

When you mention "other thread", I think you're referring to viewtopic.php?p=148498#p148498 , in which I introduced a similar formula for estimating MTF 50 at varying amounts of defocus.

The situation there is very different. For the MTF 50 versus defocus problem, there already exists an accurate physics-based method to do the calculation. The only problem is that the physics-based method is computationally awkward because it involves numeric integration of some complicated functions. To avoid that problem, I used a curve-fitting process to find an alternative function that was simpler and more friendly but still reproduced the physics-based calculation quite accurately.

In contrast, for the current thread there is no underlying accurate model to be matched, or at least none that I know of.

If such a model did exist, either theoretical or experimental, then it could make sense to haul out a toolkit of approximation methods and try to fit the possibly complex model with a simpler formula. But with no model to be matched, hauling out generic formulas is at best premature.

--Rik

JKT
Posts: 420
Joined: Fri Oct 28, 2011 9:29 am
Location: Finland
Contact:

Re: 1:20 to 1:5 (0.05x to 0.2x)

Post by JKT »

rjlittlefield wrote:
Fri Jan 27, 2023 2:05 pm
JKT wrote:
Fri Jan 27, 2023 1:09 pm

<assorted math>

What do you think? Does that formula have any potential?
Short answer: no, or at best not yet.

Longer explanation follows...

When you mention "other thread", I think you're referring to viewtopic.php?p=148498#p148498 , in which I introduced a similar formula for estimating MTF 50 at varying amounts of defocus.
Actually I meant this one: viewtopic.php?t=2439 - it has quite a lot of relevant tests and comments, but it also highlights that there really is no single answer to the question.
In contrast, for the current thread there is no underlying accurate model to be matched, or at least none that I know of.

If such a model did exist, either theoretical or experimental, then it could make sense to haul out a toolkit of approximation methods and try to fit the possibly complex model with a simpler formula. But with no model to be matched, hauling out generic formulas is at best premature.
Theoretical - no. Experimental would start with having a way to generate a set of test results. The data would then fit sufficiently or not; I don't see the need for a prior experimental model. You define the required accuracy, and if the test results fit within that, you have a candidate to test against separate datasets. If those fit as well, the model is acceptable for the defined purpose.
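A minimal sketch of that fit-then-validate loop, reusing the k and n form guessed at above. Everything here is synthetic and scipy is assumed; it only shows the mechanics described, not real data.

```python
# Fit k and n to one set of (optical blur, pixel pitch, measured blur) results,
# then check the fitted model against a separate, held-out dataset.
# All "measurements" below are synthetic; this only illustrates the workflow.
import numpy as np
from scipy.optimize import curve_fit

def system_blur(X, k, n):
    """Candidate model: combine optical blur R and pixel width e with exponent n."""
    R, e = X
    return (R ** n + (k * e) ** n) ** (1.0 / n)

rng = np.random.default_rng(0)

def fake_measurements(count, k_true=2.0, n_true=2.0, noise=0.1):
    R = rng.uniform(1.0, 10.0, count)   # optical blur, arbitrary units
    e = rng.uniform(2.0, 6.0, count)    # pixel width, same units
    y = system_blur((R, e), k_true, n_true) + rng.normal(0.0, noise, count)
    return (R, e), y

train_X, train_y = fake_measurements(40)
test_X, test_y = fake_measurements(20)

(k_fit, n_fit), _ = curve_fit(system_blur, train_X, train_y, p0=[1.0, 2.0])
worst_error = np.max(np.abs(system_blur(test_X, k_fit, n_fit) - test_y))

print(f"fitted k = {k_fit:.2f}, n = {n_fit:.2f}, worst held-out error = {worst_error:.3f}")
# If the held-out error stays within the accuracy you defined up front, the
# model is acceptable for that purpose; if not, the functional form needs work.
```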

RobertOToole
Posts: 2627
Joined: Thu Jan 17, 2013 9:34 pm
Location: United States
Contact:

Re: 1:20 to 1:5 (0.05x to 0.2x)

Post by RobertOToole »

jvanhuys wrote:
Thu Jan 26, 2023 7:21 pm
Hi everyone,

In order for this thread to move closer to a resolution I've decided to make a little announcement.

I didn't hear back from Schneider regarding a lens quote; they probably didn't want to scare me. I ended up ordering a Super Symmar HM 120mm f/5.6 (late serial number) from Japan. I'm hoping this is 'the one'...
Years ago I downloaded all the LF lens data PDFs from the old Schneider Optics USA site before they took it offline. As far as I remember, all the LF non-macro Symmars (the APO-Symmar, Super-Symmar, etc.) were corrected for distortion and MTF at infinity; as they focused closer, distortion went up and sharpness went down. The published MTFs were for infinity to 1:10. Also, center sharpness went down and corner sharpness improved when stopped down to f/22. I tried to find the Super Symmar 120 lens data but don't seem to have it on my drive!

Large format enlarging lenses should be better suited to your use, since they are usually corrected for best MTF and low distortion at 1:5 to 1:20.

Best,

Robert
