Immense enlarger lens test database


Lou Jost
Posts: 5987
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Immense enlarger lens test database

Post by Lou Jost »

As many of us use enlarger lenses as camera lenses, people might be interested in this website which ranks an immense list of common and unusual enlarger lenses, checking their performance at low-m distances and at infinity. It does not check them at real macro distances.

https://deltalenses.com/index.php/hall/

I found the site because I was looking for an enlarger lens that does well at infinity, to use for UV astrophotography. Some enlarger lenses are among the very few affordable lenses that, by design, have no focus shift between UV and visible light and have reasonable UV transmission.

bbobby
Posts: 82
Joined: Sat Jan 15, 2022 12:40 pm
Location: Indianapolis, IN

Re: Immense enlarger lens test database

Post by bbobby »

It "ranks" the lenses, but I cannot see any info about where the score is coming from... nor what tests exactly are performed to get this score...

RobertOToole
Posts: 2627
Joined: Thu Jan 17, 2013 9:34 pm
Location: United States
Contact:

Re: Immense enlarger lens test database

Post by RobertOToole »

Lou Jost wrote:
Mon Nov 07, 2022 7:12 am
As many of us use enlarger lenses as camera lenses, people might be interested in this website which ranks an immense list of common and unusual enlarger lenses, checking their performance at low-m distances and at infinity. It does not check them at real macro distances.

https://deltalenses.com/index.php/hall/

I found the site because I was looking for an enlarger lens that does well at infinity, to use for UV astrophotography. Some enlarger lenses are among the very few affordable lenses that, by design, have no focus shift between UV and visible light and have reasonable UV transmission.

It's been mentioned here before in other threads, Lou. A contributor on this forum is one of the contributors to that site: member Simplejoy.

Best,

Robert

Lou Jost
Posts: 5987
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Re: Immense enlarger lens test database

Post by Lou Jost »

It "ranks" the lenses, but I cannot see any info about where the score is coming from... nor what tests exactly are performed to get this score...
I also have no idea, and for that reason (and the absence of test photos) it would be good to treat the information with some skepticism.

ray_parkhurst
Posts: 3432
Joined: Sat Nov 20, 2010 10:40 am
Location: Santa Clara, CA, USA
Contact:

Re: Immense enlarger lens test database

Post by ray_parkhurst »

I had similar concerns that I expressed over on the MFLenses forum. I'm still not sure what actual testing or criteria are used, but since I don't use the lenses in the "portrait" mag range I was not all that concerned. Anyway, here's the discussion thread:

http://forum.mflenses.com/gold-standard ... 83773.html

16-9
Posts: 12
Joined: Mon Jun 20, 2022 1:12 am
Location: London

Re: Immense enlarger lens test database

Post by 16-9 »

The overlap between Delta and activity in this forum is relatively thin, but as someone interested in telling the wider story of 'alt-lenses', I'm always happy to see them put to use. In this thread, we have advocates documenting them for macro/micro use so well that I don't have to!

Delta's grading system only applies to 'sharpness', and was intended to provide a simple but credible way of comparing industrial, cine, projector and enlarger lenses with common taking lenses at 'normal' and 'close-up' ranges. There's a significant gulf between lenses used for high magnification and the rest of the photographic world. Delta's living out there, not so much in here.

I'm concerned to gather information about enlarger lenses in particular, and by extension similar optics, because it's tangibly vanishing. Such 'forgotten' lenses have perennial creative potential, and I wanted to get involved in preserving their legacy. Along the way, some of what we're doing began to reflect on the hierarchy and reputation of enlarger lenses, and I've tried to position the Delta survey as part of a long-term discussion from a fresh perspective. It's already cataloguing a few things that haven't ever been catalogued, and we've uncovered and completed several previously unpublished chronologies to help buyers and collectors. Presently we only have about 2500 images in the library, but more are being added as fast as I can manage: it's not much more than a solo effort, with help from one or two like simplejoy, who is equally enthusiastic about these lenses.

I've updated a fairly well hidden 'About Delta' page on the site that I hope answers questions about how, what, and why. If not, just shout. And if anyone would like to get involved, I need guest editors and collaborators for articles and YouTube videos . . .
https://deltalenses.com/index.php/2022/ ... use-delta/

Lou Jost
Posts: 5987
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Re: Immense enlarger lens test database

Post by Lou Jost »

Thanks, I had seen that page at the outset when I searched for a description of the testing method. But even now, in updated form, I still don't see any information about precisely how sharpness was measured. In fact, the great amount of detail in that page about everything else makes the absence of actual testing details even more puzzling.

I do agree that most people use and test enlarger lenses for short distances, and there is very little information about how they behave at longer distances or at infinity, so any real information is welcome.

In the MFL forum there were some comments that the rankings in this database seemed the opposite of other reviews. But those reviews were probably done at the very close distances that are most commonly used for enlarger lenses. There is an optical reason to expect that the ranking of a lens at very close distances might well be the opposite of the ranking at longer distances, or at infinity. A lens that performs particularly well at very close distances might do so because of a higher degree of optical optimization for those distances, and this optimization will cause it to do poorly at longer distances or at infinity. And vice-versa.

16-9
Posts: 12
Joined: Mon Jun 20, 2022 1:12 am
Location: London

Re: Immense enlarger lens test database

Post by 16-9 »

Thanks for your feedback, Lou. Perhaps it got lost in what became a long article. I'll try to précis:

- All lenses shoot standard targets at apertures up to f11, and at two distances: 50-80cm and 5-10m. Each capture is manually refocused and bracketed for Zones A and C.
- Tests take place in batches that include new candidates and previously tested lenses for comparison.
- Marks out of 10 are awarded for 'sharpness' – although a better term would be 'transparency' – relative to the established ladder, now 200 graded lenses. E.g., if a new lens is judged to render between one lens previously awarded 7.5 in the frame centre and another awarded 7.1, it is assigned a mark of 7.3. Images are overlaid or juxtaposed at 200% view and ranked 'blind' – in the sense that I don't know which image came from which lens when making the comparison.
- Continual comparison has stabilised what started out as a sketchy hierarchy based on comparison to the benchmark of Sigma Art primes. A maximum mark of 9.7 was assigned, somewhat arbitrarily, to the Sigma 105/1.4 at f2.8, frame centre.
- The spreadsheet of individual aperture grades generates league tables based on averaged results, given as a percentage.
- If the percentage exceeds 90%, the lens is classified 'Gold'; 80-90%, Silver; 70-80%, Bronze.
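To make the arithmetic concrete, here's a minimal Python sketch of the grading ladder as described: midpoint interpolation between neighbouring graded lenses, averaging against the 9.7 benchmark ceiling, and the medal thresholds. All the lens marks below are invented, and the midpoint rule is an assumption drawn from the 7.1/7.5 → 7.3 example; Delta's actual bookkeeping may differ:

```python
# Hypothetical sketch of the Delta grading arithmetic described above.
GOLD, SILVER, BRONZE = 90.0, 80.0, 70.0   # percentage thresholds

def interpolate_grade(lower: float, upper: float) -> float:
    """A new lens judged to render between two already-graded lenses
    gets the midpoint of their marks (e.g. between 7.1 and 7.5 -> 7.3)."""
    return round((lower + upper) / 2, 1)

def league_percentage(aperture_marks: list[float], max_mark: float = 9.7) -> float:
    """Average the per-aperture marks and express the result against the
    benchmark ceiling (Sigma 105/1.4 at f2.8 = 9.7), as a percentage."""
    return round(100 * (sum(aperture_marks) / len(aperture_marks)) / max_mark, 1)

def medal(percentage: float) -> str:
    """Classify an averaged percentage into Delta's tiers."""
    if percentage >= GOLD:
        return "Gold"
    if percentage >= SILVER:
        return "Silver"
    if percentage >= BRONZE:
        return "Bronze"
    return "Ungraded"

marks = [7.3, 7.8, 8.1, 7.9]            # invented marks at f2.8, f4, f5.6, f8
print(interpolate_grade(7.1, 7.5))      # -> 7.3
pct = league_percentage(marks)
print(pct, medal(pct))                  # -> 80.2 Silver
```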

Sharpness/transparency is defined for this purpose as absence of local aberrations and ability to resolve fine detail within the limit of the test sensor. Where a lens appears to resolve the sensor fully, results are adjusted on the basis of a more demanding sensor. I appreciate the question 'what does fully resolve mean?' deserves further discussion, and I unpack it a little more on the website.

The sharpness grade is not a deep dive into the full range of properties of each lens. We're gradually doing that for lenses of interest. The objective is to provide a wide perspective on the basic competence of a very diverse range of 'alt-lenses'.

As I explain on Delta, there's an unavoidable margin of error in this exercise that makes mid-table results less reliable than the top end. However, if a lens performs at 'Gold-level', I'm very confident it deserves the accolade. Methodology errors create sub-standard results: I always have to be open to the possibility that a better sample, or something that compromises a test on the day, might result in a lens being undervalued. However, this method never overvalues a lens: if it performs at the highest level, there's only one explanation: it is of the highest quality.
Last edited by 16-9 on Thu Nov 10, 2022 9:31 am, edited 1 time in total.

16-9
Posts: 12
Joined: Mon Jun 20, 2022 1:12 am
Location: London

Re: Immense enlarger lens test database

Post by 16-9 »

Lou Jost wrote:
Tue Nov 08, 2022 8:29 pm
In the MFL forum there were some comments that the rankings in this database seemed the opposite of other reviews.
There was a misreading of results there: I itemised a number of Gold-awarded lenses in sharpness order. Someone saw a lens 'way down the list' they expected to perform better, and a lens higher up the chart they didn't rate. In reality, however, the whole list scored within ±1.5% and the lenses were very similar: you could reshuffle it in almost any order, and I would point to sample variation being as big a factor as differences between designs. Still, that elite club outperformed all other lenses so far tested, at the working distances evaluated (the logarithmic nature of optimisation for magnification isn't as widely realised as it could be). Other lenses may be promoted to Gold standard – I may acquire better samples, for instance – but no Gold-awarded lens can be demoted.

If I recall, the controversial Meogon 80/2.8 caused the trouble. For decades, this lens has oscillated between the top and bottom of recommended lists, and it's taken me a while to understand it. It's an object lesson in why people disagree, but here's the straight story: like the Focotars, the Meogon is strictly optimised for a tight magnification range. If you push it into macro territory, it falls apart. If you take it to infinity, it falls apart. If you shoot it wide open, it's terrible. It's surprisingly average at f8. Recently resurfaced MTF data shows a steep decline in Zone D that probably translated to soft corners with medium format enlargements. However, at f5.6, at a working distance from around 30cm to 100cm, the Meogon 80/2.8 is sharper in Zone A than any enlarger lens I've shot with, and at least as sharp in Zone C. These facts explain all the press that lens ever received. Depending on how you look at/use it, it's either a dud or a legend.

bbobby
Posts: 82
Joined: Sat Jan 15, 2022 12:40 pm
Location: Indianapolis, IN

Re: Immense enlarger lens test database

Post by bbobby »

First, let me mention that having a performance database for older (and not only older) lenses is a great idea... which I admire... and good luck with it... BUT...
16-9 wrote:
Wed Nov 09, 2022 6:45 am
All lenses shoot standard targets at apertures up to f11, and at two distances: 50-80cm and 5-10m. Each capture is manually refocused and bracketed for Zones A and C.
Tests take place in batches that include new candidates and previously tested lenses for comparison.
What are "standard targets"??
What you are describing is... let's say "vague"... Assuming an f/2.8 lens, you will have to take 5 images for f/2.8-4-5.6-8-11; assuming 2 zones, 10 images; assuming only 1 bracketed shot + and -, and again assuming 2 targets at different distances, this means a minimum of 60 images per lens. A minimum of 2 lenses makes it 120 images for only 1 comparison, and that is a lot of work - and even more work to process these images, choose the best, and make the actual comparison... yet I cannot see any examples anywhere... On top of that, the whole testing methodology is missing... How about variation between copies of the same lens - 1 copy could be better than another (manufacturing process, quality control, user abuse, deterioration from age, etc.)... I know that from experience... 1 of the reasons why, when I am testing a lens, I try to have at least 2 copies (in many cases I get 4+ copies)... What about the magnification? A lens well corrected for 1:10 will shine if you can take a picture with it at this magnification, and with 50-80 cm of wiggle room that is probably possible at least in some instances...
Depending on the focal length of the lens, of course, for a distance of 5-10 meters a target should probably be at least 2-3 meters wide... or many targets covering the same area... Uniform lighting could be a big issue here...
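bbobby's shot count can be written out explicitly. The per-factor numbers below simply restate the assumptions in the post (5 apertures, 2 zones, a 3-shot focus bracket, 2 distances); they are his estimates, not a documented Delta protocol:

```python
# Shot-count arithmetic from the post above (all counts are assumptions).
apertures = 5        # f/2.8, 4, 5.6, 8, 11
zones = 2            # Zone A and Zone C
bracket = 3          # nominal focus plus one bracketed shot each way (+/-)
distances = 2        # 50-80 cm and 5-10 m

per_lens = apertures * zones * bracket * distances
print(per_lens)      # 60 images for a single lens
print(per_lens * 2)  # 120 images for the minimal two-lens comparison
```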
16-9 wrote:
Wed Nov 09, 2022 6:45 am
Images are overlaid or juxtaposed at 200% view and ranked 'blind' – in the sense that I don't know which image came from which lens when making the comparison.
Where are these thousands of images? Some of us will gladly take a look, if not at the whole image then at least at the 100% crop used for comparison... Assuming you shoot RAW, what is used for converting to something else? What corrections are applied, if any, and are they always the same for every lens? Too many questions here...
16-9 wrote:
Wed Nov 09, 2022 6:45 am
Sharpness/transparency is defined for this purpose as absence of local aberrations and ability to resolve fine detail within the limit of the test sensor. Where a lens appears to resolve the sensor fully, results are adjusted on the basis of a more demanding sensor. I appreciate the question 'what does fully resolve mean?' deserves further discussion, and I unpack it a little more on the website.
How do you define "absence" - complete absence, what the APO lenses are trying to achieve, less than 1 pixel, less than 2 pixels? What software is used for the demosaicing?
Which "sensor" and what camera? Which sensor is the benchmark, so that if the lens resolves it fully you can go to the next "more demanding" sensor?

And another thing... I see a lot of images on your website of the lenses themselves; for some of them credit is given, so should I assume you took all of the rest, or that they are not copyright protected?

Lou Jost
Posts: 5987
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Re: Immense enlarger lens test database

Post by Lou Jost »

That's more helpful. So you have tested all the lenses yourself and ranked their output by eye relative to other similar lenses, always using the same target. Is that right?

I have some of the same questions as bbobby, but I am ok with some of the "slop" in the methodology.

Variation between copies is important, but most review sites don't have the manpower or funding to test more than one copy. It is great when a reviewer does test multiple copies; those authors deserve much credit and their results deserve more respect. But it is better to have one data point than none.

I also think the test description answers the question about adjusting for magnification. The rankings are based only on the mentioned distances, and for that purpose it doesn't matter what magnification the lens is optimized for. The evaluation is only for the set distances. So a PN105 will rank among the worst of the lenses, and that is correct for the mentioned distances.

ray_parkhurst
Posts: 3432
Joined: Sat Nov 20, 2010 10:40 am
Location: Santa Clara, CA, USA
Contact:

Re: Immense enlarger lens test database

Post by ray_parkhurst »

IMO a limitation of the database is the lack of comparison/reference images, but given the long timeframe over which this method has been used, adding them retroactively would be too much to ask. Perhaps comparison images could be shown for a couple of examples, as a reference to illustrate the method?

Even with my meager testing methodology, with non-standard targets, I show all of the images I use for judgements, as IMO it is important for independent verification and improves the confidence of those looking at my results.

16-9
Posts: 12
Joined: Mon Jun 20, 2022 1:12 am
Location: London

Re: Immense enlarger lens test database

Post by 16-9 »

Some of your mainly-fair questions are answered in the criticised-as-too-long article I was trying to cut down to the essentials! However . . .

What are "standard targets"?? Uniform lighting could be a big issue here...
– The best target is the one that discriminates the desired results. A standard target for generating an MTF chart is necessarily different to the standard target required for assessing chromatic aberration, geometric distortion, etc. The objective of the Delta tests is not to post digitally-generated charts, but to ascertain perceived sharpness for general-purpose taking, and I've found (for instance) USAF and Edmund Optics targets unhelpful in this regard – whereas a scene with depth and fine-grained texture correlates better to real-world behaviour. For Delta, something extremely simple and repeatable was chosen: two identical bank notes fixed sagittally/meridionally on a plywood board, close-lit by a single bare LED studio light in the dark. The scene is high contrast, with fine black markings added to aid focus peaking. The finest structures in the bank notes are a struggle for the benchmarks to render at optimal apertures, and there's practically endless high-frequency information in rakingly-lit plywood. Crucially, the target isn't perfectly flat, which a) flags up the exact location of the focal plane, and b) enables comparison of depth-rendition and 'plasticity', which plays a role in the overall sharpness/transparency grade.

What about the magnification? A lens well corrected for 1:10 will shine if you can take a picture with it at this magnification, and with 50-80 cm of wiggle room that is probably possible at least in some instances... Depending on the focal length of the lens, of course, for a distance of 5-10 meters a target should probably be at least 2-3 meters wide... or many targets covering the same area...
– Magnification is a major factor in lens assessment. You macro chaps have this nailed but it escapes the notice of most photographers because it's logarithmically less important at longer working distances. I have to be much more careful focus bracketing the 50-80cm tests than the 10m tests, which just makes me glad you already have people working on tests at 1:1 and trickier and I can give all that a miss, for now. As I've said elsewhere, reputations don't translate: star performers at a given magnification are not guaranteed to be great outside it. All bets are off. Don't get your hopes up. I often stress the importance of understanding that Delta's grades apply only to the ranges specified – that's why I offer two entirely different sets of grades: one for a metric I consider 'standard short' (ie, art reproduction, food photography, cinematic close-ups) and one for 'long-portrait', which is practically the same, optically, as infinity: a truly pointless metric. No-one shoots anything at infinity. I would be very receptive to adding a third category for Macro grading, and if anyone wants to jump on board and help with that, I would welcome them with open arms. However, the rest of Delta is keeping me busy right now . . .
Given that the Delta tests rely on comparison, identical framing is critical, so lenses are tested in batches of similar focal length and the camera position still usually has to be tweaked for each lens.

Assuming an f/2.8 lens, you will have to take 5 images for f/2.8-4-5.6-8-11; assuming 2 zones, 10 images; assuming only 1 bracketed shot + and -, and again assuming 2 targets at different distances, this means a minimum of 60 images per lens. A minimum of 2 lenses makes it 120 images for only 1 comparison, and that is a lot of work - and even more work to process these images, choose the best, and make the actual comparison... yet I cannot see any examples anywhere... Where are these thousands of images? Some of us will gladly take a look, if not at the whole image then at least at the 100% crop used for comparison...
– I think you've answered your own question. Your assumptions are correct. I have terabytes of captures, stored as TIFFs. There isn't time or space to publish them, and there would be no purpose in doing so. They're distilled down into a very simple, very compact, very easily communicated grading ladder, or 'Hall of Fame'. No-one wants to plough through hundreds of nearly-identical images of plywood boards and bank notes. I don't, and I did.
In simpler times, we were all very pleased with Bjørn Rørslett giving marks out of 5 for 187 Nikon-mount lenses without so much as a picture of a lens. I'm collating production dates, serial numbers, optical formulae, aperture type and sexy example images for lenses that haven't been in production for decades and still punters aren't happy with their free resource!

Assuming you shoot RAW, what is used for converting to something else? What corrections are applied, if any, and are they always the same for every lens? What software is used for the demosaicing? Which "sensor" and what camera? Which sensor is the benchmark, so that if the lens resolves it fully you can go to the next "more demanding" sensor?
– Standard body is a Panasonic Lumix S1R (47.3MP) shooting RAW at ISO 100, processed through DxO PhotoLab 5 with all corrections turned off > LZW TIFF. PhotoLab is used at 200-350% magnification to assess RAW files and select optimal focus. Only the best captures are processed; the rest are discarded. Critical re-tests are made with a Lumix G9 + shift adaptor: a very fussy camera that made me appreciate M43 optics instead of complaining about them.

How about variation between copies of the same lens - 1 copy could be better than another (manufacturing process, quality control, user abuse, deterioration from age, etc.)... I know that from experience... 1 of the reasons why, when I am testing a lens, I try to have at least 2 copies (in many cases I get 4+ copies)...
– It's a problem. There are lenses in Delta of which I own or have tried up to five copies: Schneider Componon-S 50/2.8; Fuji EX 50, 75 and 90; EL-Nikkor 50, 80, 105. Many others are at 2-3 copies (e.g., Minolta CE 50, 80 and Rokkor 75, Rodagon 50/2.8 and 80/4, Taylor Hobson Ental 2-inch). Sometimes you try five and they're so similar you wonder why you bother. More often, when you do that, there's one outlier, below the standard set by the others. And then you wonder whether the single rare sample of that other lens you obtained was representative. The logistics of testing 200 lenses are not to be underestimated. I'm doing my best. Many hands make light work.
Although the problem initially seems intractable - after all, how many samples are truly enough to be sure? And why aren't you asking this question of every lens review you've ever read? - comparative analysis is a powerful tool. The tell for bad data here would be a lens appearing too low in the ranking. As I've explained, the nature of the method guarantees the accuracy of very high marks, but if a lens we might expect to perform well doesn't, it raises a flag. We can apply other comparative filters here: for instance, with enlarger lenses (though not as an optical rule) acuity decreases with increasing focal length, so when the grades from the same manufacturer and a similar optical formula ascend as we descend from 105 to 80 to 50mm, we know we're on the right track. If one seems to be underperforming, perhaps a new sample should be tested. Triplets aren't capable of better than what Delta calls 'Bronze' level sharpness, but a six-element lens that fails to reach Silver would be suspect. I even factor in strictly 'anecdotal' reports: if someone with deep and wide experience of these lenses says to me, "I'm surprised you rank lens A so poorly: for me it was better than B, C and D" (where B, C and D are known quantities), the test may need to be revisited with a different sample. The presence of such expectation bias is why I evaluate results blind.
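The focal-length sanity check described above could be sketched as a simple flag. The family scores here are invented, and the rule (within one manufacturer's family of similar design, a shorter focal length is expected to score higher) is just the heuristic stated in the post, not a law of optics:

```python
# Hypothetical sanity check: flag lenses that break the expected
# "shorter focal length scores higher" ordering within a family.
# Scores (focal length -> percentage) are invented for illustration.
family = {50: 88.0, 80: 86.5, 105: 87.9}

def flag_out_of_order(scores: dict[int, float]) -> list[int]:
    """Return focal lengths whose score exceeds that of the next
    shorter lens in the family, suggesting a retest of one of them."""
    flagged = []
    items = sorted(scores.items())                 # ascending focal length
    for (f1, s1), (f2, s2) in zip(items, items[1:]):
        if s2 > s1:                                # longer lens outscored shorter
            flagged.append(f2)
    return flagged

print(flag_out_of_order(family))   # -> [105]: retest the 105 (or the 80?)
```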

How do you define "absence" - complete absence, what the APO lenses are trying to achieve, less than 1 pixel, less than 2 pixels?
– Absence = the opposite of presence. As I said, chromatic aberration is less detrimental to sharpness than spherical aberration or other correction issues. The awarded grade is judged visually. If it says Apo it probably isn't, but if it looks sharp, it is.

And another thing... I see a lot of images on your website of the lenses themselves; for some of them credit is given, so should I assume you took all of the rest, or that they are not copyright protected?
– Gathering and shooting product images is crucial but time-consuming. Some are used by permission of dealers; some come from the manufacturer; some are shot here; some are contributed; some are taken from China, where copyright isn't a thing; and some are lifted without permission from expired auction sales - as, for instance, Worthpoint does - with the understanding that if anyone wishes their image to be removed, or a credit or link added, they should contact me straight away. Delta isn't selling anything and is free to use, so there's no conflict of interest or commercial loss or gain involved.
Last edited by 16-9 on Thu Nov 10, 2022 6:35 am, edited 1 time in total.

16-9
Posts: 12
Joined: Mon Jun 20, 2022 1:12 am
Location: London

Re: Immense enlarger lens test database

Post by 16-9 »

Lou Jost wrote:
Wed Nov 09, 2022 11:31 am
That's more helpful. So you have tested all the lenses yourself and ranked their output by eye relative to other similar lenses, always using the same target. Is that right?

I have some of the same questions as bbobby, but I am ok with some of the "slop" in the methodology.

Variation between copies is important, but most review sites don't have the manpower or funding to test more than one copy. It is great when a reviewer does test multiple copies; those authors deserve much credit and their results deserve more respect. But it is better to have one data point than none.

I also think the test description answers the question about adjusting for magnification. The rankings are based only on the mentioned distances, and for that purpose it doesn't matter what magnification the lens is optimized for. The evaluation is only for the set distances. So a PN105 will rank among the worst of the lenses, and that is correct for the mentioned distances.
Lens testing is a sloppy business. Even when an individual method is sound, it aims at a particular outcome that's only one strand of a complex dataset. The inconsistency between test methods is sloppy. The inconsistency between the ways manufacturers report their own data is sloppy. It's sloppy when lens testers upgrade their test platform and make all the old results incompatible with the new ones. Lens manufacture is sloppy, creating sample variation.

The comparative method is very sloppy at first, but gets more accurate as it evolves. The requirement for identical framing means we're sometimes testing lenses at slightly different working distances, and that's a bit sloppy. It's sloppy to sample only two points in the image circle, which varies continuously in a radial pattern. It's extremely sloppy when reviewers equate Zone B long-side edges with Zone C corners! Where we only test one sample, that's sloppy, too. But, as I said, the grading exists to provide an empirical common framework on which we can place, with a fair degree of reliability, hundreds of disparate lenses that otherwise have had no commonality or hierarchy.

Delta's grading is a one-man operation, so I have to define my aims tightly: excluding many factors some might be interested in, and focusing on the one objective as clearly as possible. Delta doesn't measure CA or geometric distortion, and only mentions them in a review when they're severe - partly because (LoCA excepted) CA is trivial to fix in post (resolution loss granted), and partly because enlarger lenses by default tend to be less problematic in those areas.

The PN105 is one of my favourite examples of a lens that performs incredibly well for its intended purpose, and really terribly when asked to do anything else! My copy of that lens was a big inspiration to trek into the unknown and record the results in Delta.

Lou Jost
Posts: 5987
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Re: Immense enlarger lens test database

Post by Lou Jost »

Delta doesn't measure CA or geometric distortion, and only mentions them in a review when they're severe.
Uhhhh, CA is a big thing to leave out - it's one of the most important and common lens defects. It's not that trivial to correct well, and any such correction could reduce resolution.
