Tip: Making colours look just like you saw them by eye

A forum to ask questions, post setups, and generally discuss anything having to do with photomacrography and photomicroscopy.


Beatsy
Posts: 2105
Joined: Fri Jul 05, 2013 3:10 am
Location: Malvern, UK

Tip: Making colours look just like you saw them by eye

Post by Beatsy »

This applies to any photographic subject but close-ups of insects tend to exhibit the problem a lot more. Even with correct WB and monitor colour calibration, ever notice how the colour of a dark carapace (for example) seems to get way over-saturated, almost neon-looking in the final output? The same happens often with internal parts of flowers at high mag. The colours are nothing like what you saw when you looked through your loupe or stereo microscope.

The cause is a perceptual issue. Your camera sensor has a linear response to colour at different levels of brightness, but your eye has a non-linear response. We perceive far less colour in darker parts of a scene than we do in the bright parts (rods vs cones). So when you see your final image on screen, the dark (coloured) areas of the subject can look over-saturated, often horribly so.

The solution is simple - desaturate shadow tones (only). In Photoshop and Affinity this involves creating an HSL adjustment layer with saturation turned way down. Crucially, a mask is applied to the adjustment layer so only shadow tones are affected. In Affinity, the mask is created using Select > Tonal Ranges > Shadows, then creating the HSL layer. It's a little more complex in Photoshop - the Channels tab is probably the best route for selecting only shadow tones (look up "luminosity masking" for details). In fact, luminosity masking is probably the best method if you want fine control over which tones are affected. But the "select shadows" method in Affinity works very well in most cases.
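For anyone who prefers to see the arithmetic behind the adjustment layer, here's a minimal numpy sketch of what a shadow-masked desaturation does. Everything is illustrative: `desaturate_shadows` is a made-up helper (not an Affinity or Photoshop API), and the luma weights, `shadow_point` threshold, and `strength` are arbitrary choices.

```python
import numpy as np

def desaturate_shadows(rgb, shadow_point=0.35, strength=0.7):
    """Desaturate only the dark tones of an RGB image (values in 0..1).

    A luminosity mask is built from per-pixel luminance: 1.0 in the
    deepest shadows, fading smoothly to 0.0 at `shadow_point`, so the
    adjustment only touches dark areas -- the numpy equivalent of an
    HSL layer masked by a "select shadows" tonal-range selection.
    """
    # Rec.709 luma as a stand-in for perceived brightness
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])
    mask = np.clip(1.0 - luma / shadow_point, 0.0, 1.0)  # 1 in shadows, 0 elsewhere
    gray = luma[..., None].repeat(3, axis=-1)            # fully desaturated version
    # Blend toward gray, weighted by the shadow mask and chosen strength
    return rgb + (gray - rgb) * (strength * mask)[..., None]

# Tiny demo: one dark saturated pixel, one bright saturated pixel
img = np.array([[[0.20, 0.05, 0.02],    # dark reddish-brown carapace tone
                 [0.90, 0.45, 0.10]]])  # bright orange highlight
out = desaturate_shadows(img)
```

The bright pixel falls outside the mask and comes back untouched, while the dark pixel's channels are pulled toward their common luma, reducing its saturation.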

Here's an example using the Sicus sp(?) I posted yesterday. First, the result from stacking and doing overall exposure adjustments...
[Image: stacked result, overall exposure adjustments only]
And now the "improved" version with shadows quite heavily desaturated...
[Image: version with shadows heavily desaturated]
And finally, the mask that was applied to the HSL adjustment layer (in Affinity Photo). White parts get full adjustment, black parts get none, grey parts are adjusted proportionately. Note: this one is fairly bitonal, so not much grey in evidence. Luminosity masking tends to produce a much smoother transition between tones - and yields a "grayer" mask as a result.
[Image: mask applied to the HSL adjustment layer]
So that's it, my tip for the year, or two :) To me, the adjusted image looks waaay more faithful to what I saw by eye, but it is a subjective thing. What do you think?

JH
Posts: 1307
Joined: Sat Mar 09, 2013 9:46 am
Location: Vallentuna, Stockholm, Sweden
Contact:

Post by JH »

Thanks! Helps a lot.
Best regards
Jörgen Hellberg
Jörgen Hellberg, my website www.hellberg.photo

NikonUser
Posts: 2693
Joined: Thu Sep 04, 2008 2:03 am
Location: southern New Brunswick, Canada

Post by NikonUser »

What do I think?
A need for clairvoyance - having no idea what the actual bug looks like, how can one judge?
NU.
student of entomology
Quote – Holmes on ‘Entomology’
"I suppose you are an entomologist?"
"Not quite so ambitious as that, sir. I should like to put my eyes on the individual entitled to that name. No man can be truly called an entomologist, sir; the subject is too vast for any single human intelligence to grasp."
Oliver Wendell Holmes, Sr
The Poet at the Breakfast Table.

Nikon camera, lenses and objectives
Olympus microscope and objectives

Lou Jost
Posts: 5943
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Post by Lou Jost »

That's a very interesting observation and nice solution. I had never noticed this problem before, probably because I don't have a good microscope so I am mostly looking at my subjects through a digital monitor. I'll pay more attention to this from now on.

I did almost get a pair of nice microscopes that I had put together from ebay parts (UM-1 and Labophot or Optophot). But when I tried to bring them back to Ecuador last week, the airline (American Airlines) surprised me by telling me the luggage weight limits they had posted on their website (2 bags, up to 70 pounds each, paying overweight) were not correct, and I was only allowed to bring two 50 lb suitcases to Ecuador. Chaotic airport repacking/re-weighing ensued, and I had to leave 40 pounds of lenses and microscopes behind. Luckily a family member had stayed to see me off, so he could take the stuff and save it for me for next time.

Beatsy
Posts: 2105
Joined: Fri Jul 05, 2013 3:10 am
Location: Malvern, UK

Post by Beatsy »

NikonUser wrote:what do I think?
A need for clairvoyance, having no idea what the actual bug looks like how can one judge?
Not quite what I meant. You'll just have to take my word for it that the 2nd version is a *much* closer representation of the actual bug as seen by eye. But I'd judge the first to be unnaturally coloured even if I'd never seen the "actual bug" itself.

You could easily try it with one of your own images for an absolute comparison.

ChrisR
Site Admin
Posts: 8668
Joined: Sat Mar 14, 2009 3:58 am
Location: Near London, UK

Post by ChrisR »

I'd forgotten about "luminosity masking" by clicking in the Luminosity window, because I never found a use for it. :) It only gives you PS's preset idea of what you want, which is always wrong.
There's probably a dozen ways to do it in PS, as usual.

If you copy your image, use Threshold, and click Preview on and off to see where you are, then you get a mask at a precise value.

If you use the Pencil tool in Curves, you can draw a steep-sided bandpass shape / top-hat shape around the exact tones you want, to make a mask.
You can click on the picture to see where on the histogram you are. You can slope the sides of the shape you drew if you want to soften it.

Possibly best is Blending - it's WYSIWYG.
Put a desaturated copy of your image in the next layer below.
Right-click the upper, colourful Layer in the Layer window to bring up the Blending window.
With everything at defaults:
At the bottom is the gradient for Underlying layer.
Slide the leftmost, dark marker, to decide which part of that underlying, desaturated, layer you want to "show through".
Use Alt-click to split the marker you just moved, to set the start and end of the lightness range where you want the transition to be.

I tried playing with Luminosity Blending mode, without profit.

You can have layer masks in there as well if you want, to exclude part of the image, Save it as a Style, yada yada. But if I were going to need all that I would have just done the whole job quicker with the Sponge set to Desaturate - unless it were seriously important. Nothing I do would qualify as such.

I tend to faff about over little parts of an image with curves and brushes and sponges in a vain attempt to compensate for deficiencies in my photography.
Chris R

Adam Long
Posts: 30
Joined: Wed Aug 24, 2016 5:02 am
Location: Sheffield, UK
Contact:

Post by Adam Long »

Ah, great stuff. This reminds me of a good article which did the rounds a few years back. It's aimed at high-end video users but the issue is the same - essentially to make digital colour appear natural we need to decouple saturation from luminosity. The conclusions are more concerned with highlights than shadows but I suspect the underlying principle is the same:

http://www.dvinfo.net/article/productio ... ommon.html

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

I'm very familiar with the effect that Beatsy describes.

In fact, at this very time I'm working up a moth pupa for which the digital images are strikingly more reddish than anything I see by eye.

However, I disagree with the interpretation that this effect is due to the perceptual issue that Beatsy describes.

The reason for that disagreement is simple: A) with my subject, the reddish areas are the brightest parts of the subject, and B) no matter how much light I throw on the subject, even against a black background, the visual appearance never develops anything approaching the color saturation of the digital images.

My conclusion is that what I'm looking at is metamerism. My guess is that the reflection spectrum of the subject has a spike in some part of the spectrum -- probably far red -- for which the camera's response curve is quite a bit higher than my eye's. That would mean the camera sees a lot more red in the subject than I do, easily explaining the difference in appearance.

--Rik

Beatsy
Posts: 2105
Joined: Fri Jul 05, 2013 3:10 am
Location: Malvern, UK

Post by Beatsy »

I'll buy that. Today's "thing learned".

But I stand by the solution insofar as it nicely corrects the issue - even if it may have been for a different reason than I envisaged :)

Lou Jost
Posts: 5943
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Post by Lou Jost »

I wouldn't be surprised if BOTH explanations are valid. Metamerism can always pop up, more or less at random, since it depends on accidental relationships between a material's reflected spectrum and human versus camera perception, but there may also be additional, more regular and predictable effects, such as Beatsy's general rubric about perception of shadows. If Beatsy is right about the generality of that, then metamerism would not explain his effect.

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Lou Jost wrote:there may also be additional more regular and predictable effects, as Beatsy's general rubric about perception of shadows.
Brightness definitely does affect perception of colors. As a clear and simple example, fire up Photoshop and, working in sRGB, fill three rectangles with RGB=[254,127,0], [128,64,0], and [64,32,0]. All of those have R=2G, B=0, and in Photoshop, they all show as H=30 degrees and S=100%, with B=99%, 50%, and 25%. But one of them looks like a bright orange, another is a rich brown, and the third is approaching burnt umber. If those were all the same material, exposed to different levels of illumination, then visually I would say that the brightly lit "orange" area looks much more saturated than the shadowed "burnt umber" area, even though physically they would only be different in brightness.
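The three fills from that example can be checked with Python's standard-library `colorsys` (note that `colorsys` reports HSV on a 0..1 scale, so the rounding differs slightly from Photoshop's HSB readout):

```python
import colorsys

# The three Photoshop fills from the example, as 0-255 sRGB triples
swatches = [(254, 127, 0), (128, 64, 0), (64, 32, 0)]

for r, g, b in swatches:
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    print(f"RGB=({r},{g},{b})  H={h * 360:.0f} deg  S={s:.0%}  V={v:.0%}")
```

All three come out at hue 30 degrees and 100% saturation; only the value differs, yet perceptually they read as orange, brown, and burnt umber.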

Nonetheless, if brightness-versus-perceived-saturation were causing the distortions that Beatsy mentions, then they should be showing up all the time, with all sorts of different subjects. That's not what I see. Instead I see the problem very commonly with brownish-red insect cuticle, exactly the same situation that Beatsy chose for illustration. I see it sometimes also with flowers, particularly red and yellow ones, even if they are bright to start with.
Beatsy wrote: But I stand by the solution insofar as it nicely corrects the issue
I agree that the solution is very good in a lot of common situations, particularly when shooting insects.

But in my model, what the method is doing is using brightness as a proxy for material ID. In the example shown, it applies a heavy correction to the naked cuticle that renders as too bright and red, while leaving mostly untouched the interference colors reflected from the wings. If the subject had some dark green -- which it doesn't -- then that green would get desaturated also, and I expect that would not be such a good effect.

Please be assured, I think the described method is a very good idea. I just think it's helpful to understand why it's a good idea, to help understand about when to use it, and how it might mess up.

--Rik

Beatsy
Posts: 2105
Joined: Fri Jul 05, 2013 3:10 am
Location: Malvern, UK

Post by Beatsy »

rjlittlefield wrote: Please be assured, I think the described method is a very good idea. I just think it's helpful to understand why it's a good idea, to help understand about when to use it, and how it might mess up.
No assurance needed here, it was taken in that vein.

Got me thinking too. In relation to reddish insect cuticle (and some flowers), I was briefly tempted to summarise it as a white balance issue, but it's clearly not that simple. However, even with metamerism in play, surely it's possible to correct the colours with a combination of colour temp and white balance, coupled with application of an appropriate colour profile.

There's a lot of complication hidden in the profiling part, I know, but I can't believe a camera (or more accurately, the RAW image it captures) can't be calibrated such that colours in the final image can be automatically converted to an accurate representation of the colours seen by eye (under the same lighting conditions).

Preparing to be told why I'm wrong... :D

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

In principle, a camera could be designed at the hardware level to accurately mimic a particular human eye. All that would require is that the camera sensor be equipped with filters so that it captures three color bands whose spectral sensitivity curves match those of the eye's visual pigments (in combination with the possibly yellowed lens, etc), all across the spectrum.

Given sensitivity curves that match for all wavelengths, it would be simple to adjust color balance so that the camera and the eye saw the same colors for all materials.

But without that match the problem becomes impossible, and the reason is metamerism. It is nicely described at https://en.wikipedia.org/wiki/Metamerism_(color)#Metameric_failure :
Observer metameric failure or observer metamerism can occur because of differences in color vision between observers. The common source of observer metameric failure is colorblindness, but it is also not uncommon among "normal" observers. In all cases, the proportion of long-wavelength-sensitive cones to medium-wavelength-sensitive cones in the retina, the profile of light sensitivity in each type of cone, and the amount of yellowing in the lens and macular pigment of the eye, differs from one person to the next. This alters the relative importance of different wavelengths in a spectral power distribution to each observer's color perception. As a result, two spectrally dissimilar lights or surfaces may produce a color match for one observer but fail to match when viewed by a second observer.
Note especially that last sentence: two spectrally dissimilar lights or surfaces may produce a color match for one observer but fail to match when viewed by a second observer.

In other words, if the camera's spectral sensitivity curves do not match the eye's, then there will exist combinations of illumination and material that the camera sees as the same, but the human sees as different.

In this very special case, it should be clear that color matching is impossible, because there's no information that would allow correctly turning the camera's two identical pixel values into two different pixel values that would be needed to match the human's perception.
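That situation is easy to construct numerically. Below is a toy illustration with entirely made-up sensitivity curves (not real sensor or cone data): the camera's red channel is flat across the far red while the eye's falls off, so two reflectance spikes at different wavelengths give the camera identical triples but give the eye different responses.

```python
import numpy as np

wl = np.arange(400, 701, 10)  # wavelengths, nm

def band(center, width):
    """Gaussian-shaped sensitivity band -- purely illustrative."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Toy sensitivity curves, shaped so camera-red is flat in the far red
# while eye-red falls off there.
cam = np.stack([
    np.where(wl >= 590, 1.0, 0.0),            # camera R: flat past 590 nm
    np.where(wl <= 600, band(540, 40), 0.0),  # camera G: cut off past 600 nm
    np.where(wl <= 600, band(460, 40), 0.0),  # camera B: cut off past 600 nm
])
eye = np.stack([
    band(570, 50),   # eye "red" (L cone-ish): declining in the far red
    band(540, 40),   # eye "green"
    band(460, 40),   # eye "blue"
])

# Two materials: same-height reflectance spikes at 620 nm vs 680 nm
spec_a = np.where(wl == 620, 1.0, 0.0)
spec_b = np.where(wl == 680, 1.0, 0.0)

cam_a, cam_b = cam @ spec_a, cam @ spec_b
eye_a, eye_b = eye @ spec_a, eye @ spec_b
print("camera:", cam_a, cam_b)   # identical triples
print("eye:   ", eye_a, eye_b)   # clearly different responses
```

No post-processing keyed on the camera values alone can recover the difference the eye sees, because the camera recorded none.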

What is not so clear, but is also true, is that the same problem makes it impossible to correctly match almost every color in the gamut, for almost every material and almost every illumination. To paraphrase some famous quote, "You can match some of the colors some of the time, but you can't match all of the colors any of the time."
Beatsy wrote: I can't believe a camera (or more accurately, the RAW image it captures) can't be calibrated such that colours in the final image can be automatically converted to an accurate representation of the colours seen by eye (under the same lighting conditions).
Is the difficulty more clear now?

--Rik

Beatsy
Posts: 2105
Joined: Fri Jul 05, 2013 3:10 am
Location: Malvern, UK

Post by Beatsy »

Very clear! At least I was right about one thing - namely preparing to be told why I'm wrong :)

However - one last grasp at a straw - I was actually only considering how *I* see the colour, not how anyone else would see it. Clearly that's a requirement, but assuming it wasn't, my "belief" would then hold true, right?

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Beatsy wrote: However - one last grasp at a straw - I was actually only considering how *I* see the colour, not how anyone else would see it. Clearly that's a requirement, but assuming it wasn't, my "belief" would then hold true, right?
Alas, no.

Suppose you have a calibration target containing a lot of colors, enough to completely fill the camera's RGB data space at some fairly dense spacing of levels. You take a picture of that target, then you construct a profile that essentially consists of one huge lookup table that turns an arbitrary camera value into another value that is exactly what's needed to display a color that matches your own perception. This process may be tedious to carry out, but it is straightforward in principle.

Now you're done, right?

Sorry, but no.

Each of those points in the camera's RGB space is now properly profiled for any subject whose materials have the same spectra as those used in the calibration target.

But if you change materials -- say by replacing a piece of insect cuticle with a piece of plant seed that looks just the same to you -- then because of the spectral differences the camera will see different colors for those two materials, and suddenly that lookup table that did exactly the right thing for the calibration target no longer does exactly the right thing for this new subject.
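A crude sketch of that lookup-table argument, with invented numbers throughout (real profiles use dense 3D LUTs with interpolation, not nearest-neighbour over three samples):

```python
import numpy as np

# Hypothetical calibration data: for each camera RGB sample, the value
# that reproduces what *this one observer* saw. All numbers are made up.
camera_samples = np.array([[0.10, 0.05, 0.02],
                           [0.50, 0.30, 0.10],
                           [0.90, 0.60, 0.20]])
observer_match = np.array([[0.08, 0.05, 0.03],
                           [0.45, 0.31, 0.12],
                           [0.85, 0.62, 0.25]])

def profile_lookup(pixel):
    """Map a camera RGB value to its profiled output via the nearest
    calibration sample -- a crude stand-in for a dense 3D LUT."""
    i = np.argmin(np.linalg.norm(camera_samples - pixel, axis=1))
    return observer_match[i]

# Correct for any material whose spectrum matches a calibration patch...
print(profile_lookup(np.array([0.50, 0.30, 0.10])))
# ...but a different material that happens to land on the same camera RGB
# gets the same output, whether or not the eye would have agreed.
print(profile_lookup(np.array([0.52, 0.29, 0.11])))
```

The table is keyed only on camera RGB, so two spectrally different materials that the camera records identically are forced through the same entry, which is exactly the metameric failure described above.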

There's a common myth that color calibration systems like the X-Rite ColorChecker take care of all your problems.

But read carefully what the manufacturer says about them (emphasis added):
The ColorChecker Classic target is an array of 24 scientifically prepared natural, chromatic, primary and grayscale colored squares in a wide range of colors. Many of the squares represent natural objects, such as human skin, foliage and blue sky. Since they exemplify the color of their counterparts and reflect light the same way in all parts of the visible spectrum, the squares will match the colors of representative samples of natural objects under any illumination, and with any color reproduction process.
To be clear, they're not matching colors in general, they're just matching the colors of specific common subjects. Because those subjects are common, matching their colors is an excellent first step. But if your subjects happen to be made of different materials, then there's no good reason to expect the colors to match for the new materials also, and in general they won't.

--Rik
