Hi all.
I'm getting pink clouds when trying to convert a RAW to a TIF/JPEG with LibRaw 0.20.0. I ran into this a while ago, in fact, when using the RawPy wrapper library for Python.
Here's the RAW file in question, along with the TIF I get from LibRaw. I'm running the following:
./dcraw_emu -H 0 -o 1 -W -q 3 -w -T -g 1 1 -c 0.75 4.CR2
I've tried different values for "adjust_maximum_thr" (the -c option), and it has no effect whatsoever. Interestingly, Adobe Camera Raw, RawTherapee, and darktable are all unaffected by this issue. Any help on this would be much appreciated; I need to get past this issue ASAP.
Regards,
Yusuf.
The adjust_maximum implementation operates on the real data maximum found in the actual file data, within the visible area. In this specific file, some pixels in the G2 channel are way above the other channels (hot pixels?); that is why adjust_maximum is fooled (see the sketch after the notes below).
See this RawDigger screenshot: https://www.dropbox.com/s/o7d3324n43aurln/screenshot%202020-09-13%2009.1...
Notes:
- black subtraction is turned off, which is why the entire image is pink
- the matte overlay over most of the image is the selection
- both the full-area and the selection stats (the two upper arrows) show a G2-channel maximum equal to 16200 (black not subtracted), so the problem is not in edge pixels but in the image area.
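Schematically, the logic is roughly this (a simplified sketch of the idea, not the verbatim source):

// Simplified sketch of the adjust_maximum idea (not the verbatim source):
// the nominal maximum is lowered to the observed data maximum when the two
// are close enough, so a few hot pixels near the top defeat the adjustment.
static int adjusted_maximum(int data_maximum, int maximum, float thr)
{
    if (data_maximum > 0 && float(data_maximum) / float(maximum) > thr)
        return data_maximum; // here: 16200, barely below the nominal maximum
    return maximum;          // otherwise keep the (too high) nominal value
}

With data_maximum stuck at 16200, any -c value either leaves the maximum untouched or lowers it only to 16200, never down to the real saturation level.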
Possible solutions:
- implement your own adjust_maximum that ignores outliers (e.g., by calculating a full histogram and ignoring the upper bins that contain only 1-3-10 pixels); see the sketch below
- use imgdata.color.linear_max[] as the real image maximums.
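A sketch of the first approach (illustrative only, not library code; assumes an opened and unpacked Bayer file, and the 10-pixel cut-off is arbitrary):

#include "libraw/libraw.h"
#include <vector>

// Hot-pixel-tolerant data maximum: histogram the visible area, then walk
// down from the top, skipping bins that hold only a handful of pixels.
int robust_data_maximum(LibRaw &proc, long ignore_pixels = 10)
{
    const libraw_image_sizes_t &S = proc.imgdata.sizes;
    const unsigned short *raw = proc.imgdata.rawdata.raw_image;
    const int stride = S.raw_pitch / 2;          // raw_pitch is in bytes
    std::vector<long> hist(65536, 0);
    for (int r = 0; r < S.height; ++r)           // visible area only
        for (int c = 0; c < S.width; ++c)
            ++hist[raw[(r + S.top_margin) * stride + (c + S.left_margin)]];
    long seen = 0;
    for (int v = 65535; v > 0; --v)              // ignore outlier top bins
    {
        seen += hist[v];
        if (seen > ignore_pixels)                // enough pixels: real signal
            return v;
    }
    return (int)proc.imgdata.color.maximum;      // fallback: nominal maximum
}

The result can then be assigned to imgdata.params.user_sat before dcraw_process().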
-- Alex Tutubalin @LibRaw LLC
Hi Alex, thanks for the speedy response.
I'm not using LibRaw directly, so I can't set the maximum or dynamically compute the adjustment threshold. But even without any of this, I believe I should be able to test the effect of either approach by simply setting the corresponding values (-c for adjust_maximum_thr, and -S for the saturation point) on the command line when calling `dcraw_emu`.
As I mentioned, `adjust_maximum_thr` has no effect. I've tried 0.01, 0.1, 0.5, 1.0, 1.5--all pink skies. As for the saturation point, setting it to 16200 (and keeping -H 0 to clip highlights) doesn't help--in fact, it looks worse. Interestingly, reducing the saturation point to about 9000 (while clipping highlights) does help.
I'm not entirely familiar with the steps the library takes. There seem to be details scattered around this forum, but no central piece of documentation that explains exactly what the conversion process is; the API docs list all the different flags, but provide little context on how they tie together. All this to say, I imagine the saturation point is used to normalize the data to 16 bits, so (a toy sketch follows the list):
1. Load raw values, with data in range [min_sensor_value, max_sensor_value]
2. Subtract black point, taking data to range [0, max_sensor_value - min_sensor_value]
3. Divide by the saturation point, taking data to the range [0.0, ~1.0] (meant to top out at 1.0, but it can exceed it)
4. Clip (depending on highlight mode), taking data to range [0.0, 1.0]
5. Scale to `uint16_t` range, taking data to range [0, 65535]
6. Remaining steps.
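In code, my mental model of steps 2-5 is something like this (purely my assumption, not LibRaw's actual code):

#include <algorithm>
#include <cstdint>

// My assumed model of steps 2-5 above (not LibRaw's actual code); I'm also
// guessing that the black level is subtracted from the saturation point.
uint16_t normalize(uint16_t raw, int black, int saturation)
{
    double v = double(std::max(int(raw) - black, 0)) // 2. subtract black
             / double(saturation - black);  // 3. divide by adjusted saturation
    v = std::min(v, 1.0);                   // 4. clip highlights (-H 0)
    return uint16_t(v * 65535.0 + 0.5);     // 5. scale to uint16_t range
}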
If the above is correct, then why does setting the saturation point to exactly 16200 still produce pink skies? You can test this on your end with the RAW file to see exactly what I mean. Thanks again for your help.
Regards,
Yusuf.
Yes, adjust_maximum_thr is no help here, because the data maximum value for this shot is not below the 'possible but incorrectly calculated' maximum.
The correct user_sat (the -S parameter) for this shot is about 11300.
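For example, your original command line with -S 11300 in place of -c:
./dcraw_emu -H 0 -o 1 -W -q 3 -w -T -g 1 1 -S 11300 4.CR2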
-- Alex Tutubalin @LibRaw LLC
I need to come up with a way to detect this incorrect value and set a proper saturation value. Doing what you suggested earlier, inspecting the histogram and discarding the upper bins with no values in them, works for this RAW file. But it breaks when you have a dark exposure (lots of zeros in the histogram) and the camera reports the correct saturation level (see this RAW file as an example). With that method, the RAW comes out with extreme noise because it is erroneously brightened (due to a very low saturation value). How would you recommend going about this?
Also, if you don't mind answering some of the questions from my last post, I'd appreciate it. It seems that when it comes to RAW processing, only a handful of folks have a good idea of how it's done--folks like Dave Coffin, you, and Adobe. This knowledge is pretty valuable and should be easily accessible, so that our collective knowledge can advance.
I see two possible ways to determine the real maximum value:
1) Use camera-provided linear_max values (if any).
2) And/or analyze the histogram, ignoring hot pixels (say, 0.01% of pixels, or a custom sensitivity), compare the result with imgdata.color.maximum (the format-specified maximum), and decrease the maximum only if the calculated histogram maximum is no more than (say) one stop below color.maximum; a much lower value likely indicates a dark exposure rather than a wrong saturation level. A sketch follows.
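Combining both (illustrative only, not library code; hist_max would come from a histogram scan like the one sketched earlier):

#include "libraw/libraw.h"

// Prefer camera-provided linear_max saturation levels; otherwise fall back
// to a histogram-derived maximum, but only trust it when it sits within one
// stop of the format-specified maximum.
int guess_real_maximum(LibRaw &proc, int hist_max)
{
    const libraw_colordata_t &C = proc.imgdata.color;

    // 1) camera-provided per-channel saturation levels, if any
    long lm = 0;
    for (int ch = 0; ch < 4; ++ch)
        if (C.linear_max[ch] > lm)
            lm = C.linear_max[ch];
    if (lm > 0)
        return (int)lm;

    // 2) trust the histogram-derived maximum only when it is no more than
    //    one stop below color.maximum; a much lower value likely means a
    //    dark exposure, not a wrong saturation level.
    if (hist_max >= (int)C.maximum / 2 && hist_max < (int)C.maximum)
        return hist_max;
    return (int)C.maximum;
}

This also keeps your dark-exposure example intact: there the histogram maximum falls more than one stop below color.maximum, so the nominal value is kept.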
-- Alex Tutubalin @LibRaw LLC