OUT.use_camera_wb = 1; // Use As Shot White Balance
OUT.output_bps = 16; // 16-bit output
OUT.no_auto_bright = 1; // Do not contrast stretch the image
OUT.output_color = 1; // sRGB space
OUT.gamm[0] = 1/2.4; // power for sRGB
OUT.gamm[1] = 12.92; // toe slope for sRGB
Thank you for reporting, fixed/cleaned up by this: https://github.com/LibRaw/LibRaw/commit/37c7b517c177e4e89ae8f95baed0b2a3...
https://www.dropbox.com/s/19dm6nmr47fg0ua/raw_leaf_aptus_22.mos.ppm?dl=0
Alex,
I am using version 0.19.6 and I am on Windows. I will look into my code and confirm my implementation is correct. Would it be possible for you to send me the PPM files so I can compare?
Regards,
Dinesh
Cloned your settings into mem_image_sample.cpp:
compared with
Results are the same:
Fixed the link permissions. Here is the link again:
https://drive.google.com/file/d/1HkNwHiFGjRaz7y6nd6lWlO6W0BVE5p1z/view?u...
Dinesh
Link is not accessible
Hi Alex,
Thanks for confirming the settings are the same and for updating the help. However, I am getting differences for a RAW file from LEAF. I am providing a link to the file below:
https://drive.google.com/file/d/1HkNwHiFGjRaz7y6nd6lWlO6W0BVE5p1z/view?u...
My workflow is:
open_file()
unpack()
// Set the imgdata.params as described above
get_mem_image_format(...)
dcraw_process()
dcraw_make_mem_image()
// Copy the rendered image
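For reference, a minimal sketch of this workflow using the LibRaw C++ API, with the parameters listed at the top of the thread (error handling abbreviated; the copy step is left as a comment):
#include "libraw/libraw.h"

int render(const char *path)
{
    LibRaw proc;
    proc.imgdata.params.use_camera_wb = 1;   // As Shot white balance
    proc.imgdata.params.output_bps = 16;     // 16-bit output
    proc.imgdata.params.no_auto_bright = 1;  // no contrast stretch
    proc.imgdata.params.output_color = 1;    // sRGB space
    proc.imgdata.params.gamm[0] = 1 / 2.4;   // power for sRGB
    proc.imgdata.params.gamm[1] = 12.92;     // toe slope for sRGB

    if (proc.open_file(path) != LIBRAW_SUCCESS) return -1;
    if (proc.unpack() != LIBRAW_SUCCESS) return -1;

    int w = 0, h = 0, colors = 0, bps = 0;
    proc.get_mem_image_format(&w, &h, &colors, &bps);

    if (proc.dcraw_process() != LIBRAW_SUCCESS) return -1;

    int err = 0;
    libraw_processed_image_t *img = proc.dcraw_make_mem_image(&err);
    if (!img) return -1;

    // ... copy img->data: img->height rows of img->width * img->colors samples,
    // packed with no per-row padding ...

    LibRaw::dcraw_clear_mem(img);
    return 0;
}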
For a NEF file in my possession, it works fine.
Regards,
Dinesh
Fixed by https://github.com/LibRaw/LibRaw/commit/c6339c13a991571822518d13e7288012...
dcraw_emu uses the same dcraw_process() call, so with the same parameters the output should be the same.
-6 results in 16-bit output:
  case '6':
    OUT.output_bps = 16;
    break;
(looks like the docs need to be corrected)
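If I read the option parser right, a dcraw_emu invocation equivalent to the mem-image settings above would be something like:
dcraw_emu -w -W -6 -o 1 -g 2.4 12.92 raw_leaf_aptus_22.mos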
How do I create a Windows bitmap using the data field of the libraw_processed_image_t structure?
Can anybody help me with that?
According to the Remarks section of this page:
The stride is the width of a single row of pixels (a scan line), rounded up to a four-byte boundary. If the stride is positive, the bitmap is top-down. If the stride is negative, the bitmap is bottom-up.
So maybe yes, it could be an alignment problem. Otherwise I have no idea why it doesn't work.
If Windows bitmap rows are aligned, a full-buffer copy (instead of a per-line copy) will result in a 'lost sync' image, because libraw_processed_image_t data is not aligned: rows are packed without gaps.
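As an illustration of the per-line copy, a sketch only, assuming 8-bit output and leaving out the BGR ordering and bottom-up rows a real BMP also needs:
#include "libraw/libraw.h"
#include <cstring>
#include <vector>

// Copy the packed LibRaw rows into a buffer whose rows are padded to a
// 4-byte stride, as Windows bitmaps expect.
std::vector<unsigned char> copy_with_stride(const libraw_processed_image_t *img)
{
    const int src_stride = img->width * img->colors * (img->bits / 8); // packed, no gaps
    const int dst_stride = (src_stride + 3) & ~3;                      // round up to 4 bytes
    std::vector<unsigned char> dst((size_t)dst_stride * img->height, 0);
    for (int row = 0; row < img->height; ++row)
        std::memcpy(&dst[(size_t)row * dst_stride],
                    img->data + (size_t)row * src_stride,
                    src_stride);
    return dst;
}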
It's not about C#. I am just calling the C methods provided by LibRaw. I do not even change the bytes in the data field of libraw_processed_image_t; I simply copy it into the memory allocated for the Windows bitmap.
BTW, I forgot to mention that I am also using the following LibRaw setter methods for processing:
libraw_set_output_bps -> 8
libraw_set_output_color -> 0
Update:
I also tried the libraw_dcraw_ppm_tiff_writer method with the TIFF option, and it creates a correct color TIFF file!
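For what it's worth, a minimal sketch of that processing path through the LibRaw C API (error checks omitted; the file names are placeholders):
#include "libraw/libraw.h"

int convert(const char *in_path, const char *out_path)
{
    libraw_data_t *lr = libraw_init(0);
    if (!lr) return -1;
    libraw_set_output_bps(lr, 8);    // 8-bit output
    libraw_set_output_color(lr, 0);  // raw (camera) color space
    lr->params.output_tiff = 1;      // ask the PPM/TIFF writer for TIFF
    libraw_open_file(lr, in_path);
    libraw_unpack(lr);
    libraw_dcraw_process(lr);
    int ret = libraw_dcraw_ppm_tiff_writer(lr, out_path);
    libraw_close(lr);
    return ret;
}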
Sorry, I know nothing about C# and Windows bitmaps.
According to your screenshots, there is a 'line sync' error. Is there any possibility that the source data for the bitmap should have every row aligned (on 4, 16, or whatever bytes)?
I do not know why your result is monochrome (I have never seen your code and, again, know nothing about C#). LibRaw output is definitely not monochrome, according to your test with the PPM write.
Yes, it creates a PPM file.
But what I would like is to create a file in the simple BMP format.
Does mem_image_sample.cpp work as expected with this file?
I see two possible ways to determine the real maximum value:
1) Use camera-provided linear_max values (if any)
2) and/or analyze the histogram, ignoring hot pixels (say 0.01% of pixels, or a custom sensitivity), compare the result with imgdata.color.maximum (the format-specified maximum), and decrease the maximum only if the calculated histogram peak is (say) one stop or more below color.maximum.
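A rough sketch of option 2, assuming the caller has already built a per-value histogram of one raw channel over the visible area (hist[v] = number of pixels with value v) and knows the total pixel count:
#include <cstdint>
#include <vector>

// Walk down from the top of the histogram, skipping bins until more than
// 'outlier_fraction' of all pixels has been discarded; the bin reached at
// that point is taken as the robust maximum.
int robust_maximum(const std::vector<uint64_t> &hist, uint64_t total_pixels,
                   double outlier_fraction = 0.0001 /* 0.01% */)
{
    const uint64_t allowed = (uint64_t)(total_pixels * outlier_fraction);
    uint64_t discarded = 0;
    for (int v = (int)hist.size() - 1; v > 0; --v)
    {
        discarded += hist[v];
        if (discarded > allowed)
            return v;
    }
    return (int)hist.size() - 1;
}
The result would then be compared with imgdata.color.maximum as described, and used only if it is roughly one stop or more below it.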
I need to come up with a way to detect this incorrect value and set a proper saturation value. Doing what you suggested earlier, inspecting the histogram and discarding the upper bins with no values in them, works for this RAW file. But it breaks when you have a dark exposure (lots of zeros in the histogram) and the camera reports the correct saturation level (see this RAW file as an example). Using the method you've described, the RAW comes out with extreme noise because it is erroneously brightened (due to a very low saturation value). How would you recommend going about this?
Also, if you don't mind answering some of the questions I had in my last post, I'd appreciate it. It seems like when it comes to RAW processing, only a handful of folks have a good idea of how it's done--folks like Dave Coffin, you, and Adobe. This knowledge is pretty valuable and should be easily accessible so that our collective knowledge can advance.
Yes, adjust_maximum_thr is no help here, because the data maximum found for this shot is not below the 'possible, but incorrectly calculated, maximum'.
The correct user_sat (-S parameter) for this shot is about 11300.
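So, for a quick test from the command line (the file name is just a placeholder), something like:
dcraw_emu -S 11300 -H 0 -T input.raw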
Hi Alex, thanks for the speedy response.
I'm not using LibRaw directly, so I can't set the maximum or dynamically compute the adjustment threshold. But even without any of this, I believe I should be able to test the effects of either of these by simply setting the corresponding values (-c for adjust_maximum_thr, and -S for saturation point) in the command line when calling `dcraw_emu`.
As I mentioned, `adjust_maximum_thr` has no effect. I've tried 0.01, 0.1, 0.5, 1.0, 1.5--all pink skies. As for the saturation point, setting it to 16200 (and keeping -H 0 to clip highlights) doesn't help--in fact, it looks worse. Interestingly, reducing the saturation point to about 9000 (while clipping highlights) does help.
I'm not entirely familiar with the steps taken by the library. There seem to be details scattered around this forum, but no central piece of documentation that explains exactly what the conversion process is; the API docs just list all the different flags, but provide little context on how it all ties together. All this to say, I imagine the saturation point is used to normalize the data to 16 bits, so:
1. Load raw values, with data in range [min_sensor_value, max_sensor_value]
2. Subtract black point, taking data to range [0, max_sensor_value - min_sensor_value]
3. Divide by the saturation point, taking data to range [0.0, a value that should be 1.0 but could be more]
4. Clip (depending on highlight mode), taking data to range [0.0, 1.0]
5. Scale to `uint16_t` range, taking data to range [0, 65535]
6. Remaining steps.
If the above is correct, then why does setting the saturation point to exactly 16200 still produce pink skies? You can test this on your end with the RAW file to see exactly what I mean. Thanks again for your help.
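In other words, conceptually (ignoring white balance and the per-channel multipliers that dcraw-style processing applies at the same stage), I imagine the scaling in steps 2-5 amounts to something like:
#include <algorithm>
#include <cstdint>

// Conceptual per-sample scaling: subtract black, map the saturation point to
// 65535, clip to [0, 1] (step 4 depends on the highlight mode), then scale.
inline uint16_t scale_sample(int raw, int black, int saturation)
{
    double v = (double)(raw - black) / (double)(saturation - black);
    v = std::min(std::max(v, 0.0), 1.0);
    return (uint16_t)(v * 65535.0 + 0.5);
}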
Regards,
Yusuf.
The adjust_maximum implementation operates on the real data maximum found in the visible area of the actual file data. In this specific file, some pixels in the G2 channel are way above the other channels (hot pixels?); that's why adjust_maximum is fooled.
See this RawDigger screenshot: https://www.dropbox.com/s/o7d3324n43aurln/screenshot%202020-09-13%2009.1...
Notes:
- black subtraction is turned off; that's why the entire image is pink
- the matte overlay over most of the image is the selection
- both the full-area and the selection stats (the two upper arrows) show a G2 channel maximum equal to 16200 (black not subtracted), so the problem is not in edge pixels but in the image area.
Possible solutions:
- implement your own adjust_maximum that ignores outliers (e.g. by calculating a full histogram and ignoring the upper bins with only 1-3-10 pixels in them)
- use imgdata.color.linear_max[] as the real image maximums.
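A sketch of the second option, assuming the camera actually fills imgdata.color.linear_max[]; taking the smallest positive per-channel value as the saturation point is an assumption here, not necessarily what LibRaw itself would do:
#include "libraw/libraw.h"

int process_with_linear_max(const char *path)
{
    LibRaw proc;
    if (proc.open_file(path) != LIBRAW_SUCCESS) return -1;
    if (proc.unpack() != LIBRAW_SUCCESS) return -1;

    // Use the smallest positive camera-provided linear_max as the saturation
    // point; user_sat replaces the saturation level used for scaling.
    long sat = 0;
    for (int c = 0; c < 4; c++)
    {
        long lm = (long)proc.imgdata.color.linear_max[c];
        if (lm > 0 && (sat == 0 || lm < sat))
            sat = lm;
    }
    if (sat > 0)
        proc.imgdata.params.user_sat = (int)sat;

    return proc.dcraw_process();
}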
After dealing with a "kind of new to me" way of reading basic data out of EXIF...
Bingo!!!
pyexiftool worked awesome. It reads every image format (if it has EXIF), and if not, it's because the image has no EXIF at all.
Thanks a lot! Will check them all ;)
Or exiftool just as well.
There are a few Python wrappers already available:
https://github.com/smarnach/pyexiftool
https://github.com/guinslym/pyexifinfo
https://hvdwolf.github.io/pyExifToolGUI/
LibRaw is not about (full) metadata extraction; it is about raw data decoding (although it decodes some metadata too).
The Exiv2 library (with a Python wrapper) is probably better suited for your task.