Fixed by https://github.com/LibRaw/LibRaw/commit/c6339c13a991571822518d13e7288012...
dcraw_emu uses the same dcraw_process() call, so with the same parameters the output should be the same.
-6 results in 16-bit output:
case '6':
    OUT.output_bps = 16;
    break;
(looks like the docs need to be corrected)
Can anybody help me with that?
How to create a Windows bitmap using the data field from the libraw_processed_image_t structure?
According to the Remarks section of this page:
The stride is the width of a single row of pixels (a scan line), rounded up to a four-byte boundary. If the stride is positive, the bitmap is top-down. If the stride is negative, the bitmap is bottom-up.
So maybe yes, it could be an alignment problem. Otherwise I have no idea why it doesn't work.
If Windows bitmap rows are aligned, a full data copy (not a per-line copy) will result in a 'lost sync' image, because libraw_processed_image_t data is not aligned: rows are packed without gaps.
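For illustration, a minimal per-row copy might look like this (a sketch only; bmpData is assumed to point at the Windows bitmap memory you allocate, with rows padded to a 4-byte boundary):

#include <cstring>           // memcpy
#include <libraw/libraw.h>

// Copy packed LibRaw output rows into a stride-aligned Windows bitmap buffer
void copy_rows(const libraw_processed_image_t *image, unsigned char *bmpData)
{
    int rowBytes = image->width * image->colors * (image->bits / 8); // packed source row
    int stride = (rowBytes + 3) & ~3;                                // destination row, padded to 4 bytes
    for (int y = 0; y < image->height; y++)
        memcpy(bmpData + (size_t)y * stride, image->data + (size_t)y * rowBytes, rowBytes);
}

If the destination bitmap is stored bottom-up, write to destination row (image->height - 1 - y) instead of row y.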
It's not about C#. I am just calling the C methods provided by LibRaw. I do not even change the bytes in the data field of libraw_processed_image_t; I just copy it to the memory allocated for the Windows bitmap.
BTW, I forgot to mention that I am also using the following LibRaw setter methods before processing:
libraw_set_output_bps -> 8
libraw_set_output_color -> 0
Update:
I also tried the libraw_dcraw_ppm_tiff_writer method with the TIFF option, which creates a correct, color TIFF file!
Sorry, I know nothing about C# and Windows bitmaps.
According to your screenshots, there is a 'line sync' error. Is there any possibility that the source data for the bitmap should have every row aligned (to 4 or 16 or however many bytes)?
I do not know why your result is monochrome (I have never seen your code and, again, know nothing about C#). LibRaw output is definitely not monochrome, according to your test with the PPM writer.
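One quick way to rule out a LibRaw-side problem is to check the fields of the structure returned by libraw_dcraw_make_mem_image() before copying it (a fragment; image is the returned pointer):

// Expect type == LIBRAW_IMAGE_BITMAP, colors == 3 and bits == 8 for an 8-bit RGB buffer
printf("type=%d  %dx%d  colors=%d  bits=%d  data_size=%u\n",
       (int)image->type, image->width, image->height,
       image->colors, image->bits, image->data_size);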
Yes, it creates a PPM file.
But what I would like is to create a simple BMP file.
Does mem_image_sample.cpp work as expected with this file?
I see two possible ways to determine the real maximum value (a sketch of the first follows the list):
1) Use the camera-provided linear_max values (if any).
2) And/or analyze the histogram: ignore hot pixels (say 0.01% of pixels, or a custom sensitivity), compare the result with imgdata.color.maximum (the format-specified maximum), and decrease the maximum only if the calculated histogram peak is (say) one stop or more below color.maximum.
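A minimal sketch of option 1), using the C++ API and assuming the camera actually filled imgdata.color.linear_max (zero means the value is not provided; the file name is hypothetical):

#include <libraw/libraw.h>

int main()
{
    LibRaw proc;
    if (proc.open_file("photo.cr2") != LIBRAW_SUCCESS) return 1;
    proc.unpack();
    // Pick the smallest non-zero camera-provided linear_max as the saturation point
    long sat = 0;
    for (int c = 0; c < 4; c++) {
        long lm = proc.imgdata.color.linear_max[c];
        if (lm > 0 && (sat == 0 || lm < sat)) sat = lm;
    }
    if (sat > 0) proc.imgdata.params.user_sat = (int)sat; // same idea as dcraw_emu -S
    proc.dcraw_process();
    return 0;
}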
I need to come up with a way to detect this incorrect value and set a proper saturation value. Doing what you suggested earlier, inspecting the histogram and discarding higher bins with no values in them, works for this RAW file. But it breaks when you have a dark exposure (lots of zeros in the histogram) and the camera reports the correct saturation level (see this RAW file as an example). Using the method you've described, the RAW comes out with extreme noise because it is erroneously being brightened (due to a very low saturation value). How would you recommend going about this?
Also, if you don't mind answering some of the questions I had in my last post, I'd appreciate it. It seems like when it comes to RAW processing, only a handful of folks have a good idea of how it's done: folks like Dave Coffin, you, and Adobe. This knowledge is pretty valuable, and it should be easily accessible so that our collective knowledge can advance.
Yes, adjust_maximum_thr is no help here because the data maximum value for this shot is not below the 'possible but incorrectly calculated' maximum.
The correct user_sat (-S parameter) for this shot is about 11300.
Hi Alex, thanks for the speedy response.
I'm not using LibRaw directly, so I can't set the maximum or dynamically compute the adjustment threshold. But even without any of this, I believe I should be able to test the effects of either of these by simply setting the corresponding values (-c for adjust_maximum_thr, and -S for saturation point) in the command line when calling `dcraw_emu`.
As I mentioned, `adjust_maximum_thr` has no effect. I've tried 0.01, 0.1, 0.5, 1.0, and 1.5: all pink skies. As for the saturation point, setting it to 16200 (and keeping -H 0 to clip highlights) doesn't help; in fact, it looks worse. Interestingly, reducing the saturation point to about 9000 (while clipping highlights) does help.
I'm not entirely familiar with the steps taken by the library. There seem to be details scattered around this forum, but no central piece of documentation that explains exactly what the conversion process is; the API docs just mention all the different flags, but provide little context on how it all ties together. All this to say, I imagine the saturation point is used to normalize the data to 16 bits, so (a rough sketch follows the list below):
1. Load raw values, with data in range [min_sensor_value, max_sensor_value]
2. Subtract black point, taking data to range [0, max_sensor_value - min_sensor_value]
3. Divide by saturation point, taking data to range [0.0, a number supposed to be 1, but could be more]
4. Clip (depending on highlight mode), taking data to range [0.0, 1.0]
5. Scale to `uint16_t` range, taking data to range [0, 65535]
6. Remaining steps.
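To make the question concrete, here is roughly the arithmetic I have in mind for steps 2-5 (my own simplified model, not necessarily LibRaw's exact internals; black and sat stand for the black level and saturation point):

// Sketch of steps 2-5 as I understand them; the divisor may need to be (sat - black)
// if the saturation point is specified before black subtraction
unsigned short normalize(int raw, int black, int sat)
{
    double v = (double)(raw - black) / (double)sat; // steps 2-3
    if (v < 0.0) v = 0.0;                           // clamp negatives
    if (v > 1.0) v = 1.0;                           // step 4: clip highlights
    return (unsigned short)(v * 65535.0 + 0.5);     // step 5: scale to uint16_t
}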
If the above is correct, then why does setting the saturation point to exactly 16200 still produce pink skies? You can test this on your end with the RAW file to see exactly what I mean. Thanks again for your help.
Regards,
Yusuf.
The adjust_maximum implementation operates on the real data maximum found in the file's visible area. In this specific file, some pixels in the G2 channel are way above the other channels (hot pixels?); that's why adjust_maximum is fooled.
See this RawDigger screenshot: https://www.dropbox.com/s/o7d3324n43aurln/screenshot%202020-09-13%2009.1...
Notes:
- black subtraction is turned off, which is why the entire image is pink
- the matte overlay over most of the image is the selection
- both the full-area and selection stats (the two upper arrows) show a G2 channel maximum of 16200 (black not subtracted), so the problem is not in edge pixels but in the image area.
Possible solutions:
- implement your own adjust_maximum that ignores outliers, e.g. by calculating a full histogram and ignoring upper bins with only 1-3-10 pixels in them (sketched below)
- use imgdata.color.linear_max[] as the real image maximums.
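A rough sketch of the first option, using the C++ API: it assumes Bayer data is available in imgdata.rawdata.raw_image after unpack(), and the outlier budget of 10 pixels is an arbitrary illustration value:

#include <libraw/libraw.h>
#include <vector>

// Estimate a saturation value while ignoring a handful of hot pixels (illustrative only)
int estimate_maximum(LibRaw &proc, long outlier_budget = 10)
{
    const libraw_image_sizes_t &S = proc.imgdata.sizes;
    const unsigned short *raw = proc.imgdata.rawdata.raw_image; // Bayer data after unpack()
    std::vector<long> hist(65536, 0);
    for (int row = 0; row < S.height; row++)
        for (int col = 0; col < S.width; col++)
            hist[raw[(row + S.top_margin) * S.raw_width + col + S.left_margin]]++;
    long seen = 0;
    for (int v = 65535; v > 0; v--) {         // walk down from the top of the histogram
        seen += hist[v];
        if (seen > outlier_budget) return v;  // first level backed by more than a few pixels
    }
    return (int)proc.imgdata.color.maximum;   // fallback: format-specified maximum
}

The result could then be assigned to imgdata.params.user_sat before dcraw_process().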
After dealing with a "kind of new to me" way of reading basic data out of EXIF ...
Bingo!!!
pyexiftool worked awesome; it reads every image format (if it has EXIF), and if not, it's because the image has no EXIF at all.
Thanks a lot! Will check them all ;)
Or exiftool just as well.
There are a few Python wrappers already available:
https://github.com/smarnach/pyexiftool
https://github.com/guinslym/pyexifinfo
https://hvdwolf.github.io/pyExifToolGUI/
LibRaw is not about (full) metadata extraction, it is about raw data decoding (although it decodes some metadata too).
Probably the Exiv2 library (with a Python wrapper) is better suited for your task.
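For completeness, a small sketch of the limited metadata LibRaw itself does expose after open_file() (C++ API; the file name is hypothetical):

#include <libraw/libraw.h>
#include <cstdio>

int main()
{
    LibRaw proc;
    if (proc.open_file("photo.nef") != LIBRAW_SUCCESS) return 1;
    // Basic identification and shooting data decoded alongside the raw bitmap
    printf("%s %s, ISO %g, %g s, f/%g\n",
           proc.imgdata.idata.make, proc.imgdata.idata.model,
           proc.imgdata.other.iso_speed, proc.imgdata.other.shutter,
           proc.imgdata.other.aperture);
    return 0;
}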
Glad to know that the problem has been solved.
Oh boy... I think I owe you a wine bottle.
Thanks a lot. Thanks for everything. That was such a stupid error.
With 0.19 I was actually flipping the image. That is why it was working.
Row order reversing will work differently for odd and even row counts.
Imagine the pattern:
RGRGRGR (row #0)
GBGBGBG (row #1)
RGRGRGR (row #2)
For an even number of rows (say 2), the reversed pattern will be:
GBGB.. (former row #1)
RGRG... (former row #0)
For an odd number of rows, the reversed pattern is the same as the original one.
So, if BOTTOM-UP really changes the Bayer pattern, you need to ensure that it works correctly for both odd and even row counts.
Alternatively, calculate your pattern for the lower-left corner, not for the top-left:
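For instance, LibRaw's COLOR() helper returns the color index of any pixel, so the lower-left pattern can be read off directly (a sketch with the C++ API; proc is assumed to be a LibRaw instance that has already opened and unpacked the file):

// Color indices (0=R, 1=G, 2=B, 3=G2) of the 2x2 Bayer cell at the lower-left corner
int last_row = proc.imgdata.sizes.iheight - 1;
int c_bottom_left  = proc.COLOR(last_row, 0);      // bottom row, first column
int c_bottom_right = proc.COLOR(last_row, 1);
int c_above_left   = proc.COLOR(last_row - 1, 0);  // one row above the bottom
int c_above_right  = proc.COLOR(last_row - 1, 1);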
When I use the following:
ret = libraw_unpack(raw);
...
ret = libraw_dcraw_process(raw);
...
libraw_processed_image_t *image = libraw_dcraw_make_mem_image(raw, &ret);
it works.
Now my question is: is it because of the change of the margin that I have this issue? To be sure, I would need a flag to know whether the margin was changed or not. Is there such a flag? If not, would it be possible to add one? Or is there a way to know it by analyzing some data?
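For what it's worth, one way to see what LibRaw reports about the margins and sizes for a given file is to print them after unpacking (a sketch with the C++ API; proc is an opened LibRaw instance, and the 0.19 vs 0.20 values would have to be compared per file):

// Sizes and active-area margins LibRaw derives for the current file
printf("raw %dx%d, visible %dx%d, output %dx%d, top_margin %d, left_margin %d\n",
       proc.imgdata.sizes.raw_width, proc.imgdata.sizes.raw_height,
       proc.imgdata.sizes.width, proc.imgdata.sizes.height,
       proc.imgdata.sizes.iwidth, proc.imgdata.sizes.iheight,
       proc.imgdata.sizes.top_margin, proc.imgdata.sizes.left_margin);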
There is something between 0.19 and 0.20 that makes my code not work anymore. It is likely there is something wrong in my code; something I do not check carefully enough to ensure the right pattern.
LibRaw's demosaic uses the same imgdata.idata.filters field to determine a specific pixel's color.
I do not see any problem with 2000D/1200D/1300D files and LibRaw 0.20.
Could you please reproduce the problem using LibRaw calls or samples (e.g. dcraw_emu) only?
I do not know how BOTTOM-UP is interpreted by the software you use to check. If rows are reordered but the Bayer pattern is not, this will result in an incorrect demosaic.
It is just a flag to switch the Bayer pattern to match the right one, as you said ;). And in the case where I save to TIFF, it is not used.
Try to output files with an even row count. If my hypothesis is right, this will result in the same behaviour for both cameras.
I thought it was something like that. BUT ....
I notice inconsistent behavior between the 1200D and 1300D, where the (effective) image size is the same (5202x3464 pixels) and even.
Maybe I'm doing something wrong, but ... I don't see what or where, as I apply the same process to each image.
In LibRaw 0.19 the output height was odd for both cameras.