You are multiplying _after_ normalization, and you are using WB coefficients that are only valid before your multiplication.
You have purple highlights because you are not applying proper clipping. To operate in the energy domain you need a better picture of the spectral response, both before and after demosaicking.
1) White balance should be applied before interpolation (the half and bilinear demosaics can handle non-balanced data, but the others cannot).
2) I could not understand this step:
"Multiply each pixel by its bayer filter colour times 2 for green and times 4 for red/blue"
Is it a 'poor man's approximate white balance'? If so, the WB values should be adjusted to 0.5/0.25 :)
3) rgb_cam is the 'camera to sRGB' matrix. If your output space is not sRGB, the color profile matrix (rgb_cam) should be adjusted to the output space (see the convert_to_rgb() source for an example)
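The ordering in point 1 can be sketched in plain Python. This is a toy illustration only: the RGGB layout, the coefficient values, and the function name are assumptions for the example, not LibRaw internals.

```python
# Sketch: apply white balance to a Bayer mosaic BEFORE demosaicking.
# Assumed RGGB 2x2 CFA layout (illustrative, not from LibRaw).
CFA = [["R", "G"], ["G", "B"]]

def white_balance_mosaic(mosaic, wb):
    """Scale each photosite by the coefficient of its CFA colour.

    wb is normalized so that green == 1.0; results are clipped to 1.0
    so blown highlights stay neutral instead of turning purple/pink.
    """
    out = []
    for y, row in enumerate(mosaic):
        out_row = []
        for x, v in enumerate(row):
            colour = CFA[y % 2][x % 2]
            out_row.append(min(v * wb[colour], 1.0))
        out.append(out_row)
    return out

balanced = white_balance_mosaic(
    [[0.30, 0.60], [0.55, 0.20]],      # tiny normalized mosaic
    {"R": 2.0, "G": 1.0, "B": 1.5},    # as-shot-style multipliers
)
print(balanced)
```

Only after this step would the (balanced) mosaic be handed to an interpolation algorithm; only the simplest demosaics tolerate the reversed order.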
I could well be barking up the wrong tree, but doesn't the code handle merging of 4-shot Pentax files, which is really quite a similar concept (afaict)?
Luckily, I'm not too bothered by this (though one of my users is nagging me about it).
Yes, use the shot_select parameter to extract the second frame.
Correct processing/merging/etc. is the task of your code; LibRaw::dcraw_process() does not do merging.
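In outline, this is a sketch rather than LibRaw code: with the real C++ API you would unpack the same file twice, setting imgdata.params.shot_select to 0 and then 1, and the combining step below (a plain per-pixel average, chosen only for illustration) is entirely the caller's responsibility.

```python
# Toy merge step: LibRaw hands you each subframe separately;
# how they are combined is up to your own code.

def merge_subframes(frame_a, frame_b):
    """Average two equally sized raw frames, pixel by pixel."""
    if len(frame_a) != len(frame_b):
        raise ValueError("subframes must have the same size")
    return [(a + b) / 2 for a, b in zip(frame_a, frame_b)]

merged = merge_subframes([100, 200, 300], [110, 190, 310])
print(merged)  # [105.0, 195.0, 305.0]
```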
https://www.libraw.org/comment/5233#comment-5233
Do you have a schedule for this?
yes
Is it planned to support the Canon EOS R?
Maybe this helps: https://github.com/lclevy/canon_cr3
Is there a getter for libraw_internal_data.unpacker_data.fuji_layout or will I need to sub-class to get at that?
I found is_fuji_rotated() which returns libraw_internal_data.internal_output_params.fuji_width
Thanks
Yes, this is the right piece of code.
I couldn't locate the exact version I'm using on GitHub (I'm using version 18.8). The code in question reads as follows:
We do not have any plans to drop SuperCCD support:
- raw data is extracted as is (in two subframes), keeping the sensor aspect ratio
- processing is adapted from dcraw
Also, we do not have any plans to improve the processing part; our goal is raw data and metadata extraction.
I do not know what exact version you use. Could you please use GitHub URLs with #L[lineno] markers in the URL to point to the exact version and exact line.
Just to be sure, can you confirm that the code at line 2783 in libraw_cxx.cpp is the relevant code?
Thanks
Please don't drop Fujitsu Super-CCD support, if that's done, people who use my software won't be able to reprocess old images (which the astrophotography folks often do).
I'll take a look at raw2image_ex() to see if I can understand it.
Dave
Yes, Fuji Super-CCD is completely different.
Look into raw2image_ex() source for details.
BTW, today, in 2019, it is a good enough idea to drop Super-CCD support.
AFAIK, this is the vendor-specified value for the white point.
It's great that you get it from the CR2 file, but you didn't answer my question - does it represent the "white level" or not? And as an aside is it normal to see pixel values > that level?
For Canon cameras, imgdata.color.maximum is set from the metadata provided by the vendor in the CR2 file.
Not doing so results in 'pink highlights' problem.
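A toy illustration of that clipping step (the values and function name are assumptions for the example; `maximum` stands in for imgdata.color.maximum): unless every channel is clipped to the same white level, a fully blown pixel ends up with the ratio of the WB multipliers instead of neutral white.

```python
def balance_and_clip(rgb, maximum, wb):
    """Apply WB multipliers, then clip every channel to the white level."""
    return [min(v * m, float(maximum)) for v, m in zip(rgb, wb)]

maximum = 16383                   # e.g. a 14-bit vendor white level
wb = (2.0, 1.0, 1.5)              # illustrative as-shot multipliers

blown = (16383, 16383, 16383)     # sensor saturated in all channels
print(balance_and_clip(blown, maximum, wb))  # [16383.0, 16383.0, 16383.0]

# Without the clip, the same pixel would be (32766.0, 16383.0, 24574.5):
# red-heavy, i.e. the 'pink highlights' look.
```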
By default, LibRaw *uses* the DNG color matrix; quote from the docs:
1 (default): use embedded color profile (if present) for DNG files (always); for other files only if use_camera_wb is set;
RawDigger settings are different:
- 0 for 'built-in color profile'
- 3 for 'Embedded in RAW'
Thanks for your response!
Yes, I thought RawDigger used LibRaw.
I set no_auto_bright = 0, and it didn't really change the result. The LibRaw output is still much darker than the RawDigger output.
Are there any other settings I should be changing in LibRaw?
(Again, I suspect LibRaw is not using the DNG color matrix, but I'm not sure...)
This is a list of 'surely supported' (tested) cameras.
UPD: the list is published 'just for information'; LibRaw does not use it internally.
Come on - you know what the word 'supported' means, as do I - after all, you publish a list of supported cameras!
Could you please define 'supported'?