Yes, is_fuji_rotated() can be used to identify the 'rotated processing' used in LibRaw/dcraw.c.
COLOR(r,c) is correct in raw-coordinates (and FC(r,c) is correct in imgdata.image[] coordinates)
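A minimal sketch of both checks together, assuming a build where is_fuji_rotated() is available (error handling trimmed):

    #include "libraw/libraw.h"
    #include <cstdio>

    int main(int argc, char **argv)
    {
      if (argc < 2) return 1;
      LibRaw rp;
      if (rp.open_file(argv[1]) != LIBRAW_SUCCESS) return 1;
      if (rp.unpack() != LIBRAW_SUCCESS) return 1;

      // Nonzero means the Fuji 45-degree 'rotated processing' path applies
      if (rp.is_fuji_rotated())
        printf("Super-CCD (Fuji rotated) file\n");

      // CFA colors of the top-left 2x2 block in raw coordinates;
      // indices map to channel names via imgdata.idata.cdesc
      for (int r = 0; r < 2; r++)
        for (int c = 0; c < 2; c++)
          printf("COLOR(%d,%d) = %c\n", r, c,
                 rp.imgdata.idata.cdesc[rp.COLOR(r, c)]);
      return 0;
    }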
Hi Alex,
Is the API call is_fuji_rotated() the one to use to identify Super CCD sensors?
Also, the number of pixels in imgdata.rawdata.raw_image and in imgdata.image appears to be consistent with raw_height/raw_width and height/width respectively.
Dinesh
Alex,
I think I figured it out.
For Super-CCD sensors, a 45 degree rotation is applied to the image, so the height/width fields in sizes report the bounding box needed to store this rotated image. That is, if you apply a 45 degree rotation to an image with dimensions 2177 rows and 2894 cols, you will need a buffer of 3587 x 3588 pixels to store it, with a band of black pixels around the image. So, image contains height*width*4*2 bytes.
A few follow up questions I have are:
1. Does the COLOR() function work for these types of images? For this specific image, cdesc is "RGBG" and COLOR(0, 0), COLOR(0, 1), COLOR(1, 0) and COLOR(1, 1) are 0, 2, 1, 3 respectively, which implies an "RBGG" Bayer pattern. I don't think that is correct, though, because these are not really Bayer images.
2. Which fields/props in imgdata do I use to detect these kinds of files so that I can update my copying code suitably? The call to raw2image() is not needed for typical Bayer or X-Trans sensors; you can copy from raw_image or color3_image etc.
Appreciate your help.
The imgdata.image buffer is NOT iheight*iwidth*2 bytes (it holds four 16-bit components per pixel).
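For reference, a minimal sketch of copying imgdata.image after raw2image(), assuming the four-component ushort layout (copy_image_buffer is a hypothetical helper, not a LibRaw API; it assumes open_file() and unpack() already succeeded):

    #include "libraw/libraw.h"
    #include <cstring>
    #include <vector>

    // imgdata.image holds iheight*iwidth pixels of 4 ushort components,
    // i.e. iheight*iwidth*4*2 bytes -- not iheight*iwidth*2.
    std::vector<unsigned short> copy_image_buffer(LibRaw &rp)
    {
      rp.raw2image(); // populates imgdata.image from the unpacked raw data
      const size_t pixels =
          (size_t)rp.imgdata.sizes.iheight * rp.imgdata.sizes.iwidth;
      std::vector<unsigned short> out(pixels * 4);
      std::memcpy(out.data(), rp.imgdata.image,
                  out.size() * sizeof(unsigned short));
      return out;
    }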
So raw2image() works the same way for these files, i.e. only one of the 4 channels will have a valid value, and this will correspond to COLOR(r, c).
Is the size of the buffer (in bytes) allocated by raw2image() equal to iheight*iwidth*2? I need this information to allocate a suitable output buffer in my code.
I think it will be faster and more productive to try raw2image() than to discuss this here.
Alex,
This API call appears to affect only the output width/height. Should I treat these as the visible dimensions (height/width) and use them to extract just the visible portion of the RAW data?
Does this mean that COLOR(r, c) does not work for these formats?
To reiterate, what I am trying to do is extract the visible portion of the RAW image and implement the pipeline myself. Will calling raw2image() on this image give me only the visible portion of the CFA? If so, what dimensions should I assume the resulting buffer has?
So, is there a way to identify whether the file is from a Super-CCD sensor, so that I can branch my code to handle it separately?
Dinesh
https://www.libraw.org/docs/API-CXX.html#adjust_sizes_info_only
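For reference, a minimal sketch of using adjust_sizes_info_only() to inspect the output dimensions without any unpacking or processing:

    #include "libraw/libraw.h"
    #include <cstdio>

    int main(int argc, char **argv)
    {
      if (argc < 2) return 1;
      LibRaw rp;
      if (rp.open_file(argv[1]) != LIBRAW_SUCCESS) return 1;

      // Recomputes iwidth/iheight (and flip) as postprocessing would,
      // without touching the image data itself
      rp.adjust_sizes_info_only();
      printf("output size: %d x %d\n",
             rp.imgdata.sizes.iwidth, rp.imgdata.sizes.iheight);
      return 0;
    }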
Alex,
Thanks for that. I will read up on Fuji Super CCD images and go over the dcraw code.
Does that mean that some of the properties in the sizes field, such as width and height, are incorrect/invalid for these files? I was just trying to read out the visible part of the raw image so that I can apply my own camera pipeline.
Also, how do I identify whether a file comes from a Super-CCD sensor?
This is the way Fuji Super-CCD files are processed (the method is borrowed from dcraw.c):
- the raw data is not 2:3 but stretched in one dimension (different for different cameras)
- the color order is not rg/gb; instead there are green columns and r/b columns
On processing:
- the entire image is rotated 45 degrees to get a normal Bayer pattern with diagonal greens (it is also unstretched to get the 2:3 aspect ratio)
- standard debayering is used
- the output is rotated back and cropped
Matrix conversion is not gamut limited (unless you clip negative values in some intermediate step).
ccm is 'the camera matrix parsed from metadata, period'; it is preserved as-is (with possible value normalization). Usually it is 'just some color data with unknown camera/vendor-specific meaning'.
That makes perfect sense. So the matrix to convert from camera space to XYZ becomes xyz_rgb*rgb_cam. Does that impact the gamut of colors that can be represented, since Camera Space -> sRGB is a narrowing conversion?
But what exactly is ccm? I know in your earlier comment you mentioned
"ccm is also 'camera color matrix' retrieved from RAW in 'as is' (excl. normalizing) form."
but could you provide more details?
Thanks,
Dinesh
ccm is not similar to cmatrix.
To convert to XYZ, use cmatrix or rgb_cam multiplied by the sRGB-to-XYZ conversion matrix; see the convert_to_rgb() source for details.
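A minimal sketch of building a camera-to-XYZ matrix that way; the sRGB-to-XYZ (D65) coefficients below are the standard ones used in dcraw's xyz_rgb[][]:

    #include "libraw/libraw.h"
    #include <cstdio>

    int main(int argc, char **argv)
    {
      if (argc < 2) return 1;
      LibRaw rp;
      if (rp.open_file(argv[1]) != LIBRAW_SUCCESS) return 1;

      // Standard sRGB -> XYZ (D65) matrix, as in dcraw's xyz_rgb[][]
      static const double srgb2xyz[3][3] = {
          {0.412453, 0.357580, 0.180423},
          {0.212671, 0.715160, 0.072169},
          {0.019334, 0.119193, 0.950227}};

      // cam2xyz = srgb2xyz * rgb_cam (rgb_cam is 3x4: four input channels)
      double cam2xyz[3][4] = {{0}};
      for (int i = 0; i < 3; i++)
        for (int j = 0; j < 4; j++)
          for (int k = 0; k < 3; k++)
            cam2xyz[i][j] += srgb2xyz[i][k] * rp.imgdata.color.rgb_cam[k][j];

      for (int i = 0; i < 3; i++)
        printf("%f %f %f %f\n", cam2xyz[i][0], cam2xyz[i][1],
               cam2xyz[i][2], cam2xyz[i][3]);
      return 0;
    }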
Alex,
A quick follow-up. So, based on your descriptions, cmatrix is the transformation from Camera Space to sRGB.
Is ccm similar to cam_xyz i.e. transformation between Camera Space and XYZ?
I have a DNG file shot on a Google Pixel 3 that has all zeros for cam_xyz and ccm. rgb_cam and cmatrix have the same values. The profile is also not available.
For such files, how do I convert from input space/camera space to XYZ?
I'm not sure that the margins are also a multiple of 6 in 0.19. Extra effort is needed to analyze this (source inspection, maybe a debugger session), so I won't do it.
If not, xtrans[][] is for the visible area and xtrans_abs[][] is for the entire sensor (make sure you use the LibRaw-provided margins).
0 is 'channel #0', 1 is 'channel #1', etc. The index-to-name mapping is in the imgdata.idata.cdesc[] string.
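A minimal sketch of dumping the visible-area pattern with indices mapped to names through cdesc (iterate xtrans_abs[][] the same way for the whole-sensor variant):

    #include "libraw/libraw.h"
    #include <cstdio>

    int main(int argc, char **argv)
    {
      if (argc < 2) return 1;
      LibRaw rp;
      if (rp.open_file(argv[1]) != LIBRAW_SUCCESS) return 1;

      // Each entry of the 6x6 X-Trans pattern is a channel index
      // into imgdata.idata.cdesc (e.g. "RGBG")
      for (int r = 0; r < 6; r++) {
        for (int c = 0; c < 6; c++)
          printf("%c", rp.imgdata.idata.cdesc[
                           (int)rp.imgdata.idata.xtrans[r][c]]);
        printf("\n");
      }
      return 0;
    }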
Alex,
Thanks for the clarification. I am currently on 0.19.5, so I wanted to know if the information about margins still holds.
Also, is my understanding of what xtrans represents, i.e. 0 -> R, 1 -> G, 2 -> B, correct?
LibRaw-provided margins (left_margin, top_margin) are a multiple of 6 (at least in the 0.20 beta and in the latest snapshots), so xtrans and xtrans_abs are the same.
I have had trouble with this for years:
- Not there
- Not where it is supposed to be
- Permission problems
- Different versions
In the end it is so much easier to use a different compiler, like MinGW.
I think that matching pixels by exact values will not work well due to value differences (because of noise, for example).
Thanks for the quick reply. Then I think I will need to do heavy processing to get useful data.
Can I ask your opinion on this matter:
- I am currently working on "image matching with a stereo camera". I am looking for a way to match the same pixel between the left image and the right image.
- With the 8-bit data in JPEG, sometimes the matching will not be correct; for example, a pixel value of 112.02 becomes 112 and a pixel value of 111.8 also becomes 112. Therefore they might be considered a match, but that is not true.
Do you think that by using the raw data for processing I can get a better matching result? Or is raw data just too noisy for the matching task?
Thanks
Ok. Now we are sure. Thanks for looking into it.
Thank you for the sample.
In this sample:
- there are no EXIF/GPS records
- all location information is contained in the XMP block
So LibRaw does not read it.
I just sent you a link to the above address. The image was taken directly from the camera this time, so it was not modified by exiftool.
LibRaw postprocessing is very similar to dcraw.c's, so the excellent 'dcraw annotated and outlined' site (https://ninedegreesbelow.com/files/dcraw-c-code-annotated-code.html) may be useful.
Short answer: No
RAW values unpacked from a RAW file are:
- in linear gamma
- white balance not applied
- black level not subtracted (for most cases; some cameras do black subtraction).
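So if you need black-subtracted data you must do it yourself. A minimal sketch, assuming the scalar black level in imgdata.color.black is enough (per-channel cblack handling omitted):

    #include "libraw/libraw.h"
    #include <cstdio>

    int main(int argc, char **argv)
    {
      if (argc < 2) return 1;
      LibRaw rp;
      if (rp.open_file(argv[1]) != LIBRAW_SUCCESS) return 1;
      if (rp.unpack() != LIBRAW_SUCCESS) return 1;

      // raw_image holds linear values: no white balance applied and,
      // for most cameras, no black-level subtraction
      if (!rp.imgdata.rawdata.raw_image) return 1; // not a Bayer/X-Trans raw

      const int rw = rp.imgdata.sizes.raw_width;
      unsigned short v = rp.imgdata.rawdata.raw_image[100 * rw + 100];
      long corrected = (long)v - (long)rp.imgdata.color.black;
      if (corrected < 0) corrected = 0;
      printf("raw=%u black=%u corrected=%ld\n",
             (unsigned)v, rp.imgdata.color.black, corrected);
      return 0;
    }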