I can now decode images from Canon DSLRs so that only the active (visible) area is shown. My test code now looks like this:

m_raw_proc->open_buffer(data, size);
m_raw_proc->unpack();

int pos = 0;
int top_margin = m_raw_proc->imgdata.sizes.top_margin;
int left_margin = m_raw_proc->imgdata.sizes.left_margin;
int raw_pitch = m_raw_proc->imgdata.sizes.raw_pitch / 2; // raw_pitch is in bytes; divide by 2 for 16-bit pixels
for (int r = 0; r < m_raw_proc->imgdata.sizes.height; r++)
{
    for (int c = 0; c < m_raw_proc->imgdata.sizes.width; c++)
    {
        // Copy only the visible area, skipping the masked borders
        buffer[pos] = m_raw_proc->imgdata.rawdata.raw_image[(r + top_margin) * raw_pitch + left_margin + c];
        pos++;
    }
}
After this code has run, I have undebayered data in buffer[]. This works very well, but I have no idea how to apply the in-camera white balance. I would be grateful if you could explain how to apply the white balance, or point me to a resource that explains it. Indeed, is this even possible without using the dcraw functions?
Theoretical part:
1) Many (but not all) digital cameras have 'masked' (opaque) pixel areas (a black frame), so the image area is smaller than the full sensor area. These black pixels may be used for black-level calibration, banding suppression, etc. (exactly which area may be used is specific to the camera model, so we do not discuss it here).
2) The imgdata.rawdata.raw_image[] array contains the full sensor area decoded from the RAW file. It needs to be cropped during processing to exclude the black frame.
There are several variables in imgdata.sizes that describe the sensor area (a small example printing these fields follows the list):
- raw_width, raw_height - the full sensor size.
- raw_pitch - row pitch (in bytes! so divide it by 2 for raw_image and by 6 for color3_image) in the rawdata.* pointers. Usually raw_pitch is just raw_width * 2, but this is not always so (e.g. if the file is decoded via the DNG SDK).
- top_margin, left_margin - pixel coordinates of the top-left corner of the visible image area.
- width, height - size of the visible area (there is a special case for the Fuji Super-CCD sensors used in very old cameras; let's leave that aside).
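For orientation, a minimal sketch that just prints these fields, assuming the m_raw_proc object from the code above:

// Sketch: dump the geometry fields discussed above after unpack().
const libraw_image_sizes_t &S = m_raw_proc->imgdata.sizes;
printf("sensor: %d x %d, visible: %d x %d at (%d, %d), raw_pitch: %u bytes\n",
       S.raw_width, S.raw_height, S.width, S.height,
       S.left_margin, S.top_margin, S.raw_pitch);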
So, there are two ways to proceed:
A. Continue to use the imgdata.rawdata.raw_image array without copying it into imgdata.image.
You'll need to change your all-pixel loops to something like this (I'll skip some imgdata.sizes prefixes to shorten the statements):
// srow - source row, drow - dest row; same for col
for (srow = imgdata.sizes.top_margin, drow = 0; srow < top_margin + height; srow++, drow++)
    for (scol = imgdata.sizes.left_margin, dcol = 0; scol < left_margin + width; scol++, dcol++)
        buffer[drow * width + dcol] = imgdata.rawdata.raw_image[srow * (raw_pitch / 2) + scol];
B. Use the LibRaw::raw2image() call:
This call will allocate the imgdata.image[..][4] array with 4 components per pixel.
After this call, 3 of the 4 components are zero; only image[row*width+col][COLOR(row,col)] is non-zero.
If you perform debayering in your own code, raw2image may not be the optimal choice because of the extra memory use. You may consider de-bayering in place (directly in the imgdata.image[][4] array) to save memory.
Feel free to ask if you need additional explanations.
Further to my original post:
My code works for Nikon DSLR cameras via imgdata.rawdata (albeit the images seem to be under-exposed), but images from Canon DSLRs show bad data near the top of each image. I've been reading various forum posts and it seems that I need to call raw2image() in order to get data that contains only the visible (active) pixels, but I am struggling to get undebayered data.
It is my understanding that after calling raw2image(), and without calling dcraw_process(), I should have an undebayered dataset in imgdata.image[][] which contains only visible pixels. Is that correct?
What I need to do is copy the data from imgdata.image[][] into a one-dimensional 16-bit integer array (of size width * height) that is saved as a FITS or TIFF file, which can then be debayered later by my post-processing application.
Here is a code example of what I am currently doing:
if ((ret = m_raw_proc->open_buffer(data, size)) != 0)
{
    // Handle error
}
if ((ret = m_raw_proc->unpack()) != 0)
{
    // Handle error
}
if ((ret = m_raw_proc->raw2image()) != 0)
{
    // Handle error
}

m_width = m_raw_proc->imgdata.sizes.iwidth;
m_height = m_raw_proc->imgdata.sizes.iheight;
for (int n = 0; n < m_width * m_height; n++)
{
    buffer[n] = m_raw_proc->imgdata.image[n][0];
}
However, when I try to debayer the image in an external application, the result is heavily biased towards a specific primary colour. How do I correctly access the undebayered data in imgdata.image[][]?
Followup:
1) The colors are listed in the imgdata.idata.cdesc string ("RGBG" in most cases, "CMYG" for some very old cameras, etc.).
2) For RGBG (modern Bayer), the values returned by COLOR() are:
0 - Red
1 - Green
2 - Blue
3 - Green (for some cameras the two greens differ in black level, amplification, or even color response)
https://www.libraw.org/docs/API-CXX.html#COLOR will return the pixel color for (row, col). Of course, these colors repeat across rows/columns (in a 2x2 pattern for normal Bayer and 6x6 for X-Trans), so you do not need to call it for each pixel.
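A minimal sketch of that copy loop, assuming the same buffer and m_raw_proc names as in the earlier snippets:

// Sketch: copy the single non-zero component of each imgdata.image[][4] entry
// into a flat 16-bit buffer (iwidth/iheight are the visible dimensions).
int iwidth = m_raw_proc->imgdata.sizes.iwidth;
int iheight = m_raw_proc->imgdata.sizes.iheight;
for (int row = 0; row < iheight; row++)
{
    for (int col = 0; col < iwidth; col++)
    {
        int ch = m_raw_proc->COLOR(row, col); // CFA colour index at this position
        buffer[row * iwidth + col] = m_raw_proc->imgdata.image[row * iwidth + col][ch];
    }
}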
Thank you Alex, this is exactly what I wanted. I can now get a raw undebayered buffer from LibRaw and display it on my image preview screen as a greyscale image. Of course, I see the Bayer pattern on the preview display. Ultimately, I would like to debayer the buffer using a nearest-neighbour algorithm purely for the live preview screen, and save undebayered TIFF files.
This leads me to my final question: is it possible for LibRaw to return the camera's Bayer pattern, for example RGGB, BGGR, GRBG or GBRG?
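One way to build such a pattern string from the cdesc string and COLOR() call described above (a sketch, assuming a normal 2x2 Bayer sensor and the m_raw_proc object from the earlier code):

// Sketch: derive the 2x2 CFA pattern (e.g. "RGGB") from cdesc and COLOR().
char pattern[5] = { 0, 0, 0, 0, 0 };
for (int row = 0; row < 2; row++)
    for (int col = 0; col < 2; col++)
        pattern[row * 2 + col] = m_raw_proc->imgdata.idata.cdesc[m_raw_proc->COLOR(row, col)];
// For an RGBG sensor this yields "RGGB", "BGGR", "GRBG" or "GBRG".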
The original RAW data (after LibRaw::unpack) is available via the imgdata.rawdata.* arrays (raw_image for Bayer, X-Trans or monochrome files, color3_image for 3-color non-Bayer files, color4_image for 4-color ones; only one of these pointers is non-NULL after the unpack() call).
Note: these arrays contain the unpacked RAW pixels without any adjustment/cropping:
- the masked (black-frame) pixels are in place; the pixel array size is imgdata.sizes.raw_width x raw_height
- the black level is not subtracted
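The earlier white-balance question is not answered explicitly in this thread, but since the note above points out that the black level is not subtracted, one common approach (a sketch, not taken from the thread) is to subtract the black level and scale each CFA value by the as-shot multiplier for its colour, which LibRaw exposes as imgdata.color.black and imgdata.color.cam_mul; buffer and m_raw_proc are the names from the earlier snippets:

// Sketch: black-level subtraction plus as-shot white balance on the visible-area
// CFA buffer filled earlier. cam_mul[] holds the camera's as-shot multipliers.
int width = m_raw_proc->imgdata.sizes.width;
int height = m_raw_proc->imgdata.sizes.height;
float black = (float)m_raw_proc->imgdata.color.black;  // per-channel cblack[] may also apply for some cameras
float green = m_raw_proc->imgdata.color.cam_mul[1];    // normalise multipliers to green
for (int row = 0; row < height; row++)
{
    for (int col = 0; col < width; col++)
    {
        int ch = m_raw_proc->COLOR(row, col);           // CFA colour of this pixel
        float mul = m_raw_proc->imgdata.color.cam_mul[ch];
        if (mul <= 0.0f) mul = green;                   // some cameras report 0 for the second green
        float v = (buffer[row * width + col] - black) * mul / green;
        if (v < 0.0f) v = 0.0f;
        if (v > 65535.0f) v = 65535.0f;
        buffer[row * width + col] = (unsigned short)v;
    }
}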
Hi!
I found that the 100 MP sensor is included in the LibRaw supported-camera list, but for different Phase One camera models. I am working with a Phase One iXG 100 MP and thought I would try what the output looks like after default demosaicing. The output TIFF is too greenish (which is not correct): dcraw_emu -v -dcbe -T input.IIQ
Any suggestions for this unsupported-camera issue? I have attached the output.
Well, one more question...
Beyond the core library, you also provide a few command-line tools with your installation package.
Could you add support for reporting header details for each included raw image when a file contains more than one frame? I hope this would let me discover whether the attributes of the two contained images differ, so that I can tell apart "High Resolution" and "Dynamic Range" dual-capture raw files by comparing their attributes.
I believe hardly any software at all is aware that raw files can contain more than one image frame.
Going from camera RGB to sRGB is a two-step conversion: first RGB to XYZ, as you mentioned. But as it is a linear process, the 3x3 conversion matrix should be appropriate for the camera model and the lighting condition (e.g. D65). From XYZ to sRGB is another conversion matrix.
Thanks Alex,
I now have my application working perfectly.
Amanda
Thank you very much!
dcraw_emu sample supports both TIFF output and cropping.
Please wait until public snapshot.
Is the code with CR3 support available in some private branch? I would like to test it if it is.
Could you please provide an image sample?
Answered in another sub-thread: https://www.libraw.org/comment/5396#comment-5396
Dear Sir:
Could you please share an image sample (e.g. upload it to Dropbox/WeTransfer/Mega.NZ/etc. and send the link to info@libraw.org)?
Also, as-shot/automatic white balance (the -w or -a switch for dcraw_emu) will most likely work.
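For example, combined with the command used above:
dcraw_emu -v -w -T input.IIQ (use the camera's as-shot white balance)
dcraw_emu -v -a -T input.IIQ (average the whole image for white balance)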
In the next public snapshot (this Fall).
Hi,
any idea when you plan to support these cameras?
Thank you.
Best Regards,
Thanks, problem solved.
Thanks Alex, it's working smoothly and does what I needed: unprocessed_raw -T Image.CR2
There is no direct alternative to dcraw's -d/-D in LibRaw's dcraw_emu.
Use the unprocessed_raw and/or 4channels samples to dump unaltered raw data.
Usually the combined (linked) matrix is calculated first, so the whole conversion happens in one step.
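A sketch of that combination step, where cam_to_xyz is a placeholder for whichever camera-RGB-to-XYZ matrix is in use (not a LibRaw field), and the XYZ-to-linear-sRGB matrix is the standard D65 one:

// Sketch: fold camera->XYZ and XYZ->linear-sRGB (D65) into one 3x3 matrix,
// so each pixel needs only a single matrix multiply.
void combine_matrices(double cam_to_xyz[3][3], double cam_to_srgb[3][3])
{
    static const double xyz_to_srgb[3][3] = {
        {  3.2406, -1.5372, -0.4986 },
        { -0.9689,  1.8758,  0.0415 },
        {  0.0557, -0.2040,  1.0570 }
    };
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
        {
            cam_to_srgb[i][j] = 0.0;
            for (int k = 0; k < 3; k++)
                cam_to_srgb[i][j] += xyz_to_srgb[i][k] * cam_to_xyz[k][j];
        }
}

Each white-balanced camera RGB pixel is then multiplied by cam_to_srgb to get linear sRGB, with the sRGB gamma curve applied afterwards.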
CR3 support is expected in the next public snapshot, this Fall.