Right, sorry Alex, I forgot those steps that you explained in the documentation of the C++ API.
Nonetheless, I don't understand what "imgdata.colordata.black" is. The documentation defines it as just "unsigned black", but unsigned what? Is that another way to say void* (a pointer to any type)? Or is "black" an actual structure? So I'm confused by all the "unsigned" declarations followed not by a usual type (as in "unsigned int") but by "black", "maximum", etc.
I only understand that calling subtract_black() will subtract black level "values" and black level "data", yet I don't understand the difference between "black level values" and "black level data" as written in http://www.libraw.org/docs/API-CXX-eng.html#subtract_black.
Could you shed some light on this "dark-ness"?
Indeed, my app is for astrophotography: we take stacks of so-called "dark images" and subtract them from the raw images. I wonder whether using black, cblack, ... would be redundant or, even worse, inconsistent with using calibration dark images taken ourselves. Understanding this dark, cblack, levels, data, ... will help me decide what I shall (not) use.
And merry Christmas to you!! (it's Dec 25th...)
Followup:
General raw processing steps are:
- black level subtraction
- white balance (and, possibly, data scaling: if you use integer processing it is better to use full range)
- demosaic
- color conversion to output color space
- gamma curve.
I guess you've forgotten to subtract the black level?
As a first step, use:
imgdata.color.black - the base black level
imgdata.color.cblack[0..3] - per-channel additions
This should be done before white balance.
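For illustration, a minimal sketch of that subtraction (assuming a Bayer image already unpacked into imgdata.rawdata.raw_image; the rawProcess object name, the clamp at zero, and the loop over the full raw frame are my own choices, not something the API prescribes):
unsigned base = rawProcess.imgdata.color.black;
unsigned *cbl = rawProcess.imgdata.color.cblack;      // per-channel additions
int pitch = rawProcess.imgdata.sizes.raw_pitch / 2;   // raw_pitch is in bytes
for (int r = 0; r < rawProcess.imgdata.sizes.raw_height; r++)
    for (int c = 0; c < rawProcess.imgdata.sizes.raw_width; c++)
    {
        // COLOR(r, c) returns the CFA channel index (0..3) at this position
        ushort &px = rawProcess.imgdata.rawdata.raw_image[r * pitch + c];
        unsigned level = base + cbl[rawProcess.COLOR(r, c)];
        px = (px > level) ? (ushort)(px - level) : 0;
    }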
Back to the white balancing:
I multiplied my channels as advised (divided by the green multiplier, and left the green channel untouched), but the result is far from the white balance I had with dcraw_process().
After unpack(), besides the demosaicing, is there anything I have to do before/after my white balancing to get something close to dcraw_process()?
Because if not, then I guess the OpenCV demosaicing algorithm is such that white balancing with the camera white balance multipliers does not apply the same way it did with dcraw_process().
Indeed, I saw that RawSpeed is not maintained as regularly as you maintain LibRaw, and that's why I've been reluctant to integrate RawSpeed. I can deal with LibRaw without RawSpeed; it's good enough.
Yes, RawSpeed is faster for Huffman-compressed formats (Canon CR2, Nikon NEF, some others).
Please note: LibRaw is tested with RawSpeed's 'master' branch (https://github.com/klauspost/rawspeed/tree/master) which is very old, last updated in May 2014.
The up-to-date 'develop' branch is untested; I'm not sure LibRaw will work with it.
My pseudo-code above is correct if you use sizes.raw_pitch.
Thanks for the clarification. My bottleneck is not there for now (I don't know anything about SSE or AVX...). I multiply the array by the R and B multipliers with OpenCL via OpenCV.
Regarding RawSpeed, would that change unpack() speed? At the moment, dealing with >20 MP images, it takes >1 s to unpack, but <0.2 s to demosaic with OpenCV + white balance. Ideally I'd like to make unpack() faster, <1 s.
Can I make unpack() faster with RawSpeed? How would RawSpeed affect your pseudo-code above for accessing the visible pixels?
Yes, usually the green multiplier is set to 1.0 (green is the strongest channel unless very warm light is used), so you may scale only the red/blue channels.
BTW, if SSE or AVX (vectorized) instructions are used to apply WB, it is cheaper to multiply green by 1.0 than to split, multiply, and re-join the pixel data.
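As an illustration, a sketch of that scaling applied in place to the CFA data (assuming black subtraction was already done; rawProcess, the 16-bit clamp, and the example numbers quoted in this thread are my additions):
float *mul = rawProcess.imgdata.color.cam_mul;      // order: R, G1, B, G2
float rmul = mul[0] / mul[1];                       // e.g. 1862/1024 ~ 1.818
float bmul = mul[2] / mul[1];                       // e.g. 1833/1024 ~ 1.790
int pitch = rawProcess.imgdata.sizes.raw_pitch / 2; // raw_pitch is in bytes
for (int r = 0; r < rawProcess.imgdata.sizes.raw_height; r++)
    for (int c = 0; c < rawProcess.imgdata.sizes.raw_width; c++)
    {
        int ch = rawProcess.COLOR(r, c); // 0=R, 1=G1, 2=B, 3=G2
        if (ch == 1 || ch == 3)
            continue;                    // greens are left untouched
        ushort &px = rawProcess.imgdata.rawdata.raw_image[r * pitch + c];
        float v = px * (ch == 0 ? rmul : bmul);
        px = (ushort)(v > 65535.f ? 65535.f : v);
    }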
So, here, if I choose to divide by, say, 1024 (the green multiplier), then I do not multiply the green channel at all, and I multiply the others by their multiplier value divided by 1024; is that how it goes?
These values are read from camera metadata and not altered.
Just normalize it: divide by the smallest non-zero value (or by the green multiplier for RGB cameras).
OK, in imgdata.color.cam_mul I read all 4 values above a thousand: 1862, 1024, 1833, 1024.
I'm confused by the scaling. Shouldn't I normalize this somehow? I'm used to values of 1.xxx, not values on the order of 10^3. What am I missing?
These coefficients are stored in imgdata.color.cam_mul[4].
Depending on the camera used, the 4th coefficient ('second green') may be:
- zero (use cam_mul[1] for the second green, or assume the two greens in the 2x2 matrix are the same)
- the same as cam_mul[1] (OK)
- different from cam_mul[1] (a real 4-color camera, like the old Olympus E-5xx series or the Sony F-828, or a CMYG camera like the Nikon E5700 or Canon G1)
For 4-color cameras you need to use a demosaic method suited to this case (4 colors in the 2x2 matrix, not two similar greens plus one R and one B).
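A short sketch of that normalization, covering the 'second green' cases above (assuming an RGB Bayer camera; the variable names are mine):
float m[4];
for (int i = 0; i < 4; i++)
    m[i] = rawProcess.imgdata.color.cam_mul[i]; // order: R, G1, B, G2
if (m[3] == 0.0f)
    m[3] = m[1];  // first case above: reuse the first green
float g = m[1];   // for RGB cameras, divide by the green multiplier
for (int i = 0; i < 4; i++)
    m[i] /= g;    // 1862, 1024, 1833, 1024 -> ~1.82, 1.0, 1.79, 1.0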
It's working, thank you.
After this I have yet to apply the white balancing. Previously, when using dcraw_process(), I was satisfied with the white balance when using:
rawProcess.imgdata.params.use_camera_wb = 1;
How can I get the exact same coefficients so that I can apply them to my demosaiced RGB channels?
Each row contains non-visible pixels at both ends:
- on the left (0 ... left_margin-1)
- and on the right (from the end of the visible area to the end of the row)
Also, rows may be aligned for efficient SSE/AVX access. LibRaw's internal code does not align rows, but if you use LibRaw+RawSpeed, RawSpeed will align rows to 16 bytes.
So, it is better to use imgdata.sizes.raw_pitch (it is in bytes, so divide by 2 for Bayer data) instead of raw_width.
So, the right (pseudo)code is something like this (add your data object name before imgdata to get correct code):
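A sketch of such access, assuming row and col are visible-area coordinates (my reconstruction from the margin/raw_pitch description above, not verbatim from the post):
int pitch = imgdata.sizes.raw_pitch / 2; // raw_pitch is in bytes, /2 for ushort
ushort *line = imgdata.rawdata.raw_image
               + (imgdata.sizes.top_margin + row) * pitch;
ushort value = line[imgdata.sizes.left_margin + col];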
After unpack(), I'm trying to use raw_image as input to OpenCV demosaicing (with cvtColor and type CV_BayerGB2RGB), and thus avoid using dcraw_process() for demosaicing. OpenCV here requires a 1-channel image. Yet when I use the raw data after unpack(), starting from the first visible pixel, it gives me a rubbish image.
As far as you know, at least on the LibRaw side, am I getting the raw CFA data wrong? See below:
int raw_width = (int) rawProcess.imgdata.sizes.raw_width;
int top_margin = (int) rawProcess.imgdata.sizes.top_margin;
int left_margin = (int) rawProcess.imgdata.sizes.left_margin;
int first_visible_pixel = (int) (raw_width * top_margin + left_margin);
cv::Mat imRaw(naxis2, naxis1, CV_16UC1);
ushort *rawP = imRaw.ptr<ushort>(0);
for (int i = 0; i < nPixels; i++)
{
    rawP[i] = (ushort) rawProcess.imgdata.rawdata.raw_image[i + first_visible_pixel];
}
cv::Mat demosaiced16;
cvtColor(imRaw, demosaiced16, CV_BayerGB2RGB);
Above, imRaw.ptr is the pointer to the data buffer of the cv::Mat object into which I copy my raw_image data.
The expected image (which I get after dcraw_process) is:
https://www.dropbox.com/s/unn695en6hpdr3j/dcraw_process.jpg?dl=0
And instead, I have this:
https://www.dropbox.com/s/c1f8s3fjgqy0tit/Failed_demosaic.jpg?dl=0
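For reference, a sketch of the same copy adjusted per the raw_pitch/margin explanation above (this adaptation is mine, not from the thread; width and height are the visible dimensions from imgdata.sizes, and memcpy comes from <cstring>):
int width = rawProcess.imgdata.sizes.width;          // visible width
int height = rawProcess.imgdata.sizes.height;        // visible height
int pitch = rawProcess.imgdata.sizes.raw_pitch / 2;  // raw_pitch is in bytes
cv::Mat imRaw(height, width, CV_16UC1);
for (int r = 0; r < height; r++)
{
    // copy one visible row, skipping the margins and honoring the pitch
    const ushort *src = rawProcess.imgdata.rawdata.raw_image
                        + (r + top_margin) * pitch + left_margin;
    memcpy(imRaw.ptr<ushort>(r), src, width * sizeof(ushort));
}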
I know these tags can be vendor-specific, but I wondered if you would take patches for decoding such information?
For example, to get access to the Canon data, parse_makernote() would need code around line 8686, something like:
else if (tag == 0x0098) // CropInfo
{
    unsigned short CropLeft = get2();
    unsigned short CropRight = get2();
    unsigned short CropTop = get2();
    unsigned short CropBottom = get2();
    fprintf(stderr, " CropInfo %u %u %u %u\n", CropLeft, CropRight, CropTop, CropBottom);
}
else if (tag == 0x009a) // AspectInfo
{
    unsigned int ratio = get4();
    unsigned int CroppedWidth = get4();
    unsigned int CroppedHeight = get4();
    unsigned int CropLeft = get4();
    unsigned int CropTop = get4();
    fprintf(stderr, " AspectInfo %u %u %u %u %u\n", ratio, CroppedWidth, CroppedHeight, CropLeft, CropTop);
}
Obviously these values would need storing somewhere useful rather than simply being printed out, but they could then be passed back to the real crop functionality.
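For instance (a hypothetical structure of my own, not an existing LibRaw one), the parsed values could be kept like this:
// Illustrative holder for the parsed Canon crop tags; the type and field
// names are assumptions, not part of LibRaw.
struct canon_crop_info
{
    unsigned short crop_left, crop_right, crop_top, crop_bottom; // tag 0x0098
    unsigned int aspect_ratio, cropped_width, cropped_height;    // tag 0x009a
    unsigned int aspect_left, aspect_top;
};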
Kevin
http://cpansearch.perl.org/src/EXIFTOOL/Image-ExifTool-9.90/html/TagName...
thanks
The first Google result for lensfun is http://lensfun.sourceforge.net/
Sorry to say, I am new to this field.
What is this lensfun?
No.
Use lensfun.
Sorry, I know nothing about creating Linux shared libraries.
Meanwhile, ./configure should work.
It's a huge makefile; any tips on what to change there?
Yes, you need to change the makefile, or use the ./configure stuff.
I've managed to create dynamically loaded libraries of LibRaw for Mac, Win32 and Win64 now. The only things that remain are dynamic libraries for Linux32 and Linux64. I would very much appreciate your help with this. When running the makefile I end up with .a, .la and .lib files only.
LibRaw (the C++ API) does not have an init() method, so I do not know what you are calling.