As I wrote, for my pictures I put either:
- a copy of all 4 cam_mul values in user_mul, or
- a copy of all 4 cam_mul values in pre_mul.
I found some LibRaw code (scale_colors) where either user_mul or cam_mul is copied to pre_mul, which seems to confirm what I thought...
I just cannot understand why my pictures are different... I'll have to re-check...
What values do you use in user_mul?
rgb_cam is a [3][4] matrix, while pre_mul is a set of [4] WB multipliers.
Am I right to think that rgb_cam is filled with the values of pre_mul?
Thanks,
Sylvain.
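For reference, here is a minimal sketch of the approach described above: copying the camera "as shot" multipliers (cam_mul) into user_mul so that dcraw_process() white-balances with exactly those values. The function name is made up for illustration; only the LibRaw fields and calls are from the library.

#include <libraw/libraw.h>

// proc must already have open_file() and unpack() called on it.
// Copy the camera "as shot" multipliers into user_mul; dcraw_process()
// will then use them for white balance.
void use_camera_wb_via_user_mul(LibRaw &proc)
{
    for (int c = 0; c < 4; c++)
        proc.imgdata.params.user_mul[c] = proc.imgdata.color.cam_mul[c];
    proc.dcraw_process();
}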
Bayer images contain only one color channel per pixel (each pixel is red, green, or blue).
To get a full-color image from raw Bayer data one needs to:
1) decode the Bayer (one component per pixel) data
2) do demosaicing (de-Bayer), i.e.:
- apply white balance
- interpolate the missing color values
- convert to the output color space
- do gamma correction.
In LibRaw this is done by this call sequence:
open_file(); // read image metadata
unpack(); // decode bayer data
dcraw_process(); // white balance, color interpolation, color space conversion
dcraw_make_mem_image(); // gamma correction, image rotation, 3-component RGB bitmap creation
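A minimal, hedged sketch of that call sequence with basic error checking; the file name is a placeholder:

#include <libraw/libraw.h>
#include <cstdio>

int main()
{
    LibRaw proc;
    if (proc.open_file("photo.NEF") != LIBRAW_SUCCESS) return 1;   // read metadata
    if (proc.unpack() != LIBRAW_SUCCESS) return 1;                 // decode bayer data
    if (proc.dcraw_process() != LIBRAW_SUCCESS) return 1;          // WB, interpolation, color conversion

    int err = 0;
    libraw_processed_image_t *img = proc.dcraw_make_mem_image(&err); // gamma, rotation, RGB bitmap
    if (!img) { std::printf("make_mem_image failed: %d\n", err); return 1; }

    std::printf("%d x %d, %d bits, %d colors\n", img->width, img->height, img->bits, img->colors);
    LibRaw::dcraw_clear_mem(img);   // free the bitmap
    return 0;
}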
Thanks,
I finally got some raw data, but it wasn't what I expected. It seems that I have a monochrome picture: only raw_image is filled. How can I access the raw data without any processing?
Because I've always worked with PPM and PGM pictures, for this project I need to use my .NEF files in order to do some deep image processing on them with OpenCV. I'd like to get from LibRaw a matrix (Height x Width x Channels (here 3)) to put into OpenCV and start processing the images.
Thanks again!
Bdaniel
Alright Alex, thank you for these details, they are very helpful.
Quote from libraw_cxx:
if (O.ca_correc >0 ) {cablue=O.cablue; cared=O.cared; CA_correct_RT(cablue, cared);}
if (O.cfaline >0 ) {linenoise=O.linenoise; cfa_linedn(linenoise);}
if (O.cfa_clean >0 ) {lclean=O.lclean; cclean=O.cclean; cfa_impulse_gauss(lclean,cclean);}
CA_correct_RT() - from the GPL3 demosaic pack, code backported from RawTherapee, a chromatic aberration corrector.
cfa_linedn() - some kind of de-banding by high-frequency filtering, again from Emil Martinec/RawTherapee/GPL3.
cfa_impulse_gauss() - out-of-range pixel cleaning, same source/copyright.
All three were contributed by RawTherapee in 2011.
The LibRaw contribution policy is very simple:
1) we accept *all* contributed code
2) the default settings are 'not used'
3) no extensive testing, just a check that the output image is not completely damaged
The same is true for dcraw_process() itself.
Our "mission" is to decode RAW data and metadata. We're happy if the user's (developer's) interaction with LibRaw ends right after the unpack()/unpack_thumb() calls and all postprocessing is done by the calling application. The standard postprocessing is very similar to dcraw (as dcraw_process() implies); it is neither fast nor high quality.
Ok, that addresses wf_debanding(), thanks.
What about the cfaline / linenoise parameters, 1st block, with:
int cfaline; float linenoise;
Line noise (banding) reduction.
positive value turns this feature on (default: off).
linenoise - amount of reduction. Useful range is 0.001 to 0.02. Default value is 0.0
It's apparently used for the same purpose. Does that come directly from dcraw? Have you not played with it either? Any idea whether it's meant to affect unpack(), or raw2image()/dcraw_process()/dcraw_make_mem_image()?
I'll try to reverse-engineer this a bit, but it would be great to have any additional information on what your source code does with this, if you have it at hand. The code is quite... dense at that level!
wf_debanding() was contributed to LibRaw by one of our users.
I've never used it in real processing, just several experiments several years ago. So try playing with the parameters yourself.
Hi there,
Made some progress thanks to your explanations. I can play with the color channels and display the results of some personal post-processing. Very nice.
I wanted to ask you for some more details on what "banding / debanding" does. In your documentation, I read this for the structure libraw_rawdata_t (holds unpacked RAW data):
int cfaline; float linenoise;
Line noise (banding) reduction.
positive value turns this feature on (default: off).
linenoise - amount of reduction. Useful range is 0.001 to 0.02. Default value is 0.0 i.e. not clean anything.
(...)
int wf_debanding; float wf_deband_treshold[4];
wf_debanding: 1 turns on banding suppression (slow!), 0 turns it off.
wf_deband_treshold[] - per channel debanding thresholds.
I'm reading some general material about banding, and I understand the noise pattern that is targeted. Could you explain how the first parameters (1st block) and the second ones (2nd block) each affect the image? They both deal with "banding", so it is unclear what each does.
In addition, do they affect the raw image right after unpack(), or does one, or both, affect only the post-processed image in imgdata.image (after either raw2image() or dcraw_process())?
Thanks
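For what it's worth, a hedged sketch of how one might enable both reducers discussed above. The field names come straight from the documentation quoted in this thread; placing them in imgdata.params, and the threshold value used, are assumptions for an older LibRaw build that still ships these parameters (they were tied to the GPL demosaic packs and were removed later):

#include <libraw/libraw.h>

// proc must already have open_file() and unpack() called on it.
// Parameter values below are only illustrative, not recommendations.
void enable_debanding(LibRaw &proc)
{
    proc.imgdata.params.cfaline   = 1;       // turn on line-noise (banding) reduction
    proc.imgdata.params.linenoise = 0.01f;   // amount; documented useful range 0.001 .. 0.02
    proc.imgdata.params.wf_debanding = 1;    // turn on wavelet banding suppression (slow!)
    for (int c = 0; c < 4; c++)
        proc.imgdata.params.wf_deband_treshold[c] = 0.5f;  // per-channel thresholds (made-up value)
    proc.dcraw_process();
}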
Do you use a Bayer-matrix camera to test with?
If not, rawdata.raw_image will be NULL, while rawdata.color3_image or rawdata.color4_image will not be.
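A small sketch of that check right after unpack(); the function name is made up for illustration:

#include <libraw/libraw.h>
#include <cstdio>

// proc must already have open_file() and unpack() called on it.
void report_raw_layout(LibRaw &proc)
{
    if (proc.imgdata.rawdata.raw_image)
        std::printf("Bayer data: one component per pixel, use COLOR(row,col)\n");
    else if (proc.imgdata.rawdata.color3_image)
        std::printf("full-color 3-component raw data\n");
    else if (proc.imgdata.rawdata.color4_image)
        std::printf("full-color 4-component raw data\n");
}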
All clear! :-)
I will start implementing LibRaw functions in my project (still at a very early, poorly documented, and modest stage).
You can currently see it as "QtFits" on GitHub: https://github.com/raphaelattie/QtFits.git
It only deals with FITS files so far; the name of the future app will of course change after I implement the LibRaw-dependent classes, thanks to which I will no longer handle only FITS files.
Thanks
dcraw_make_mem_image() just copies values from image[] to a separate memory array, applying:
- the gamma curve
- 16-to-8-bit conversion (if requested; this is the default)
- rotation.
It is that simple :)
UPD: so, yes, use it after dcraw_process()
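For reference, a hedged sketch of reading one pixel out of the bitmap returned by dcraw_make_mem_image(), assuming the default 8-bit output (for 16-bit output the data would be read as unsigned short); the helper name is made up:

#include <libraw/libraw.h>

// img comes from proc.dcraw_make_mem_image(); row/col must be within img->height/img->width.
unsigned char sample_pixel(const libraw_processed_image_t *img, int row, int col, int channel)
{
    // LIBRAW_IMAGE_BITMAP data is stored line by line, img->colors components per pixel.
    return img->data[(row * img->width + col) * img->colors + channel];
}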
That's good.
Does dcraw_make_mem_image() also perform:
- white balance
- then Bayer interpolation
- and other possible postprocessing such as denoise or highlight recovery
as dcraw_process() does?
Just to be clear:
Do I invoke dcraw_make_mem_image() INSTEAD OF dcraw_process(), or AFTER dcraw_process()?
I think I need to describe the processing stages in LibRaw (simplified case, Bayer image):
1) open_file() - reads metadata (EXIF and makernotes)
2) unpack() - decodes the file contents into imgdata.rawdata.raw_image.
A COLOR() call is useful after that, to know what color the pixel at (row, col) has.
3) dcraw_process():
- does raw2image() internally: allocates imgdata.image[] and populates it as
imgdata.image[row*width+col][COLOR(row,col)] = rawdata.raw_image[(row+top)*raw_width+col+left]
- does white balance
- then Bayer interpolation
- and other possible postprocessing such as denoise or highlight recovery
- then output color conversion and data scaling
After that, image[row*width+col] has components [0..2] filled with RGB values and something in [3].
4) dcraw_make_mem_image() may be used to create a 3-component bitmap (with gamma correction), in 8 or 16 bits per component, to be written to TIFF/JPEG or displayed on screen.
That's all, it is that simple :)
You may repeat steps 3 and 4 with different imgdata.params settings to get different renderings (see the sketch below).
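A hedged sketch of repeating steps 3 and 4 with different settings; the particular parameters and values are only examples, and the function name is made up:

#include <libraw/libraw.h>

// proc must already have open_file() and unpack() called on it.
void render_twice(LibRaw &proc)
{
    // First rendering: camera white balance, default 8-bit output.
    proc.imgdata.params.use_camera_wb = 1;
    proc.dcraw_process();
    libraw_processed_image_t *first = proc.dcraw_make_mem_image();

    // Second rendering from the same raw data: auto WB, 16-bit output.
    proc.imgdata.params.use_camera_wb = 0;
    proc.imgdata.params.use_auto_wb = 1;
    proc.imgdata.params.output_bps = 16;
    proc.dcraw_process();
    libraw_processed_image_t *second = proc.dcraw_make_mem_image();

    // ... use the two bitmaps ...
    LibRaw::dcraw_clear_mem(first);
    LibRaw::dcraw_clear_mem(second);
}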
If the image has been dcraw_process()ed (so the Bayer data is interpolated), you probably do not need COLOR():
image[row*iwidth+col][0..3] will contain Red, Green, Blue in [0..2] and some garbage (non-interpolated G2 or zero) in [3].
See below instead, reply fields got too narrow...
Can you confirm the following code for the usage of COLOR()?
In the image after dcraw_process(), to get the color of the pixel at, e.g., (10, 3):
int row = 3;
int col = 10;
int iwidth = rawProcess.imgdata.sizes.iwidth;
int color_value = rawProcess.imgdata.image[iwidth*row + col][COLOR(row, col)];
The row and col variables in image[iwidth*row + col] are the same as those expected by COLOR(row, col), is that correct?
Excellent, useful information! Thanks! I will soon post, in the other forum topic, the GitHub link for the software (open source, of course).
Raphael
To extract only one channel, there are several ways:
1) Extract it from raw_image: find the coordinate of the first green component (using COLOR()) in the (0,0)-(1,1) square, then go from pixel to pixel with a +2 increment in both directions.
2) Do raw2image() with params.half_size set.
This will create a half-sized image[] array in which all 4 components are non-zero (because each Bayer 2x2 square goes into one image[] pixel).
Then use the [1] plane of image[].
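A hedged sketch of option 1, pulling one green plane out of rawdata.raw_image into a caller-provided buffer; the function name is made up, error handling is omitted, and sizing the output buffer (a quarter of the visible area) is left to the caller:

#include <libraw/libraw.h>

// proc must already have open_file() and unpack() called on it (Bayer camera assumed).
void extract_green_plane(LibRaw &proc, unsigned short *out)
{
    const libraw_rawdata_t &rd = proc.imgdata.rawdata;
    int top       = proc.imgdata.sizes.top_margin;
    int left      = proc.imgdata.sizes.left_margin;
    int raw_width = proc.imgdata.sizes.raw_width;

    // Find a green pixel inside the (0,0)-(1,1) square.
    int r0 = 0, c0 = 0;
    for (int r = 0; r < 2; r++)
        for (int c = 0; c < 2; c++)
            if (proc.COLOR(r, c) == 1) { r0 = r; c0 = c; }

    // Step by 2 in both directions: every second pixel on every second row is this green.
    for (int row = r0; row < proc.imgdata.sizes.height; row += 2)
        for (int col = c0; col < proc.imgdata.sizes.width; col += 2)
            *out++ = rd.raw_image[(row + top) * raw_width + (col + left)];
}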
OK, I see dcraw_make_mem_image() in the documentation; that's good, I will use it.
I might need to do co-alignment of a series of images using just, say, their green component.
So I need to extract a bitmap of just one of the three RGB components of the demosaiced image; which function do you recommend using?
This is a demosaiced, white-balanced, brightness-adjusted, not rotated (!) image with linear gamma.
If you wish to get an RGB bitmap (8 or 16 bit) with gamma applied and without the extra (4th) component, use the dcraw_make_mem_image() call.
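If only the green component of the demosaiced result is needed (e.g. for co-alignment), here is a hedged sketch working directly on image[] after dcraw_process(), before any gamma or rotation; the function name is made up and the output buffer must hold iwidth*iheight values:

#include <libraw/libraw.h>

// proc must already have dcraw_process() called on it.
// Copies the green channel (component [1]) of the demosaiced image into out.
void copy_green_channel(LibRaw &proc, unsigned short *out)
{
    int iwidth  = proc.imgdata.sizes.iwidth;
    int iheight = proc.imgdata.sizes.iheight;
    for (int i = 0; i < iwidth * iheight; i++)
        out[i] = proc.imgdata.image[i][1];   // [0]=R, [1]=G, [2]=B, [3]=garbage/G2
}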
After dcraw_process(), image[][] contains demosaiced data? Only 3-value RGB and not 4-value RGBG, is that correct?
After dcraw_process(), if I do:
color_value = image[pixel_number][COLOR(row,col)], do I get... an interpolated color?
Thanks
color_value = image[pixel_number][COLOR(row,col)], indeed, but it is not intended to be used this way.
If one uses dcraw_process(), one gets image[] with interpolated colors after this call.
If one does one's own processing, one hardly needs the 4-component image[] array.
raw2image() is a *compatibility layer* for programs created to work with LibRaw pre-0.15 (separate raw_image and image[] were introduced in version 0.15).