imgdata.image appears temporarily within unpack() (before the (*load_raw)() call) if needed, then is hidden again.
If you need imgdata.image[] in your code, call raw2image() after unpack() to get it.
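For what it's worth, a minimal sketch of that call order (the file name, "proc", and the error handling are placeholders, not from this thread):

#include "libraw/libraw.h"

int main()
{
    LibRaw proc;
    if (proc.open_file("IMG_0001.CR2") != LIBRAW_SUCCESS) // hypothetical file name
        return 1;
    proc.unpack();      // raw data lands in proc.imgdata.rawdata (raw_image for Bayer)
    proc.raw2image();   // only now is proc.imgdata.image[][4] allocated and filled
    // ... work with proc.imgdata.image here ...
    proc.recycle();
    return 0;
}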
I'm confused again. Maybe I missed something else.
In the code samples that I pasted, my trouble was that they were only memory allocations of buffers, and I failed to see where the data from the .CR2 file were going. imgdata.image seemed to be the only place where non-zero data were passed, but from what you say that's wrong: imgdata.image is not populated with any raw or processed data at that point; it only gets populated by raw2image() or dcraw_process(). So the code I quoted isn't all there is to see about how the raw data get populated from a Canon CR2 file, right? Since the code I sent is only memory allocation, am I not missing the part where raw_image is filled with the actual raw data, rather than just being given a pointer to an allocated but unpopulated buffer? That's basically what I'm missing: where in the code is that buffer, whose pointer is assigned to raw_image, filled with the raw data from the CR2 file?
Just to clarify something else: can I assume I'm not going into the RawSpeed-related blocks when working with a Bayer image from a Canon CR2? This is to make sure I'm not missing anything in the pipeline. It was not clear to me what RawSpeed was, or whether having a Canon DSLR (5D Mark III) matters here.
Thanks (a lot!)
imgdata.image is populated in:
1) a raw2image() call (use it for compatibility if you need the 4-component image[] in your code)
2) dcraw_process(), which calls raw2image_ex(); that does the populating and the black-level subtraction in a single pass.
raw_alloc is just a pointer to the allocation (to be free()-d at the recycle() call).
raw_image, color3_image and color4_image are pointers to the pixel buffer (allocated by LibRaw or by RawSpeed). Only one of these pointers is non-zero for a given image, and that is the only way to know the exact image format: 1-component (Bayer or BW), 3-component (LinearDNG or Canon sRAW extracted by RawSpeed), or 4-component (3/4-color images extracted by LibRaw: LinearDNG, Canon sRAW).
The last piece of code (with imgdata.image set to 0) is how non-Bayer images extracted by LibRaw are handled: the LinearDNG, Canon sRAW and 4-shot Sinar unpackers work with imgdata.image[] (that code comes from dcraw), so after unpacking we need to assign the color4_image pointer (and raw_alloc, for correct release in recycle()) and clear the imgdata.image pointer.
Things are so complicated because so many RAW flavours exist :(
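If it helps, a minimal sketch of acting on that "only one pointer is non-zero" rule after unpack() (the function and variable names here are my own, not from LibRaw):

#include "libraw/libraw.h"
#include <cstdio>

// Assumes open_file() and unpack() have already succeeded on 'proc'.
void report_raw_format(LibRaw &proc)
{
    if (proc.imgdata.rawdata.raw_image)
        std::printf("1 component per pixel (Bayer or BW)\n");
    else if (proc.imgdata.rawdata.color3_image)
        std::printf("3 components per pixel (e.g. LinearDNG / Canon sRAW via RawSpeed)\n");
    else if (proc.imgdata.rawdata.color4_image)
        std::printf("4 components per pixel (e.g. LinearDNG / Canon sRAW unpacked by LibRaw)\n");
}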
Thanks Lexa, I understand a bit better now that I look at the unpack() source code.
I'm still missing maybe one last piece of the puzzle to understand the basic pipeline in my particular case:
a Canon EOS 5D Mark III, and a good old-fashioned EOS 350D (although what follows applies to more kinds of Bayer sensors).
In unpack(), I see (assuming I do not have/use RawSpeed):
else if(imgdata.idata.filters || P1.colors == 1) // Bayer image or single color -> decode to raw_image
{
imgdata.rawdata.raw_alloc = malloc(rwidth*(rheight+8)*sizeof(imgdata.rawdata.raw_image[0]));
imgdata.rawdata.raw_image = (ushort*) imgdata.rawdata.raw_alloc;
So you seem to populate raw_alloc (and thus raw_image?) with imgdata.image (correct me if I'm wrong). But I still have a hard time tracking down where imgdata.image was populated in the first place.
Looking into libraw_cxx.cpp, I don't see much either. It would be nice if you could point me to where imgdata.image gets the raw data from the .CR2 file. I see hasselblad_full_load_raw() that seems to do something with it, but I'm not sure that's what I'm looking for. Maybe it happens within open_file() and the subsequent "stream" functions?
Thank you for your help
Raphael
Sorry, docs slightly outdated in this particular place, to be fixed ASAP.
unpack() in current LibRaw (0.16+) stores the raw data in imgdata.rawdata.raw_image (or color3_image, or color4_image). For raw_image (Bayer) this is one component per pixel.
raw2image() populates imgdata.rawdata into the imgdata.image[][4] array (4 components per pixel, but only one of them filled with a value for Bayer images).
image[][4] is then used for all postprocessing by dcraw_process().
This is because of a modification made in version 0.16. Prior to it, unpack() worked with imgdata.image[] directly, so:
1) multiple processing runs (dcraw_process()) of the same raw data with different settings were impossible;
2) if an application uses only the raw data and does not need dcraw_process() (i.e., does its own processing), image[] is a 4x waste of memory.
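A rough sketch of the two resulting memory layouts as I understand them (the margin handling and exact indexing are my assumptions; please check them against the docs for your LibRaw version):

#include "libraw/libraw.h"

// Assumes unpack() was called and this is a Bayer raw (raw_image non-NULL).
// row/col are coordinates inside the visible image area.
unsigned short raw_value(LibRaw &proc, int row, int col)
{
    libraw_image_sizes_t &S = proc.imgdata.sizes;
    // after unpack(): one value per photosite, full sensor frame incl. masked borders
    return proc.imgdata.rawdata.raw_image[(row + S.top_margin) * S.raw_width
                                          + (col + S.left_margin)];
}

// Assumes raw2image() was also called.
unsigned short image_value(LibRaw &proc, int row, int col)
{
    libraw_image_sizes_t &S = proc.imgdata.sizes;
    // after raw2image(): 4 values per pixel, only component COLOR(row,col) is filled
    return proc.imgdata.image[row * S.iwidth + col][proc.COLOR(row, col)];
}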
Sorry...
http://www.libraw.org/docs/API-datastruct-eng.html#libraw_image_sizes_t
flip field.
Please pay attention to the documentation we supply. A lot of effort went into it, and I do not see any reason to continue selective quoting from it.
That explains all... ;)
I use the image[] array directly: is there a way to know, using LibRaw, if I should rotate it?
Checked with a raw file from the A7R II review: http://www.photographyblog.com/reviews/sony_a7r_ii_review/sample_images/ (raw #95).
dcraw_emu (without any parameters) rotates the image to vertical, at the correct angle.
Update:
LibRaw/dcraw rotates the image on output, not the image[] array.
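So an application that uses image[] directly has to apply the rotation itself, based on imgdata.sizes.flip. A sketch of reading it (value meanings as I read them in the libraw_image_sizes_t docs; 'proc' is my own variable name):

#include "libraw/libraw.h"

// Assumes open_file() has already been called on 'proc'.
const char *needed_rotation(LibRaw &proc)
{
    switch (proc.imgdata.sizes.flip) {
        case 3:  return "rotate 180 degrees";
        case 5:  return "rotate 90 degrees counter-clockwise";
        case 6:  return "rotate 90 degrees clockwise";
        default: return "no rotation needed";
    }
}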
Thanks... It's very nice to have your help... :)
user_mul is used only in dcraw_process().
For other parameters that may affect unpack(), please read the API notes: http://www.libraw.org/docs/API-notes-eng.html#imgdata_params
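So, if I understand correctly, something like this is enough, with no new unpack() needed between changes ('proc' and the multiplier values are just illustrative):

#include "libraw/libraw.h"

// Assumes open_file() and unpack() have already been done on 'proc'.
void process_with_manual_wb(LibRaw &proc)
{
    // user_mul is read only by dcraw_process(), so unpack() does not need to be repeated
    proc.imgdata.params.user_mul[0] = 2.0f; // R   (made-up values)
    proc.imgdata.params.user_mul[1] = 1.0f; // G
    proc.imgdata.params.user_mul[2] = 1.5f; // B
    proc.imgdata.params.user_mul[3] = 1.0f; // G2
    proc.dcraw_process();
}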
Thank you! I had missed that one!
Is user_mul being used in unpack() or just in dcraw_process()?
In other words, do I have to:
change user_mul
unpack()
dcraw_process()
or is it enough to:
change user_mul
dcraw_process()?
Sylvain.
use imgdata.params.user_mul if you need to set manual white balance.
Well, I've done tests on this, and they didn't work.
Here is my workflow:
open_file
set pre_mul
unpack
dcraw_process(1)
change pre_mul
dcraw_process(2)
The change to pre_mul gets ignored.
I've tried adding unpack() just before dcraw_process(2), but the pre_mul change is still ignored.
That's no big deal for me, I simply recycle() and (re)open_file(). I just wanted to keep you posted... ;)
Sylvain.
Ok. I was confused with raw2image().
So after unpack(), an unaltered version of the unpacked data is kept somewhere other than imgdata.image.... Thank you... :)
unpack() does not fill/allocate the image array.
I thought that unpack() was filling imgdata.image with non-demosaiced data (only one component per pixel on Bayer layouts), and that dcraw_process() was reading those values and filling in the missing components in the same buffer using its demosaicing algorithm...
Is dcraw_process() reading its non-demosaiced data from a buffer other than imgdata.image?
You can call dcraw_process() multiple times, no additional data move needed.
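For example (an untested sketch; the particular settings toggled here are chosen just for illustration):

#include "libraw/libraw.h"

// Assumes open_file() and unpack() have already been done on 'proc'.
void process_twice(LibRaw &proc)
{
    // first pass: camera white balance
    proc.imgdata.params.use_camera_wb = 1;
    proc.dcraw_process();
    libraw_processed_image_t *first = proc.dcraw_make_mem_image();

    // second pass: same unpacked raw data, different settings
    proc.imgdata.params.use_camera_wb = 0;
    proc.imgdata.params.no_auto_bright = 1;
    proc.dcraw_process();
    libraw_processed_image_t *second = proc.dcraw_make_mem_image();

    // ... use both results ...
    LibRaw::dcraw_clear_mem(first);
    LibRaw::dcraw_clear_mem(second);
}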
So that's how dcraw_process() works in that case: clipping everything to 40000, and stretching from 40000 to 65535?
To avoid colored highlights, you need to clip all channels at the same level (after WB is applied).
So, if red clips at 60000 and green at 40000 (as in your example values), you need to clip all three channels at the green clip level.
Without that you'll get colored (magenta, in this case) highlights (and we *frequently* see it in video, even in high-end productions such as Formula 1 broadcasts).
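A toy illustration of that rule with the numbers above (not LibRaw code, just the arithmetic; the blue clip level is made up):

#include <algorithm>

// Per-channel saturation after white balance (60000/40000 from the example, 55000 invented).
const unsigned int R_CLIP = 60000, G_CLIP = 40000, B_CLIP = 55000;
const unsigned int COMMON_CLIP = std::min(std::min(R_CLIP, G_CLIP), B_CLIP); // 40000

unsigned short clip_channel(unsigned int v_after_wb)
{
    // every channel is clipped at the same (lowest) level, so blown
    // highlights stay neutral instead of turning magenta
    return (unsigned short)std::min(v_after_wb, COMMON_CLIP);
}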
Thank you! :)
http://www.libraw.org/docs/API-datastruct-eng.html#libraw_output_params_t
look for the user_qual parameter in the settings.
Default is 3 (AHD).
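So selecting the demosaic is just a matter of setting user_qual before dcraw_process(); as far as I know the numbering follows dcraw's -q option ('proc' is my own variable name):

#include "libraw/libraw.h"

// Assumes open_file() and unpack() have already been done on 'proc'.
void demosaic_with_dcb(LibRaw &proc)
{
    // 0 = linear, 1 = VNG, 2 = PPG, 3 = AHD (default), 4 = DCB
    proc.imgdata.params.user_qual = 4;
    proc.dcraw_process();
}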
I've seen in this comment that half_size is lossless for Bayer cameras.
Does that mean that the default demosaicing algorithm (LGPL-licensed) in LibRaw is as simple as what half_size does (grouping pixels in groups of 4)?
Or does dcraw_process() use a more sophisticated algorithm? If so, which one? DCB?
Thanks... :)
Sylvain.
Thanks Alex!
imgdata.idata.cdesc contains the color description string (RGBG or CMYG).
Use dcraw-like code to output it, substituting fcol() with COLOR().
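A small sketch of that approach ('proc' is my own variable name; for a typical Canon Bayer sensor this prints something like RGGB):

#include "libraw/libraw.h"
#include <cstdio>

// Assumes open_file() has already been called on 'proc'.
void print_cfa_pattern(LibRaw &proc)
{
    // COLOR(row, col) returns the index into cdesc for that photosite;
    // the top-left 2x2 block describes the whole Bayer pattern
    for (int row = 0; row < 2; row++)
        for (int col = 0; col < 2; col++)
            std::putchar(proc.imgdata.idata.cdesc[proc.COLOR(row, col)]);
    std::putchar('\n');
}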