I am checking the LibRaw open_file() and unpack() return codes - what warning field are you referring to? I looked at the possible values for imgdata.process_warnings, but none of them suggests a "camera not supported" condition.
Looking at the errors that are potentially returned (apart from LIBRAW_FILE_UNSUPPORTED which might be returned for a Canon .CR3 file until you add that support), I still don't see anything to tell me a camera isn't supported.
Thanks
As for process_warnings: none of those conditions appears to tell me that a camera isn't supported. Or should I interpret any value in process_warnings as indicating that the camera isn't supported?
https://www.libraw.org/docs/API-datastruct.html#datastruct :
unsigned int process_warnings;
There is no such call.
Check open_file/unpack return code(s) and/or warning field.
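For illustration, a minimal sketch of the kind of check being described (standard LibRaw C++ calls; error handling and the rest of the pipeline omitted):

#include "libraw/libraw.h"

// Sketch: detect "camera/file not supported" from the open_file()/unpack()
// return codes rather than from process_warnings.
int load_raw(const char *path)
{
    LibRaw lr;
    int ret = lr.open_file(path);
    if (ret == LIBRAW_FILE_UNSUPPORTED)
        return ret;               // format/camera not supported by this build
    if (ret != LIBRAW_SUCCESS)
        return ret;               // other error, see libraw_strerror(ret)
    ret = lr.unpack();
    if (ret != LIBRAW_SUCCESS)
        return ret;
    // Non-fatal issues accumulate here as bit flags during processing:
    unsigned warnings = lr.imgdata.process_warnings;
    (void)warnings;
    return LIBRAW_SUCCESS;
}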
Black level data in LibRaw (and dcraw) is split into several pieces:
color.black - 'base' black level (common for all channels)
color.cblack[0..3] - per-channel black level
color.cblack[6...] - pattern black level, laid out as a cblack[4] x cblack[5] pattern.
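To illustrate how those pieces combine per pixel (a sketch only, not the actual LibRaw source; see scale_colors()/subtract_black() for the real handling):

// C = imgdata.color, ch = CFA channel index (0..3) of the pixel at (row, col)
unsigned pixel_black(const libraw_colordata_t &C, int row, int col, int ch)
{
    unsigned b = C.black + C.cblack[ch];   // base + per-channel part
    if (C.cblack[4] && C.cblack[5])        // cblack[4] x cblack[5] pattern present
        b += C.cblack[6 + (row % C.cblack[4]) * C.cblack[5] + (col % C.cblack[5])];
    return b;
}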
I did not understand your comments regarding white balance.
BTW, if you want to *only* replace the de-bayer step with your own code, but do not want to deal with the other raw processing details (black level/bias, white balance, scaling, color profiles, etc.),
then you may provide your own demosaic via callbacks.interpolate_bayer_cb.
Your callback will be called instead of the pre-defined demosaic(s) built into LibRaw, while the other processing steps will be called in sequence.
You may use LibRaw::lin_interpolate() code as an example.
Yes, I can read the code well enough to understand the basics of what scale_colors does to the de-Bayered 4-colour image array. My problem is understanding the code well enough to perform a "correct" transform on the raw_image array. As an example, it really is not clear what the gory details of the six-component cblack array processing are - IOW, why you don't/can't just subtract the basic black level, what precisely the algorithm being used is, and so on. I'm sure that Dave Coffin and you are real experts on image processing and fully understand the code and algorithms without needing lots of comments - we mere mortals struggle to grok such dense and relatively uncommented code.
Without that information I can only apply a trivial subtraction of the basic black level value from all the raw_image values and then scale linearly to 0-65535! Clearly I cannot do any white balance calculation on the raw image, as that requires that the data has already been converted to a 4-component RGB (does that mean it is LRGB?) and that I have access to the camera white balance data (unless doing "auto wb").
Regards
This is not about an add-on library, but about data.
To replicate in-camera rendering you need to use:
a) same color profile (as used by camera firmware)
b) same exposure (midtone) compensation
c) same contrast curve
AFAIK, such data is not published by vendors. Generally, it is possible to measure it using test shots (a separate measurement for each camera you want to support).
scale_colors():
1) subtracts black
2) applies white balance
3) scales values to the 0...65535 range
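Roughly, per pixel, that amounts to something like this (simplified sketch; the real scale_colors() also normalizes pre_mul, handles cropping, clipping, etc.):

// val = raw value, blk = black level for this pixel, ch = its CFA channel,
// C = imgdata.color
unsigned short scale_pixel(unsigned short val, unsigned blk,
                           const libraw_colordata_t &C, int ch)
{
    float v = float(val) - float(blk);              // 1) subtract black
    v *= C.pre_mul[ch];                             // 2) apply white balance multiplier
    v *= 65535.0f / float(C.maximum - C.black);     // 3) scale to 0...65535
    if (v < 0.0f) v = 0.0f;
    if (v > 65535.0f) v = 65535.0f;
    return (unsigned short)v;
}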
1) RawDigger uses LibRaw postprocessing code
2) RawDigger's default is no_auto_bright = 0
Can you recommend any add-on LibRaw library which can better replicate the out-of-camera JPEG?
Thank you! A very helpful couple of posts
BTW, I've already mentioned the copy_bayer() call in a previous reply: https://www.libraw.org/comment/5246#comment-5246
Document mode is not supported in our dcraw processing emulation (the dcraw_process() call) because it makes things too complex.
Our unprocessed_raw.cpp and 4channels.cpp samples provide unprocessed raw data output (for the bayer case), which is enough for most practical cases. They are also good starting points for anyone who wants only unprocessed raw data access and will do the full processing themselves.
Black subtraction: if the LibRaw docs covering imgdata.color.black/imgdata.color.cblack[] are not enough, you may use the LibRaw::subtract_black() source as an example. It operates on the imgdata.image[] array, assuming that the rawdata.raw_image[] values are already in place.
Also, LibRaw::raw2image_ex() does the raw_image => image population and black subtraction in a single step.
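In code, that path looks roughly like this (a sketch; the file name is just a placeholder):

LibRaw lr;
if (lr.open_file("photo.cr2") == LIBRAW_SUCCESS && lr.unpack() == LIBRAW_SUCCESS)
{
    lr.raw2image();      // populate imgdata.image[] from rawdata.raw_image[]
    lr.subtract_black(); // subtract black/cblack[] from imgdata.image[]
    // imgdata.image[] now holds black-subtracted, still unscaled values
}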
AFAICT using the -4 flag means both gamma coefficients are set to 1. So I *think* that means that gamma correction should make no changes. Is that correct?
Certainly -d and -D give quite different results - -d is quite a bit brighter than the -D version - would that fit with black subtraction processing? Where in the code would I find the black subtraction (so I can do that processing myself on the RAW data)?
Is it out of order to ask why the document mode code was removed?
Thanks again
AFAIK, -D means no processing at all, while with -d it looks like black subtraction is performed.
Also, both -D and -d create a gamma-corrected image (if no other flags are given).
AFAIK that's exactly what dcraw -d -4 does (i.e. no processing at all apart from unpacking the raw image, extracting the image from the frame, and writing the PGM file). Have I misunderstood this?
So I'm having a bit of difficulty understanding why I'm seeing a difference.
Cheers
Raw data pixels are in the raw_image array.
These pixels are unaltered (in particular, the black level/bias is not subtracted), unscaled, uncropped, and no other processing is done.
So, if you need *unaltered* pixels: here they are. If you need some type of processing (black subtraction, white balance, scaling, cropping), you need to do it in your own code.
I want to extract the raw image pixel data from the loaded file - I'm asking whether the code I posted (with a bug) will extract the 16-bit raw image data, without the frame, that is width*height. Right now I'm not seeing what I expect when I look at RawData.raw_image. What dcraw -t 0 -d -4 wrote to the PGM file, including the header, looks like this (binary data shown as hex):
P5
5202 3465
65535
04 fA 02 80 04 40
Correction to the bug
for (int row = 0; row < S.height; row++)
{
    // Write raw pixel data into our private bitmap format
    pFiller->Write(RAW(row + S.top_margin, S.left_margin), sizeof(ushort), S.width);
}
and I cannot find that data in the raw_image array at or near raw_image(row+S.top_margin, S.left_margin)
I still do not know what you want to achieve, so this may be correct, or may not be.
Does this code do what you want or not?
Thanks yet again,
Here's a shortened version of the code I'm proposing to use to process the data at raw_image:
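#define RAW(row,col) \
    Rawdata.raw_image[(row)*S.raw_width+(col)]

BOOL littleEndian = htons(0x55aa) != 0x55aa;
if (!m_bColorRAW)
{
    // This is a regular RAW file, so we should have the "raw" 16-bit greyscale
    // pixel array hung off RawData.raw_image.
    ZASSERT(NULL != RawData.raw_image);
    // stuff omitted
    // Convert raw data to big-endian
    if (littleEndian)
        _swab(
            (char*)(RawData.raw_image),
            (char*)(RawData.raw_image),
            S.raw_height*S.raw_pitch); // Use number of rows times row width in BYTES!!
    for (int row = 0; row < S.height; row++)
    {
        // Write raw pixel data into our private bitmap format
        pFiller->Write(RAW(raw, S.left_margin), sizeof(ushort), S.width);
    }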
Does that make sense? Is it correct?
Thanks
Data is in native byte order.
The image rectangle may not be centered in the frame; this is camera-specific, so one needs to use left_margin/top_margin. You may use LibRaw::copy_bayer() and its calling code (raw2image) as an example (neat bit of code).
Note: it is safer to use imgdata.sizes.raw_pitch (in bytes!) to access rows. Usually it is just raw_width multiplied by the pixel size, but there are some exceptions (e.g. if LibRaw is compiled with the RawSpeed library).
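For example, a row of the visible image area could be addressed like this (sketch; rd = imgdata.rawdata, S = imgdata.sizes):

// Step rows by raw_pitch (bytes), then skip the left margin; the visible
// rectangle is S.width pixels wide from here.
unsigned short *line = (unsigned short *)
    ((char *)rd.raw_image + (size_t)(row + S.top_margin) * S.raw_pitch)
    + S.left_margin;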
Got it - is the data in the ushort arrays in "network byte order" or do I need to do a byte-swap?
Thank you.
AFAICT, the image at (e.g.) raw_image is raw_width*raw_height.
How do I actually skip the image frame/border - do I need to offset by top_margin/left_margin? Do you have a neat bit of code to iterate through the rows and skip the margins (I don't want to reinvent the wheel) so I only extract the rectangle that is width*height and (I assume) centred in the frame?
Thanks again
LIBRAW_LIBRARY_BUILD is already defined in dcraw_common.cpp and dcraw_fileio.cpp
The thumbnail is generated by the camera, using the camera's own settings (color profile, contrast curve). There is no built-in way (in LibRaw) to replicate the out-of-camera JPEG.
Unaltered raw pixels are stored in imgdata.rawdata:
raw_image: for bayer/x-trans/monochrome (single component per pixel)
color3_image, color4_image - for 3/4 full-color images.
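So, after unpack(), a caller can simply check which of these pointers is non-NULL, e.g. (handle_bayer/handle_color3/handle_color4 are just placeholders for your own code):

const libraw_rawdata_t &rd = lr.imgdata.rawdata;
if (rd.raw_image)             // bayer / X-Trans / monochrome: one value per pixel
    handle_bayer(rd.raw_image);
else if (rd.color3_image)     // 3 components per pixel
    handle_color3(rd.color3_image);
else if (rd.color4_image)     // 4 components per pixel
    handle_color4(rd.color4_image);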