(Guessing by filename) unit4.obj is most likely generated from unit4.cpp, unit4.c, or something like that.
There are no files with the same or a similar name in LibRaw, so we're unable to help.
Thank you, I think that got me closer! I'm still a little puzzled about how the white balance coefficients work (they appear to be relative rather than absolute?), but I'll see if I can figure it out. Thanks again!
The black point is the same for all channels in Sony files (and, yes, it is 512 in the file you shared).
The problem is white balance. Here is your image in 'raw composite' view: https://www.dropbox.com/s/iowl9jkhgzhxqy0/screenshot%202020-04-22%2019.3... (not white balanced; the green channel(s) are strongest, as expected).
So, I see two possible ways:
1) If you go the 'raw data inversion' way: invert the white balance coefficients too (see the sketch after this list).
2) Generate a proper negative image in linear space (so no raw data inversion, just normal processing with linear gamma output), then invert it.
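A minimal sketch of option 1, assuming 'invert the coefficients' means taking reciprocals of the as-shot multipliers. imgdata.color.cam_mul and imgdata.params.user_mul are real LibRaw fields (user_mul overrides the WB multipliers when set); the green-normalization choice is my own:

```cpp
#include "libraw/libraw.h"

// Sketch: feed LibRaw the reciprocals of the camera WB multipliers,
// renormalized so green stays at 1.0.
void set_inverted_wb(LibRaw &proc)
{
  const float *cam = proc.imgdata.color.cam_mul; // as-shot multipliers
  float inv[4];
  for (int c = 0; c < 4; c++)
    inv[c] = (cam[c] > 0.f) ? 1.f / cam[c] : 0.f;
  const float g = (inv[1] > 0.f) ? inv[1] : 1.f; // normalize to green = 1
  for (int c = 0; c < 4; c++)
    proc.imgdata.params.user_mul[c] = inv[c] / g;
}
```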
Sure, no problem! Here's the file: http://ur.sine.com/temp/tether8387.dng
I'm wondering if each of the channels possibly has a different white and black point, and whether I need to determine the min and max value per channel? I'm really baffled.
Could you please provide a link to the RAW file too? It would be very interesting to play with it.
Hey Alex, thanks for the reply! I'm aware that I will have to adjust for the mask and for the low contrast in the negative. At this point I just want to get the inversion step right so that I can be a little more certain that my overall approach will work. I updated my code as you suggested, but I'm still getting an entirely magenta image. Here's my loop:
This is the original image: http://ur.sine.com/temp/original.png
And this is the output of that code: http://ur.sine.com/temp/output.png
1st:
raw_image[] values are 'as decoded from the RAW file', so the black level is not subtracted.
If you're making an 'in-house' application, where the only camera you plan to use is the A7R-4, the exact black level for this camera is 512.
So inverted_value = maximum - original_value + 512 (maximum also has the black level within it); see the sketch below.
If you plan to use multiple cameras, look into the src/preprocessing/subtract_black.cpp source for the black subtraction code and into the src/utils/utils_libraw.cpp:adjust_bl() function for additional details.
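A minimal sketch of that inversion over raw_image[], assuming the fixed black level of 512 mentioned above and taking the data maximum from imgdata.color.maximum (a real LibRaw field; loop bounds and names are illustrative):

```cpp
#include "libraw/libraw.h"

// Sketch: invert raw values in place, keeping the black level offset.
// inverted = maximum - original + black (maximum still includes black);
// assumes raw values stay within [0, maximum].
void invert_raw(LibRaw &proc)
{
  const unsigned short black = 512; // A7R-4, per the thread
  const unsigned short maximum = (unsigned short)proc.imgdata.color.maximum;
  const long npix = (long)proc.imgdata.sizes.raw_width *
                    proc.imgdata.sizes.raw_height;
  unsigned short *raw = proc.imgdata.rawdata.raw_image;
  for (long i = 0; i < npix; i++)
    raw[i] = (unsigned short)(maximum - raw[i] + black);
}
```

Call it after unpack() and before any further processing.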
2nd: If you're processing color negatives, inversion in the linear domain will not result in acceptable image color because of the color masking used in those negatives ( http://www.brianpritchard.com/why_colour_negative_is_orange.htm ).
Also, for both color and B&W negatives you'll need to adjust the contrast.
Thanks. I was able to find the function in curves.cpp. It's more complicated than I thought.
I'm guessing that gamma[0] is pwr and gamma[1] is ts. I don't know what mode is, but 1 seems to work for me.
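For reference, a sketch of the piecewise curve as I read it from curves.cpp, assuming mode 1 (forward gamma) and x normalized to [0, 1]; x0 and g4 stand for the breakpoint constants (g[3] and g[4]) that gamma_curve() solves for so the two segments join smoothly:

```cpp
#include <cmath>

// f(x) = ts * x                       for x <  x0  (linear toe)
// f(x) = (1 + g4) * pow(x, pwr) - g4  for x >= x0  (power segment)
// pwr = imgdata.params.gamm[0], ts = imgdata.params.gamm[1].
// E.g. pwr = 1/2.4, ts = 12.92 gives the sRGB curve
// (x0 ~= 0.0031308, g4 ~= 0.055).
double apply_gamma(double x, double pwr, double ts, double x0, double g4)
{
  return (x < x0) ? ts * x : (1.0 + g4) * std::pow(x, pwr) - g4;
}
```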
Look into the gamma_curve() code.
It works using your suggestion, thank you. Do you happen to know the exact mathematical formula f(x) for the gamma correction and how the following two parameters are used?
imgdata.params.gamm[0]
imgdata.params.gamm[1]
I reckon the first one is the power and the second one is the toe, but I have no clue how the latter is applied. Also, I suppose the variable x should range between 0 and 1. I may have to reuse the gamma correction in a separate process later down the processing pipeline. Thank you very much.
This is for a special case asked for by our users (I can't remember the specific details), not targeted at 'normal' use, so it is not documented in detail.
The documentation could be better, it's true :)
P.S. Yes, scale_colors() does range scaling and WB in a single step.
I guess what confused me was that scale_colors() also includes (or rather excludes, when disabled) white balance, which is usually important in exactly the advanced-interpolation use case you described... No biggie, because the advanced user can easily do the white balance before their custom interpolation; it was just unexpected/undocumented, so at least making it somewhat more explicit in the API docs would help avoid having to find out the hard way ;)
That's very useful. Thanks for the info.
image[][4] is used both for intermediate results and for the final result.
After dcraw_process() is called:
image[i][0..2] contains the final image, 16-bit and in linear space, for the i-th pixel;
image[i][3] contains garbage (unused intermediate results, etc.).
Also, image[] is not rotated (even if rotation is set via metadata or via user_flip).
dcraw_make_mem_image() prepares the final result: 3 components per pixel, gamma corrected (according to imgdata.params.gamm[]), 8 or 16 bit (according to params.output_bps).
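A minimal sketch of how the two results relate (the filename is a placeholder; dcraw_make_mem_image() allocates a separate buffer, it does not return imgdata.image):

```cpp
#include "libraw/libraw.h"

int main()
{
  LibRaw proc;
  if (proc.open_file("input.raw") != LIBRAW_SUCCESS) return 1;
  proc.unpack();
  proc.dcraw_process();

  // Intermediate/linear result: 4 ushorts per pixel, not rotated;
  // [0..2] hold the final linear 16-bit values, [3] is garbage.
  unsigned short (*linear)[4] = proc.imgdata.image;
  (void)linear;

  // Final result: separate allocation, 3 components per pixel,
  // gamma corrected, 8 or 16 bits per params.output_bps.
  int err = 0;
  libraw_processed_image_t *mem = proc.dcraw_make_mem_image(&err);
  if (mem) LibRaw::dcraw_clear_mem(mem); // caller owns the buffer
  return 0;
}
```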
Thanks again for the information. So, except for the difference in size, in one case the gamma correction and some other processing occur when the TIFF is created (with dcraw_process()), while dcraw_make_mem_image() really produces the final processed pixel data.
imgdata.image is a 4-component (per-pixel) array: https://www.libraw.org/docs/API-datastruct.html#libraw_data_t
dcraw_make_mem_image() creates a separate 3-component (and gamma-corrected) array: https://www.libraw.org/docs/API-datastruct.html#libraw_processed_image_t
(I also suggest reading the docs and the samples' source code.)
Alright, thank you. So there's no easy way except using an external library, then. Also, I wanted to ask: what exactly is the difference between the RawProcessor.imgdata.image pointer obtained after dcraw_process() and dcraw_make_mem_image()? Is a memory copy done, or is it the same pointer?
LibRaw's TIFF writer is very simplified.
Use LibRaw::dcraw_make_mem_image(): https://www.libraw.org/docs/API-CXX.html#dcraw_make_mem_image to create an in-memory RGB data array, then write it to your preferred file format (e.g. TIFF) using your preferred library (e.g. libtiff) with your preferred options.
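A minimal sketch of that path with libtiff, assuming a bitmap-type result with 3 colors (the error handling and output path are illustrative):

```cpp
#include <tiffio.h>
#include "libraw/libraw.h"

// Sketch: write a dcraw_make_mem_image() result as an RGB TIFF.
bool write_tiff(libraw_processed_image_t *img, const char *path)
{
  if (!img || img->type != LIBRAW_IMAGE_BITMAP || img->colors != 3)
    return false;
  TIFF *tif = TIFFOpen(path, "w");
  if (!tif) return false;
  TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, (uint32_t)img->width);
  TIFFSetField(tif, TIFFTAG_IMAGELENGTH, (uint32_t)img->height);
  TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, img->colors);
  TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, img->bits); // 8 or 16
  TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_RGB);
  TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
  const size_t stride = (size_t)img->width * img->colors * (img->bits / 8);
  for (uint32_t row = 0; row < img->height; row++)
    if (TIFFWriteScanline(tif, img->data + row * stride, row, 0) < 0)
    {
      TIFFClose(tif);
      return false;
    }
  TIFFClose(tif);
  return true;
}
```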
LibRaw just delivers decoded raw data to the processing application. Processing (e.g. demosaicing, white balancing, channel mixing, etc.; demosaicing is not needed for Foveon data) is performed in the calling application (e.g. Affinity).
LibRaw contains some postprocessing code (derived from dcraw), but it is not intended for use in any professional-level application; it is mostly a 'proof of concept'. We do not have any plans to change the LibRaw postprocessing code.
Your demarcation is essentially correct.
Thanks again, Iliah.
In the interim, and for the sake of progress, I will assume that the LibRaw library permits opening an X3F file from the indicated cameras and gives Affinity the option to request linear or log brightness levels for each RGB channel, including the ability not to apply any demosaicing, sharpening, or noise removal.
That would explain how assembling a 'monochrome' image is entirely up to the raw developer.
In the only descriptive observation I have been able to obtain, Affinity indicated that
"...following one of your previous forum posts, and according to `LibRAW`, your images are colour images." With no other elaboration, they have confirmed that they know exactly what the problem is.
I will ask Affinity to reach out again.
Much appreciated.
I see. Thank you very much.
IIQ S is not (publicly) documented and has not been reverse engineered (at least, there are no open-source decoders).
We do not expect IIQ S support in the foreseeable future.
Dear Sir:
Without clear communication from Affinity we can't help, I'm afraid. We simply don't understand the issue they are having.
Again, my apologies Iliah.
I'm just a user stuck with a long-outstanding issue, trying to be as supportive as possible to reach a resolution, even if that's a kluge I need to incorporate. I will do whatever it takes (short of changing the camera system, or having to use LightRoom). I haven't gotten anywhere in a frustratingly long time. Anything constructive you can provide will be hugely appreciated.
Thanks!
There are two options for (no) auto-scaling (see the sketch after this list):
1) params.no_auto_bright disables the ETTR(-like) automated brightness correction; the entire image is scaled by 65535/(metadata_derived_maximum - black) instead of 65535/(real_data_max_by_histogram - black).
2) params.no_auto_scale disables the entire scale_colors() call (for example, to get unmodified data in the image array).
The second case is for special use (e.g. someone may want to do their own interpolation via a callback and wants to see unchanged data at that step).
In the (normal) processing case, scaling is essential to get identically scaled data from all the different sensor bit depths.
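A minimal sketch of the two switches (both are real imgdata.params fields; the filename is a placeholder):

```cpp
#include "libraw/libraw.h"

int main()
{
  LibRaw proc;
  if (proc.open_file("input.raw") != LIBRAW_SUCCESS) return 1;
  proc.unpack();

  // 1) Scale to the metadata-derived maximum, skipping the
  //    histogram-based auto-brightening:
  proc.imgdata.params.no_auto_bright = 1;

  // 2) Or skip scale_colors() entirely (special use, e.g. to see
  //    unchanged data from an interpolation callback):
  // proc.imgdata.params.no_auto_scale = 1;

  proc.dcraw_process();
  return 0;
}
```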