1) LibRaw uses the Olympus SensorCalibration tag (first value) as linear_max. It looks like this is not valid specifically for the XZ-1 (the XZ-2 and XZ-10 are OK). We'll issue a fix for that; thank you for the problem report.
2) For very dark areas (near-black, or just black with the lens cap on), it is expected that some values are below black + cblack. For completely black shots (lens cap on), if no processing was performed in camera, about half of the pixels will be below that threshold.
3) linear_max is the vendor-specified 'specular white' (if any).
maximum is either guessed from the format bit count (or bit count + linearization curve) or hardcoded, and may be adjusted toward data_maximum if that adjustment is not turned off via params.adjust_maximum_thr.
maximum may also be adjusted at the processing stage if LibRaw's exposure correction is used.
data_maximum is the real data maximum, calculated from the current frame's data.
There is no universal answer to 'which maximum should I use'; it depends very much on the application's targets.
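Putting the XZ-1 observation and the answer above together in code: a minimal sketch (the helper name and the fallback policy are my own assumptions, not LibRaw API) that treats linear_max as the preferred clipping point only when it is sane, i.e. above the black level, and falls back to maximum otherwise.

```cpp
// Hypothetical helper: pick a usable white point.
// linear_max is the vendor-specified specular white (may be bogus, as on
// the XZ-1 where it came out below black); maximum is LibRaw's
// format-derived or hardcoded maximum.
inline unsigned usable_white(unsigned linear_max, unsigned maximum,
                             unsigned black)
{
    if (linear_max > black)   // sane vendor value
        return linear_max;
    return maximum;           // fall back to the format-derived maximum
}
```

With the XZ-1 values from the report (linear_max 10, black 67), this falls back to maximum; on cameras with a valid tag, the vendor value wins.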
Alex,
I noticed a few unexpected values for the black, maximum and linear_max fields when I was working with a collection of files in my repo.
1. For some files read from an Olympus XZ-1, linear_max values were lower than black values. linear_max reported values of {10, 10, 10, 10} but black was reported as {67, 67, 67, 67}.
2. For many files, when I do:
rp.open_file(fileName);
rp.unpack();
// memcpy the CFA data from rp.imgdata.rawdata.raw_image into my buffer
I expected the minimum values of the data in my buffer to be no less than black + cblack, but I do see some entries that don't satisfy this. Does LibRaw do any additional processing on the CFA data at unpack()?
Also, before I perform my processing, I am trying to scale the CFA data into the full uint16 range, so I need accurate minimum and maximum values to perform this scaling. I am not sure whether I should use maximum, data_maximum, or linear_max as the max value.
Would appreciate your help.
Regards,
Dinesh
Hi Alex,
Thank you for answering my question in detail. I know similar questions have popped up many times in the past; I appreciate your patience.
I've reorganised the sequence of steps 2-4 and I will check LibRaw::cam_xyz_coeff() several more times : )
Thanks again!
For step 5: look into the LibRaw::cam_xyz_coeff() source; it takes the Adobe ColorMatrix1/2 as cam_xyz and splits it into a daylight color balance (pre_mul) and a camera-to-sRGB matrix (_rgb_cam).
For steps 2-4: scaling (usually) will not result in values above the clipping point (whitepoint) if the whitepoint is scaled too. But WB may produce values that are too large and need to be clipped (the so-called 'pink clouds' problem), so either apply WB first, or do WB + scaling + clipping in one step.
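The 'WB + scaling + clipping in one step' advice can be sketched like this (a hypothetical per-sample helper of my own, with assumed inputs; not LibRaw code):

```cpp
#include <algorithm>
#include <cstdint>

// Sketch: apply white balance, scale to [0, 65535], and clip in one pass.
// 'v' is a black-subtracted sample, 'wb' the channel's WB multiplier
// (normalized so the smallest multiplier is 1.0), 'white' the
// black-subtracted whitepoint. Clipping here is what avoids the
// 'pink clouds' artifact: highlights where a WB-boosted channel
// overshoots the clipping point.
inline uint16_t wb_scale_clip(unsigned v, double wb, unsigned white)
{
    double scaled = (double(v) * wb / double(white)) * 65535.0;
    return uint16_t(std::min(scaled, 65535.0) + 0.5);
}
```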
Could you tell me whether there are detailed instructions on what exactly needs to be done to compile the project for Windows on my own? I mean something that states exactly what to install and what to do, because my own knowledge is not enough to bring it all together.
In LibRaw 0.20 it is listed as
ILCE-7RM2 (A7R II)
Dear Mr. Iliah Borg,
Thank you for your reply.
I thought about it for several months, but I am not satisfied with your solution.
If that were acceptable, I would simply open the raw file in Adobe Photoshop.
Let me state clearly what I want to do: I want to preview Mos files in Windows Explorer.
Otherwise I can't verify that a raw file was saved safely.
With your suggested method, I would have to open every raw file after copying it.
That's hard. Please support a preview function for Mos files.
Best regards.
LibRaw pre-compiled binaries are built without 3rd-party external components (GoPro SDK, Adobe DNG SDK, JPEG library).
So yes, you need to compile the sources yourself with the GoPro SDK. See README.GoPro.txt for details.
I downloaded the v0.20 release for Windows.
When I try to convert a GoPro GPR file I get the message:
"Cannot open GPBK2696.GPR: Unsupported file format or not RAW file"
Is it possible to work with this file using the prebuilt binaries, or must I compile the sources myself?
Well, LibRaw has the best colour rendering of Merrill X3Fs outside of SPP; you appear to be the only team to correctly apply the colour matrices! Affinity Photo is fine for Full Spectrum and Infracolour, but it's useless for visible-light photos (white balance issues). I just wish that RawDigger had a DNG export option, rather than just TIFF.
Cheers.
P.S. With SPP offering so little control, I'm searching for a viable alternative. I did experiment with doing a manual conversion using ImageJ, but that was more of a learning exercise. Foveon RAW is amazingly straightforward (and the metadata not that cryptic); it's a shame that only Iridient Developer seems to have really tried.
> the development team freely spoke of not understanding the X3F metadata
As if metadata is well-understood for any vendor ;) Run
exiftool -U
and see: a lot of fields don't even have names, and even when the name is known, it doesn't mean we always know how to apply the field value.
X3F Tools is still integrated in LibRaw. To enable it, one needs to add the USE_X3FTOOLS define when building LibRaw from source. So nothing has changed for Affinity or other apps that use LibRaw internally.
What does the integration of X3F Tools mean for LibRaw's handling of Foveon RAW? The Kalpanika/X3F Tools project is incomplete, and abandoned, with some glaring issues − the development team freely spoke of not understanding the X3F metadata. What advantages are you seeing with this development?
I use Affinity Photo for Full Spectrum and Infracolour photography, as SIGMA's own SPP is too limited as regards its toolset. I vastly prefer the LibRaw conversion (e.g. RawDigger's RGB export) over X3F Tools', and I'd hate to lose that.
P.S. X3F Tools' use of NLM denoising is unsuited to Foveon, but I presume that's for the RAW converter to implement and isn't part of LibRaw's handling of X3Fs?
Thanks.
Perhaps I lost patience when you wouldn't explain in the first instance why you felt the change was necessary. If so, my apologies for letting my impatience show in the tone of my posts. I entirely accept that you consider the change to be right, but please, why won't you explain your logic to people like me who are left to pick up the pieces after you make an incompatible change? From my perspective the conversation went a bit like this:
Me: You broke it
You: Yes but you can comment out these lines to make it like it was
Me: Oh OK how about making this configurable by an ifdef
You: No that would break things
Me: Please explain what/why
You: Silence.
"Since you allow yourself to dictate what we should do and how we should do it, we are forced to reduce our answers to the bare minimum."
I wasn't dictating what you should do, by and large; I was mostly making suggestions, such as using different CFAs for the user area and the full image. But rather than engaging in a dialogue, you provided bare-minimum answers, which just might be the reason that I became frustrated and made one or two somewhat snarky remarks.
TBH, if you had explained the whys and wherefores in the first instance, I might well have gone away happy. Now, not so much. I just don't get why you folks won't explain stuff.
> I have throughout this exchange been polite
But failed:
> You should revert this change and do...
Since you allow yourself to dictate what we should do and how we should do it, we are forced to reduce our answers to the bare minimum.
So to answer general questions like
> PS you still haven't explained to me why the previous behaviour needed to be changed.
(repeated ad nauseam) does not make sense: the change has been made, we consider it to be right, and we do not see it as necessary to offer any explanations or excuses, especially when the tone you've suggested for the dialogue is hardly acceptable.
You are hardly in a position to demand anything, and especially anything we strongly disagree with, while using, free of charge, the results of our labor and expertise.
I have been polite throughout this exchange. I quite understand that it might be a bit more work for you to calculate the per-channel black levels if the margins are odd values, but throughout you have REFUSED to explain why you made this change. Why won't you explain your reasons?
And if you think I don't understand something, then please tell me I'm wrong and WHY.
This change delivers nothing but pain for me and my users, and so far you still haven't explained why you did it. As for your suggestion that I need to learn more about raw file processing: I use LibRaw so that I don't have to know everything there is to know about that.
I have interpreted your statement that having odd margins causes incorrect calculation of the per-channel black level calibration to mean that it results in LibRaw incorrectly calculating the cblack array. Have I misunderstood? If not, why does this not work if you use the image CFA and image data? Or, if you use the "all black" pixel array in the margins, can't you just use the appropriate CFA that matches the full frame, as opposed to the one that applies to the user image area?
If you would prefer that this discussion took place offline, I'm very happy to do that.
Thanks in advance
David, please stop it. You need to get a much better grasp of the technical issues with raw, and a way better attitude.
So you are saying that you incorrectly calculate the black levels because it needed a bit of work to do it properly, so you made this change instead. Shame on you.
You should revert this change and do the per-channel black level calculation based on the image area, not the full sensor area.
Quote from the previous message:
> if such data is created from dark frame values
In other words: the change to the LibRaw::open_datastream() code is not enough if one wants to have correct per-channel BL estimation too.
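To illustrate why margin offsets matter for per-channel black levels, here is a sketch based on LibRaw's documented cblack layout (cblack[4] x cblack[5] describe a repeating black-level pattern stored from cblack[6] onward; the helper function itself is my own): the pattern is indexed by absolute row/column, so shifting the visible area by an odd margin changes which pattern cell each image pixel lands on.

```cpp
// Sketch: per-pixel black level from a cblack-style array.
// cblack[4] x cblack[5] is a repeating pattern stored at cblack[6..];
// row/col are coordinates in the full sensor frame. Shifting the image
// origin by an odd margin changes (row % ..., col % ...) and therefore
// which pattern value each pixel gets.
inline unsigned pattern_black(const unsigned cblack[],
                              unsigned row, unsigned col)
{
    if (cblack[4] == 0 || cblack[5] == 0)
        return 0; // no repeating pattern present
    return cblack[6 + (row % cblack[4]) * cblack[5] + (col % cblack[5])];
}
```

With a 2x2 pattern, moving the origin by one row or column swaps which black value is subtracted from which CFA channel, which is exactly the kind of mismatch a dark-frame-derived black level is sensitive to.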
Huh! Now I really don't understand. You said that my proposed change caused a problem with the per-channel black level calibration, or in other words that the margins that were used by 0.19 and below caused a problem.
I am trying to understand WHY it causes a problem: if you calculate the per-channel calibration on the image area (i.e. without the frame), surely it will be correct.
If you are saying that someone else is using LibRaw and is using the full-frame area for astronomical image-processing calibration frames, then they are doing it wrong! And if they persuaded you to make this change to the margins, then you should revert it immediately.
David
It was a different statement: your proposed change (commenting out the margins/filters adjustment) will result in incorrect interpretation of black level data (if such data is created from dark-frame values).
I don't understand your statement that this change to the margin size (if odd) is necessary to ensure accurate per-channel black level calculation.
Surely, if you apply the per-channel calculation to the "image" area (i.e. NOT including the frame) using the CFA pattern (filter) for that area, then the calculation will be correct.
I can understand that if you used the CFA for the image area to calculate a per-channel black level against the entire sensor area, that might not work right, but I am sure you wouldn't do that.
So please explain why you believe my reasoning is incorrect.
Unfortunately, I could not understand the exact wording of the question you are asking.
Alex,
I've been thinking about this issue at some length, and I am trying to get my head around your assertion that the change in question is necessary to ensure accurate per-channel black level calculation. Surely, if you apply the per-channel calculation to the "image" area using the correct filter mask for it, then it will all be correct? I can see that if you applied the "image"-area filter mask to the whole RAW, including the frame area, then things could go awry. But you know better than to make that error.
So please explain why you believe my reasoning is incorrect.
Thank you
Neither gamma nor colour-space chromaticities are relevant in a colour-managed workflow. For a non-colour-managed workflow (publishing to the internet, for example), use sRGB D65 with a simplified gamma of 2.2, as Adobe does. In any case, make sure the resulting profile is embedded into the output.
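For reference, "simplified gamma 2.2" means a plain power-law encode rather than the piecewise sRGB transfer curve with its linear toe. A minimal sketch (the function name is mine):

```cpp
#include <cmath>
#include <cstdint>

// Sketch: encode a linear value in [0, 1] with a simplified gamma of 2.2
// (a pure power law, unlike the piecewise sRGB transfer function).
inline uint8_t encode_gamma22(double linear)
{
    if (linear < 0.0) linear = 0.0;  // clamp out-of-range input
    if (linear > 1.0) linear = 1.0;
    return uint8_t(std::pow(linear, 1.0 / 2.2) * 255.0 + 0.5);
}
```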