So you're saying that you calculated the black levels incorrectly, and because doing it properly needed a bit of work you made this change instead - shame on you.
You should revert this change and do the per-channel black level calculation based on the image area, not the full sensor area.
Quote from the previous message:
> if such data is created from dark frame values
In other words: changing the LibRaw::open_datastream() code is not enough if one wants correct per-channel BL estimation too.
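To make the point concrete, here is a minimal sketch (my own code, not LibRaw's internals) of per-channel black level estimation from a plain Bayer dark frame, averaged over the active image area. Which accumulator a pixel lands in is decided by COLOR(row, col), so if the top/left margins shift by one pixel between versions, the R/G/B/G2 assignments swap and previously saved per-channel blacks no longer line up.

#include <libraw/libraw.h>
#include <cstdio>

int main(int argc, char **argv)
{
  if (argc < 2) return 1;
  LibRaw rp;
  if (rp.open_file(argv[1]) != LIBRAW_SUCCESS || rp.unpack() != LIBRAW_SUCCESS)
    return 1;
  const libraw_image_sizes_t &S = rp.imgdata.sizes;
  const unsigned short *raw = rp.imgdata.rawdata.raw_image;
  if (!raw) return 1;                        // not a plain Bayer raw (e.g. sRAW)
  const int pitch = S.raw_pitch / 2;         // raw_pitch is in bytes
  double sum[4] = {0, 0, 0, 0};
  long long cnt[4] = {0, 0, 0, 0};
  for (int row = 0; row < (int)S.height; row++)
    for (int col = 0; col < (int)S.width; col++)
    {
      int c = rp.COLOR(row, col);            // CFA channel of this visible pixel
      sum[c] += raw[(row + S.top_margin) * pitch + (col + S.left_margin)];
      cnt[c]++;
    }
  for (int c = 0; c < 4; c++)
    printf("channel %d black ~ %.2f\n", c, cnt[c] ? sum[c] / cnt[c] : 0.0);
  return 0;
}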
Huh! Now I really don't understand - you said that my proposed change caused a problem with the per-channel black level calibration. Or, in other words, that the margins used by 0.19 and below caused a problem.
I am trying to understand WHY it causes a problem - if you calculate the per-channel calibration on the image area (i.e. without the frame), surely it will be correct.
If you are saying that someone else is using LibRaw and is using the full frame area for astronomical image-processing calibration frames, then they are doing it wrong! And if they persuaded you to make this change to the margins, then you should revert it immediately.
David
It was a different statement: your proposed change (commenting out the margins/filters adjustment) will result in incorrect interpretation of black level data (if such data is created from dark frame values).
I don't understand your statement that this change to the margin size (if odd) is necessary to ensure accurate per-channel black level calculation.
Surely if you apply the per-channel calculation to the "Image" area (i.e. NOT including the frame) using the CFA pattern (filter) for that area, then the calculation will be correct.
I can understand that if you used the CFA for the image area to calculate a per-channel black level against the entire sensor area it might not work right, but I am sure you wouldn't do that.
So please explain why you believe my reasoning is incorrect.
Unfortunately, I could not understand the exact wording of the question you are asking.
Alex,
I've been thinking about this issue at some length and I am trying to get my head around your assertion that the change in question is necessary to ensure accurate per-channel black level calculation. Surely if you apply the per-channel calculation to the "image" area using the correct filter mask for that, then it will all be correct? I can see that if you applied the "image" area filter mask to the whole RAW including the frame area then things could go awry. But you know better than to make that error.
So please explain why you believe my reasoning is incorrect.
Thank you
Neither gamma nor colour space chromaticities are relevant in a colour-managed workflow. For a non-colour-managed workflow (publishing to the internet, for example), use sRGB D65 with a simplified gamma of 2.2, as Adobe does. In any case, make sure the resulting profile is embedded in the output.
Alex,
I am working primarily with raw_image, image3[], and image4[], and those are linear.
I am using dcraw_process as a reference and wanted to confirm what specific gamma is applied.
My guess would be that I need to set gamm correctly depending on my output colorspace. Applying BT.709 gamma when requesting an sRGB output probably returns subtly incorrect results.
Regards,
Dinesh
The gamma curve is applied at the final output stage (the TIFF/PPM writer, or make_mem_image).
If you're working with imgdata.image[] directly: these values are linear.
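To make that concrete, here is a minimal sketch of how one could request an sRGB curve via the output parameters (as I understand them, gamm[0] is the inverted gamma and gamm[1] the slope of the linear part, with BT.709-style defaults of 0.45/4.5):

#include <libraw/libraw.h>

// Request sRGB output from dcraw_process(): sRGB primaries plus the
// sRGB transfer curve instead of the default BT.709-style curve.
void set_srgb_output(LibRaw &rp)
{
  rp.imgdata.params.output_color = 1;       // 1 = sRGB primaries
  rp.imgdata.params.gamm[0] = 1.0 / 2.4;    // inverted gamma for sRGB
  rp.imgdata.params.gamm[1] = 12.92;        // slope of the linear toe
  // imgdata.image[] remains linear; the curve is only applied when the
  // result is written out or a memory image is built.
}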
Follow-up: margins/the visible area *may* change in the future (without notice) when/if we decide to switch from hardcoded margins to metadata (camera)-specified ones (for a specific vendor or camera subset).
So we strongly suggest handling this in your app.
We didn't make such a list.
OK - now that you've told me WHY you did it, I can see why you don't want to do that, and why I shouldn't do it either.
I'll advise my users of the incompatibility issues. Do you happen to have a list of the cameras that this change affects?
Apologies if this has delayed the final release of 0.20.
David
This change will break accurate per-channel black level calculation for (affected) Canon cameras.
It may not be your case (if your code uses its own blacks calculated from raw data), but it will affect other users and may result in excessive banding. So, there are no plans to implement this #ifdef in the main LibRaw source code.
Taking into account the fact that margins may change with a firmware update, it is better to change your application to handle this accurately.
I should add at this point that your suggestion that we should use the FULL FRAME for astronomical images doesn't work, as the "special" parts of the frame will corrupt the stacked image when "shift and add" processing (aka dithering) is being used.
If you won't remove this incompatible change, could I ask that at the very least you make it configurable, either by passing a parameter to open or at build time using the pre-processor, for example:
#if !defined(USE_LIBRAW19_MARGINS)
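Something along these lines (USE_LIBRAW19_MARGINS is just a name I made up; the guarded block would be the 0.20 margin/filters adjustment in src/utils/open.cpp):

#if !defined(USE_LIBRAW19_MARGINS)
  // 0.20 behaviour: the margin/filters adjustment stays active
  // (the block currently in src/utils/open.cpp).
#else
  // 0.19 behaviour: leave the decoded margins and filters untouched.
#endif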
You still haven't explained why you made this change - it doesn't seem to provide great benefit and really messes things up for many existing users.
> The CFA pattern is strictly for the image area of the chip
This is not true, especially for the (affected) Canons.
Further to my previous message: the CFA pattern is strictly for the image area of the chip. Everywhere else there is no "CFA", because those pixels are either optically blackened, serve no function, or are 100% white, or something like that.
I really don't understand why this major incompatibility has been forced on us if it serves no purpose, and as you haven't told us why it is *needed*, I must conclude this change has no purpose.
OK, I apologise for the diversion caused by sRaw/mRaw - I hadn't realised they weren't real Raw files!
However, my point that changing to use the full image area would also be incompatible is still valid.
I would suggest, if I may, that instead of changing the existing behaviour, you could continue to return the same top/left margin values and filters value that 0.19 did, on the assumption that the user WILL use the image margins as defined by firmware etc. You could also provide an alternative filters value for people who want to use the whole sensor area, or a simple API to return a suitable value for decoding the entire image:
unsigned fullImageFilters = getFullImageFilters();
Not a D60 - this is a 60D - quite different.
If I change the code to use the full sensor area for all image types, then I will face an uproar from all the users; it is just as bad as using a different margin.
I've put a sample image on DropBox
https://www.dropbox.com/s/q9jf90yfzy5fudo/_MG_9458.CR2?dl=0
Note that the images returning filters=0 are also rendered by LibRaw with incorrect colours. I note that these are not FULL-size raw images, but mRaw in Canon parlance (medium-size Raw).
These images also exhibit the same behaviour in 0.19.
*** UPDATE ***
I just spent half an hour reading up on sRaw and mRaw - they aren't Raw at all! I can now see why they are reported with filters=0 and as a colour image. However, the colours are still wrong!
*** END UPDATE ***
PS you still haven't explained to me why the previous behaviour needed to be changed.
The sample you've shared is Canon sRAW/mRAW (small raw). It is already debayered in camera; the recording format is something like high-bit-depth lossless JPEG (a full-resolution Y channel and subsampled Cb/Cr channels).
This format is indeed supported by LibRaw and rendered w/o problems: https://www.dropbox.com/s/tbcnn38wznojbpz/screenshot%202020-07-20%2018.5...
Could you please share CR2 samples with filters equal to 0 so we can check them?
In the D60 (and most other old Canons) case, margins are not set by the manufacturer but hardcoded in the LibRaw source. For metadata-specified margins there is always a chance that the margins will change with a firmware update.
So we (again) suggest considering the use of the full sensor area for flats, darks, etc.
It gets worse - some .CR2 images from the camera come back with filters set to 0x00000000, so they are interpreted as full-colour images rather than CFA, and I can't extract the non-de-Bayered image data and de-Bayer it myself.
Please could you explain why you felt the need to override the margins as you do here. I understand that this impacts the code for determining a pixel's colour depending on whether you are looking at the full sensor area as opposed to the active area - but why is that a problem?
The previous code respected the margins set by the manufacturer, whereas you are changing them, which has a massive impact on this application: previously processed master darks, flats, etc. will no longer be compatible. This is disastrous, as users do not always retain all their original darks, etc., so they won't be able to rebuild them.
Sure, I could modify open.cpp, but that becomes a perpetual problem, as we would need to remember to change it every time we refreshed the LibRaw code.
Please, please give a lot of thought to reverting this to the previous behaviour - this really is a HUGE compatibility issue.
David
I understand your pain.
As stated in the Changelog:
* Bayer images: make the margins even so that COLOR() is the same for both the active sensor area and the full sensor area.
The only way to achieve this is to ensure that the left/top margins are even for Bayer images and a multiple of 6 for X-Trans images.
Please note that margins can also change due to a firmware update (if margins are read from file metadata).
So, the only way to ensure valid darks, master flats, etc. is to use the full sensor area, not the active (visible) area, for such data.
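To illustrate the parity point, here is a small self-contained sketch of mine using a classic dcraw-style FC()/filters bit layout: the same physical pixel gets the same channel whether it is addressed in active-area coordinates or in full-frame coordinates (offset by the margins) only when both margins are even.

#include <stdio.h>

// Colour index of (row, col) for a dcraw-style "filters" pattern descriptor.
static int FC(unsigned filters, int row, int col)
{
  return (filters >> ((((row << 1) & 14) | (col & 1)) << 1)) & 3;
}

int main(void)
{
  unsigned filters = 0x94949494; // an RGGB-style Bayer pattern
  for (int top = 0; top < 2; top++)
    for (int left = 0; left < 2; left++)
      printf("top=%d left=%d: active(0,0) -> %d, same pixel in raw coords -> %d\n",
             top, left, FC(filters, 0, 0), FC(filters, top, left));
  return 0;
}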
You may be pleased to know that very few cameras are affected by this change.
Also, you may roll back this change by commenting out these lines in src/utils/open.cpp: https://github.com/LibRaw/LibRaw/blob/master/src/utils/open.cpp#L618-L640
This will not, however, protect you from margin changes due to a firmware update.
LibRaw 0.20 identifies the pattern for a Canon EOS 60D as GBRG, whereas 0.19 identified it as RGGB. It seems highly probable to me that this doesn't just apply to the EOS 60D.
The image processes incorrectly as a result :(