:-)
In my world, white balance *IS* about color ^_^
I will re-read the ninedegreesbelow article, still wondering about Alex's statement about normalization/scale-to-max, though.
It is possible that I checked the black values before they got populated, which would explain why they turned up as 0.
Mact
For the ILCE-7RM3 the typical black level is 512. If that is not the case in your process, I would fix it first.
The gamma curve "statement" is based on error/artifact analysis.
Forget colour for the moment; make the other things, including white balance, work.
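For reference, checking what LibRaw itself reports only takes a few lines; a minimal sketch against the 0.19 API (error handling abbreviated):

    #include <cstdio>
    #include "libraw/libraw.h"

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s raw-file\n", argv[0]); return 1; }
        LibRaw proc;
        if (proc.open_file(argv[1]) != LIBRAW_SUCCESS) return 1;
        if (proc.unpack() != LIBRAW_SUCCESS) return 1;
        // Global black level plus the per-channel additions:
        printf("black = %u, cblack = %u %u %u %u\n",
               proc.imgdata.color.black,
               proc.imgdata.color.cblack[0], proc.imgdata.color.cblack[1],
               proc.imgdata.color.cblack[2], proc.imgdata.color.cblack[3]);
        return 0;
    }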
Thanks, I remember having read that ninedegreesbelow article before, but I will go through it again. I must have mixed something up.
> "seems like black levels aren't set in my images" - what camera is it, please?
I have tested with various firmware versions of the Sony A7R3 and also (if I remember correctly, I may be wrong there) with the Canon 800D, 750D and others. The above statement is definitely true for the A7R3, though.
> Generally, if you want quality: use floating point, apply white balance and some gamma / tone curve before demosaicking
Agreed on floating point; that will be a (simple) step ASAP. I am interested in the gamma curve statement, though, since my goal is to match about 12 different camera types as closely as possible by NOT messing with "random" (excuse the term) curves. I would like the process to be as documented, stable, reproducible and non-"guessed" as I can manage.
> calculate and use appropriate forward colour transforms
This is most probably the step where I have broken something: my conversion (see above) from the camera color space to anything more or less well-defined (sRGB is only for testing; it is NOT a "good" color space, of course. I am targeting linear ACES in the end.)
What about my question about normalization? Alex said that normalization is to be done after demosaicing, which does not sound quite right to me (from my understanding you actually NEED to normalize, i.e. "scale to max", the mosaic image before demosaicing, but Alex said the opposite in the note mentioned above).
The main difference I see is in the "spreading" of the values: if we "cut"/clip the black end (lower end) and move the data towards the upper end (maximize values by multiplying with "maximum"), normalization here must create a different set of colors. If we normalized BEFORE cutting/clipping, that effect would be smaller. If we normalize AFTER demosaicing, using the three channels in parallel, we would only really change luminance, not color.
Or am I wrong somewhere?
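To make the comparison concrete, here is a hypothetical sketch of the two variants (placeholder buffers, not my actual code):

    #include <algorithm>
    #include <vector>

    // (a) Scale-to-max on the mosaic BEFORE demosaicing: the black end is
    // clipped and scaled at each CFA site individually, so the clip can
    // shift the R:G:B relations of neighbouring sites near black.
    void normalize_before(std::vector<float> &mosaic, float black, float maximum)
    {
        const float scale = 1.0f / (maximum - black);
        for (float &v : mosaic)
            v = std::max(0.0f, v - black) * scale; // clip, then scale to max
    }

    // (b) The same scaling AFTER demosaicing, applied to the three full
    // channels in parallel with one common factor: ignoring clipping, this
    // only shifts luminance, not color.
    void normalize_after(std::vector<float> &r, std::vector<float> &g,
                         std::vector<float> &b, float black, float maximum)
    {
        normalize_before(r, black, maximum);
        normalize_before(g, black, maximum);
        normalize_before(b, black, maximum);
    }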
Mact
dcraw_process: This is borrowed from dcraw proper, please make sure you've read https://ninedegreesbelow.com/files/dcraw-c-code-annotated-code.html
"seems like black levels aren't set in my images" - what camera is it, please?
Generally, if you want quality: use floating point, apply white balance and some gamma / tone curve before demosaicking, calculate and use appropriate forward colour transforms (device to destination colour space, sRGB/Adobe RGB, etc.). dcraw_process is sort of a hack; you may want to skip it altogether.
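Sketched in floating point, that order of operations would look roughly like this (the helper names, wb_mul and gamma values are placeholders, not LibRaw API):

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // cfa[i] is the CFA colour of mosaic site i (0=R, 1=G, 2=B).
    void prepare_for_demosaic(std::vector<float> &mosaic, const std::vector<int> &cfa,
                              float black, float maximum,
                              const float wb_mul[3], float gamma)
    {
        const float scale = 1.0f / (maximum - black);
        for (std::size_t i = 0; i < mosaic.size(); ++i) {
            float x = std::max(0.0f, mosaic[i] - black) * scale; // linearize to [0,1]
            x *= wb_mul[cfa[i]];                                 // white balance first
            mosaic[i] = std::pow(x, 1.0f / gamma);               // tone curve before demosaicking
        }
        // ... demosaic, then apply the forward matrix:
        // camera RGB -> destination colour space (sRGB / Adobe RGB / ACES).
    }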
I don't see a reason not to check demosaicking first. And while I appreciate your intent, no, I can't assume anything about anybody. I always prefer to know the exact goal, and to start with discussing the setup.
I do agree :)
Merely quoting from the five-year-old question: "I'd like to use this image for sensor testing purposes, and the object is pure B&W." Without knowing the exact setup, the best we can do is suggest things to try. Trying out what quality you get from the "scale BW" approach I outlined only costs about 5 minutes to write and check. Trying out an alternative "convert to HSV and use the V channel" (or Lab or whatever) approach costs another 5-10 minutes. So after half an hour you know what's best :-)
No need to discuss the necessity of demosaicing at all; I just wanted to share a possible approach to the actual problem described. I fully agree that in most scenarios you will want to go RGB->Gray. However, the thread opener was talking about "sensor checking", so I feel comfortable assuming that he knows what he is doing ...
Mact
IMO the right thing is to start by stating the purpose. Without that, and without an estimate of the allowed error, nothing can be done. For sensor characterization, having the light spectrum change across the scene is a matter of concern. The exact experimental setup and a flat field are the first order of business; before that, any discussion of the need for demosaicking is premature. With a decent setup demosaicking can be skipped, but the lightness error introduced by demosaicking is typically less than 2 dE.
I agree - you can always get a hue introduced from lighting conditions, lens CA etc. No doubt!
Yet, the answer to the specific question really should have been "within your specific setup, you can actually skip the color interpolation / demosaicing step".
I say this because no matter how good your interpolation process (demosaicing) is, it is an interpolation after all: it "guesses" (mathematically calculates) values that aren't really there. For the specific, narrow use case of shooting B&W objects (ideally in controlled lighting conditions), you would most likely get a better B&W result by skipping the color interpolation.
(I have done this and can say from experience that it does give you more fine detail than any RGB interpolation I have tried ;) )
Mact
Since the uniformity of the light "colour" is not that simple an issue, having b/w objects doesn't guarantee the image taken is uniformly b/w.
Since the question was actually about black and white objects, the - otherwise correct! - caveat of images possibly having color patches in them does not apply.
The question should instead be answered with: LibRaw currently does not have such a function and, due to its generalist approach, never will; yet building a solution is rather easy and straightforward. If you know that your objects are reliably black and white anyway, just shoot a gray image, read out the channel differences (i.e. the "scaling" introduced by the color filters) and re-multiply with the appropriate scales.
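In pseudo-C++, the idea is roughly this (a sketch; the buffer layout and the cfa index array are placeholders):

    #include <array>
    #include <cstddef>
    #include <vector>

    // Step 1: from a flat gray shot, measure how the color filters scale
    // each CFA channel. cfa[i] is the CFA color of site i (0=R, 1=G, 2=B).
    std::array<float, 3> gains_from_gray(const std::vector<float> &gray,
                                         const std::vector<int> &cfa)
    {
        std::array<double, 3> sum{}, cnt{};
        for (std::size_t i = 0; i < gray.size(); ++i) {
            sum[cfa[i]] += gray[i];
            cnt[cfa[i]] += 1.0;
        }
        std::array<float, 3> gain;
        for (int c = 0; c < 3; ++c)   // scale R and B up to the green mean
            gain[c] = float((sum[1] / cnt[1]) / (sum[c] / cnt[c]));
        return gain;
    }

    // Step 2: re-multiply the B&W subject's mosaic with those scales. The
    // result is a full-resolution gray image with no interpolation involved.
    void scale_bw(std::vector<float> &mosaic, const std::vector<int> &cfa,
                  const std::array<float, 3> &gain)
    {
        for (std::size_t i = 0; i < mosaic.size(); ++i)
            mosaic[i] *= gain[cfa[i]];
    }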
Mact
Drupal messed up the code, sorry for that - I thought I double-checked. If you are having problems understanding the pseudocode, let me know, please!
Mact
Thanks for your reply, Alex, and happy to read that this is not normal.
My CPU is Intel(R) Core(TM) i7-6500U CPU @ 2.50GHz (4 CPUs), ~2.6GHz
I will try to compile your samples to see if I get the same result. At the moment I just use LibRaw by adding the source code (internal, src and libraw folders) directly to my project, then building it in release. The project is a C++/CLR DLL that I call from C#; maybe there is something that prevents optimizations. I will check and let you know :)
No, this is not normal (for unpack):
Simple test: add return 0 to the simple_dcraw.cpp sample just before dcraw_process() is called (so, unpack only).
$ time ./bin/simple_dcraw ~/CR2/IMG_0058.CR2 (this is a Canon 6D Mark II image, 24 Mpix)
time: real 0m1,310s
test computer: Intel(R) Core(TM) i3-7100U CPU @ 2.40GHz , 8GB RAM.
What is your 'Intel i7' CPU, what specific model?
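For a self-contained measurement, a minimal harness that times unpack() alone (0.19 API assumed) could look like this:

    #include <chrono>
    #include <cstdio>
    #include "libraw/libraw.h"

    int main(int argc, char **argv)
    {
        if (argc < 2) return 1;
        LibRaw proc;
        if (proc.open_file(argv[1]) != LIBRAW_SUCCESS) return 1;
        auto t0 = std::chrono::steady_clock::now();
        if (proc.unpack() != LIBRAW_SUCCESS) return 1; // decode only, no dcraw_process()
        auto t1 = std::chrono::steady_clock::now();
        printf("unpack: %.3f s\n", std::chrono::duration<double>(t1 - t0).count());
        return 0;
    }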
The sum of the two calls unpack()+dcraw_process().
unpack() is about 8-10 s (11 s for a 24 MP DNG) and dcraw_process() is around 8-10 s too,
except for a raw file from a Sony A6500, where unpack() takes only 0.4 sec. Maybe uncompressed pixel data?
Are these timings normal? Are there any tips to reduce the loading time?
Thank you.
What do you measure: is it unpack(), or unpack()+dcraw_process()?
Fujifilm XF10 is not in LibRaw 0.19 supported camera list: https://www.libraw.org/supported-cameras
It is supported in the current public snapshot (https://www.libraw.org/supported-cameras-snapshot-201903); please upgrade to it.
We plan to release a new public snapshot this Fall.
It takes time to test/check new decoders/new formats in every aspect.
Thank you, not sure how I missed it.
I guess because that sample is not included in the solution file...
openbayer_sample.cpp is in the samples folder.
open_bayer() is still in libraw_cxx.cpp (line 1147 and below).
Please make sure you use LibRaw 0.19, not an older version.
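A minimal call, condensed from that sample (buffer dimensions, margins and the RGGB pattern below are assumptions for illustration):

    #include <cstddef>
    #include <vector>
    #include "libraw/libraw.h"

    int main()
    {
        const unsigned short w = 640, h = 480;
        std::vector<unsigned short> bayer(std::size_t(w) * h, 0); // your mosaic data
        LibRaw proc;
        int rc = proc.open_bayer(reinterpret_cast<unsigned char *>(bayer.data()),
                                 unsigned(bayer.size() * sizeof(unsigned short)),
                                 w, h, 0, 0, 0, 0,   // no margins
                                 0,                   // procflags
                                 LIBRAW_OPENBAYER_RGGB, 0, 0,
                                 0);                  // black level
        if (rc != LIBRAW_SUCCESS || proc.unpack() != LIBRAW_SUCCESS) return 1;
        // The mosaic is now in imgdata.rawdata.raw_image, ready for dcraw_process().
        return 0;
    }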
I use 0.19.5-Win32
What LibRaw version do you use?
I don't see the open_bayer sample or the LibRaw::open_bayer function. Was it removed?
https://www.libraw.org/comment/5233#comment-5233
Hi there,
Would you please tell me if there is a plan to support the newly released Canon 90D and the Canon CR3 RAW format on Windows 10?
Thanks,
Xinyi
Sorry, I misunderstood your first answer. I have everything working correctly now.
Once again, thanks for your help.
Amanda