Recent comments

Reply to: Problem with dng file from Pentax K5-II   5 years 5 months ago

Dear Sir:

Quoting Adobe again, from a previous reply: "The origin of this pattern is the top-left corner of the ActiveArea rectangle. The values are stored in row-column-sample scan order". How do you interpret this? Please draw two different Bayer patterns and suggest how the tag should be filled based on the description above.

Reply to: Problem with dng file from Pentax K5-II   5 years 5 months ago

Dear Sir:

Have you already looked at the processing of the pertinent EXIF tags under "case 0xc619" and "case 0xc61a"? They simply read the EXIF tags, and thus my reference to the Adobe DNG Specification is quite justified. Please read the Adobe documentation and try to understand it.

I don't see the reason for your anger, but in any case please refrain from showing it here in future.

Reply to: Problem with dng file from Pentax K5-II   5 years 5 months ago

Well, actually I am asking about the color.cblack array provided by YOU to my application.

Clearly you think you're justified in saying that isn't something you answer questions on.

If I gave answers like that to my customers when I was providing tech support I would have been fired (quite rightly) by my management.

D.

Reply to: Problem with dng file from Pentax K5-II   5 years 5 months ago

Dear Sir:

In my previous post I quoted Adobe documentation on the matter. I'm afraid you are not paying attention. Your question is about EXIF, not LibRaw.

Reply to: Problem with dng file from Pentax K5-II   5 years 5 months ago

You know, the purpose of documentation is so other people can read, use, and understand your stuff. It's no great surprise that you understand it!

You still didn't answer my question about the order of values in cblack[0-3] versus cblack[6-9]. I *think* that it is RGBG for cblack[0-3] and RGGB for cblack[6-9].

Of course, if you had deigned to answer my questions in the first place instead of almost going out of your way to avoid doing so, saying along the way "Well, I understand it ...", that would have saved me a lot of time, which is what this forum is supposed to be about.

Reply to: LibRaw 0.20 supported cameras   5 years 5 months ago

Dear Sir:

I think the place to ask is https://chdk.fandom.com/wiki/CHDK

Reply to: LibRaw 0.20 supported cameras   5 years 5 months ago

PowerShot SX20 IS (CHDK hack)

Please advise what settings to use with CHDK hack, etc. in order for this to work properly.
Thank You!

Reply to: Problem with dng file from Pentax K5-II   5 years 5 months ago

Dear Sir:
I don't find the description lacking.
For parsing cblack in DNG, and for the order used in the EXIF tags, please see the DNG specification: "This tag specifies the zero light ... encoding level, as a repeating pattern. The origin of this pattern is the top-left corner of the ActiveArea rectangle. The values are stored in row-column-sample scan order."
Reading the DNG specification may help answer your questions.
For DNG, cblack is filled under "case 0xc619" in parse_tiff_ifd.

Reply to: Problem with dng file from Pentax K5-II   5 years 5 months ago

Still seeking clarification here ... What order are the first four cblack elements in? RGBG, RGGB?

What order are they in for cblack[6]-[9]?

What order are they supposed to be in for the exif tags?

Is there any clear write-up explaining the cblack value array, other than the code and the rather terse description in the documentation, which doesn't explain much?

Thanks

Reply to: Problem with dng file from Pentax K5-II   5 years 5 months ago

I'm now using adjust_bl() before doing my black level subtraction, and I'm seeing:

00000094 2019/06/24 14:33:18.354 018788 00002900             >Before adjust_bl() C.black = 0.
00000095 2019/06/24 14:33:18.363 018788 00002900             >First 10 C.cblack elements
00000095 2019/06/24 14:33:18.363 018788 00002900             >0, 0, 0, 0
00000095 2019/06/24 14:33:18.363 018788 00002900             >2, 2
00000095 2019/06/24 14:33:18.363 018788 00002900             >513, 513, 515, 516
00000096 2019/06/24 14:33:18.372 018788 00002900             >Subtracting black level of C.black = 513 from raw_image data.
00000097 2019/06/24 14:33:18.382 018788 00002900             >First 10 C.cblack elements
00000097 2019/06/24 14:33:18.382 018788 00002900             >516, 515, 513, 513
00000097 2019/06/24 14:33:18.382 018788 00002900             >0, 0
00000097 2019/06/24 14:33:18.382 018788 00002900             >513, 513, 515, 516

Should I expect the order of cblack[0]-cblack[3] to be the reverse of cblack[6]-cblack[9]?

I'd have expected it to be in the same order as the levels reported by exiftool (513, 513, 515, 516).

Reply to: Problem with dng file from Pentax K5-II   5 years 5 months ago

> I'd much appreciate some explanation of how cblack really works and how I should process these when operating on the raw image data?

You also didn't address that at all.

Reply to: Problem with dng file from Pentax K5-II   5 years 5 months ago

However, variable black is ZERO, and I don't believe it should be.

Reply to: Problem with dng file from Pentax K5-II   5 years 5 months ago

Not 0,
513, 513, 515, 516

Reply to: What to do with rgb_cam matrix   5 years 5 months ago

I understand that you may be feeling "got at" here, but to be fair the OP does have a point in that the code definitely does suffer from a distinct paucity of comments; and while you say the LibRaw focus is on extracting data from raw files, there's still a very high proportion of the code dedicated to post-processing the raw image.

I truly don't expect you to drop everything and add reams of comments, but if, the next time you go near any code that lacks comments, you added some lines to explain how/why it does what it does, that would be most helpful to the rest of us.

Cheers

Reply to: White balance scaling and values in pre_mul array   5 years 5 months ago

Well, that's a bitch: something munged up the code I was requesting a sanity check on.

However, it was (it turns out) mostly correct, though I changed the scale_mul calculation to use dmin instead of dmax.

Cheers

Reply to: What to do with rgb_cam matrix   5 years 5 months ago

Dear Sir:

Please don't put words into my mouth. We put a lot of effort into saving people some time, but we assume that LibRaw users have a good idea of what raw files are and how to process them, or use our samples, or learn as they go if they want to implement their own processing but lack the necessary experience and theoretical knowledge. LibRaw by itself is not meant to be a textbook on raw decoding or raw conversion.

Meanwhile, I answered your question right away, pointing out that your problem with magentish highlights here, https://imgur.com/F3FjABw, is because of how you normalize (the denominator) and clip.

I will leave it at that.

Reply to: What to do with rgb_cam matrix   5 years 5 months ago

"Do your own research like I did mine" is not a good reason to keep large chunks of code absolutely undocumented to the point of inscrutability, least of all for code that has been through a couple of decades and will go through many more. None of us is going to devote 15 years of our lives to reinventing the wheel and figuring out how you did some particular demosaicing algorithm that only you have and that works great but doesn't begin to explain what it does and isn't documented anywhere. If you put so much work into researching and making something and then shared it, you might as well enable people to understand what was done and learn from it, rather than telling them to disregard the work you present and painstakingly retrace your steps, or telling them "these large parts of our library are outside of the scope of our library, so whatever".

It's not as if the parts of the library that are within its core scope are documented either; maybe you're all just too used to dealing with a practically comment-free 21 kLoC file (which, by the way, is at least 20 times too long for a single file) to see the problem with that approach. But if you aren't interested in documenting 22 years of work beyond telling people how to use it at a basic level, then I'm not going to convince you of anything. I'm not telling you to write an encyclopedia about every function either, and I understand that as a commercial developer you have better things to do than document an open-source project, but a few comments here and there, whenever you get the chance, to help clarify why things are being done would go a long way, and it might just help you in the future when the meaning of tens of thousands of lines of esoteric code isn't fresh in your mind anymore. It's probably too late for that anyway; most of the people who wrote that stuff have probably long since moved on or forgotten how everything they wrote works.

Reply to: What to do with rgb_cam matrix   5 years 5 months ago

LibRaw is not about processing; it is about decoding raw data and some of the relevant metadata. If you don't want to do your own research, that's your choice. I have been doing mine for more than 15 years, and I still continue to do it.

Reply to: What to do with rgb_cam matrix   5 years 5 months ago

I'm not sure why you were cryptic about whether my description of what I should do is right, but anyway I tried it and it seems about right; see https://i.imgur.com/gidxryK.png (mine seems to have slightly brighter highlights compared with the rest of the picture; maybe my normalisation is better. I didn't have to do anything after the matrix multiplication, by the way; I guess the matrix is normalised). I also don't understand why you talk so much about normalisation, given that it wasn't relevant to my problem: I didn't have to change anything about normalisation at all. Knowing the correct order of operations, and knowing to clip (and when), was what was crucial.

I didn't look at the sample code; maybe I should have. I just didn't think of doing that, because I'd expect such examples to show how to call functions, not how to replicate them, which is what I'm trying to do in my own, mostly GPGPU-based way. Instead I debugged through libraw_dcraw_process() to see what it does, and let's just say that it's rather difficult to learn anything doing that, given that the 21,420 lines of code in dcraw_common.cpp are hardly self-documenting, not to mention that there are some things with which even the debugger is lost, like accessing the elements of variables (in libraw_cxx.cpp) such as O., C., or S.

I don't blame you for the lack of explanatory comments in the source, since I know dcraw was the same way, but while it's good that you documented the API, I think that LibRaw, as the main successor of dcraw, deserves more because it's so important. Your work will outlive you; it's the go-to thing for processing images from any camera, and when we're all dead of old age, digital archaeologists will be looking into whatever succeeds LibRaw to learn how to decipher ancient digital photographs, not just how to use it, because even now the way it does things shows its age. And so the problems I'm facing, they will face too, unless the people who understand the code document what it does. Just look at a function like recover_highlights(): I'm sure it does something quite good, but how would anyone know what? It doesn't have a single comment explaining what's going on; how is anyone supposed to maintain that? Look at it, https://github.com/LibRaw/LibRaw/blob/master/internal/dcraw_common.cpp#L... and see how much you need to rely on memory to recall the purpose of each of the blocks of loops. Even I am not fond of adding comments, but for blocks of code I'll put a small comment that says what the block below does; it just makes everything seem instantly clearer and less daunting.

Let's say someone (e.g. me) thinks, "LibRaw is good, but I want to do most of the processing in floating-point arithmetic in real-time OpenCL. I don't want to do original research into every aspect of processing; I just want to do what dcraw/LibRaw does for every phase of processing." Then I need to understand what the code does, not just how to use it. I looked into a couple of the better demosaicing algorithms that LibRaw provides, and it seems there's literally nothing about at least one of them anywhere on the Internet. Some of them have a few comments, but only in Russian! All we have is the code itself and little to help us understand what it does, let alone why it does it. I don't mean to criticise you; I really appreciate your hard work and how you answer all our questions, and we'd have a much harder time without your help. I'm just saying that ideally you should write down what you understand about the code as comments while you still remember what it does. As it is, the inner workings of LibRaw could hardly be more arcane.

Reply to: Decoding RAW without Bayer interpolation to single greyscale   5 years 5 months ago

I want to characterize my sensor. For that I need raw data, but using MATLAB I get only a Bayer-pattern pixel format. The Bayer pattern displays three bands/sheets separately in one display, but I need a single raw-image display. So my question is: is choosing the pixel format BayerRG16 and then displaying the image in greyscale the better option?
If not, then how can I get raw images?

Reply to: What to do with rgb_cam matrix   5 years 5 months ago

1.0 is after normalization, so if you read my reply above, you should be asking yourself, "What is the correct denominator for normalization?"
How to do it right? There is a lot of open source code available, including in LibRaw. Have you looked at the code in our samples folder? Also, there are books and papers available.

Reply to: What to do with rgb_cam matrix   5 years 5 months ago

> Why are you normalizing to 1 and after that ruining normalization with multiplication?
As opposed to doing what? It's hard for me to explain what I'm doing wrong if no one explains how to do things right. The colour space matrix multiplication is kind of an afterthought to the way I'm doing this; what I do makes more sense if you exclude it. Every stage of my processing is meant to be visible, with a rather consistent approach to gain, so naturally it starts with subtracting black levels and normalisation. I divide the WB coefs from cam_mul by their minimum (green, because I don't understand why the green coef should ever be less than 1.0), so that green stays normalised while red and blue can go higher than 1.0 but are clipped on screen (though not in the data), so that at that stage highlights are white on screen but purple in the data.

Is it after the white balance multiplication that I should clip the data to 1.0? I don't like that it means I'm throwing data away, but I guess that makes sense if I'm not going to do highlight recovery, and the matrix multiplication will make the higher red and blue values push green down. So I guess I should clip, do the colour space conversion, then normalise again by dividing everything by the lowest value from the result of (1, 1, 1) × the matrix (and then clip again?).

Reply to: What to do with rgb_cam matrix   5 years 5 months ago

Why are you normalizing to 1 and after that ruining the normalization with multiplication? Not to mention that your justification for multiplying the way you do is not convincing.
If you have green at clipping already in the raw data, and white balance promotes the red and blue channels, you have minus green; that's magenta/purple, seen as magentish highlights. So the question is: what clipping value to use? Suppose you have a 14-bit raw, and the green channel clips at the value of 14000 (that's before subtracting black).

Reply to: What to do with rgb_cam matrix   5 years 5 months ago

"You are multiplying _after_ normalization"
I'm not sure what you mean, and I don't understand the relevance, since normalisation doesn't do much besides scaling values.

"you are not applying proper clipping"
Well, what is proper clipping? I thought I was supposed to do everything and then clip at the very end, but clearly that won't solve the dark purple highlights. Am I rather supposed to boost the values of maxed-out pixels?

Reply to: What to do with rgb_cam matrix   5 years 5 months ago

2) It's a pretty sensible step if you think about interpreting the Bayer image from a signal-processing point of view. Each red or blue channel has 1 pixel set for every 4 pixels, so if you're going to use any kind of low-pass filtering, then in order to have the correct energy you need to multiply everything by 4 (or 2 for green) to create a valid intermediary representation, so that the average pixel values for the whole image are the same before and after such low-pass-filter-based demosaicing (blurring, halving, or interpolating), because each set pixel will have to be averaged with 3 black pixels. I wouldn't call that white balance at all; the image still comes off as mostly green, and applying the white balance coefs makes it look right, albeit desaturated if I don't do the colour space conversion.

1 & 3) So you are saying that white balance should be done before the matrix multiplication? This perplexes me, because usually white balance correction in an editor is done by the user towards the end of the processing chain: your editor gives you a debayered, colour-space-converted, defringed, rectilinearised image and then applies white balance, but if you don't like it you can do your own additional white balance, which is a simple RGB multiplication; or so it seems, since WB adjustment appears to be a lightweight operation. But to be consistent, any WB change by the user in an editor would have to be done before even debayering, and everything else would have to be done all over again for each user adjustment of WB. It sounds like that's what should correctly be done, right?

The reason I made this thread is that I first tried doing WB correction followed by the matrix multiplication, but in some cases it gives me rather dark purple highlights, so I thought there must be something wrong.

Here's my processing with WB followed by matrix multiplication: https://i.imgur.com/F3FjABw.png
Here's the libraw_dcraw_process() output with use_camera_wb set to 1 and highlight set to 0 (clip, so that means no reconstruction, right?): https://i.imgur.com/Y2uxhWl.png

If all 3 channels are maxed out, but WB correction boosts red and blue, and the matrix multiplication essentially increases the saturation, then I guess it makes sense that it would turn out that way; but clearly I'm missing something rather important to avoid getting purple street lamps.
