I sent a project to your email. Please note that checkMetadataValue is written so that if the value is absent it returns "Missing". Also note that even though it extracts the altitude, after failing to extract longref it stops and does nothing with the rest of the file.
Also, I am concerned about other GPS fields; for instance, longitude returns zeros no matter which image I try.
Please upload the file somewhere (Dropbox, googledrive, etc) and
- either post a link here
- or send it directly to lexa@libraw.org
altref possible values are 0 or 1: http://www.sno.phy.queensu.ca/~phil/exiftool/TagNames/GPS.html
I can't find a way to attach a file, but here is sample code that fails. I am extracting a lot of information, so I am putting everything into a stream.
#include "libraw/libraw.h"
#include <cstdio>
#include <sstream>
#include <string>
void getGPSinfo(const std::string &fileName)
{
    LibRaw proc;
    std::ostringstream stream;
    // open_file() takes a C string, not a std::string
    proc.open_file(fileName.c_str());
    libraw_gps_info_t GPSInfo = proc.imgdata.other.parsed_gps;
    stream << "\"GPSAltitudeRef\"" << "->" << std::to_string((int)GPSInfo.altref);
    FILE *pf = fopen("somePath\\debug.txt", "w");
    if (pf) {
        // never pass arbitrary text as the printf format string
        fprintf(pf, "%s", stream.str().c_str());
        fclose(pf);
    }
    proc.recycle();
}
Could you please share a sample file for analysis?
imgdata.color.cam_xyz[] is exactly the same as Adobe DNG ColorMatrix2. It converts from XYZ to camera space.
I can reverse engineer the Adobe code, but I still need to know what exactly the cam_xyz matrix converts to. It converts camera space to XYZ, but is that with respect to a reference white or not? If so, which one? Hopefully this question makes sense.
cam_mul are in 'camera color space'.
For converting WB multipliers to/from 'color temperature', please look into the Adobe DNG SDK source (or into the RawTherapee source); it is a shorter way than translating Adobe code into English in this forum thread.
convert_to_rgb operates in linear space (from raw color space to linear output).
Final gamma conversion is done in the dcraw_make_mem_image() or dcraw_ppm_tiff_writer() calls.
Good evening Alex. Would you kindly guide me a few steps forward with your suggestion? In dcraw I can find the place where the different color spaces are written. If I want to create something like a linear-to-log conversion, where would I change the numbers, and where would I find these figures?
Thanks in advance
Thanks a lot for the pointer. I'll see what I can come up with here.
Thanks again.
/D
Yes, the postprocessing code (after raw data read/unpack) is completely imported from dcraw, without much functional improvement.
Both 1D and 3D LUTs look easy to implement, assuming you have working code that replaces pixel values in place.
Assuming you're working in linear space in 'camera color', the best place to implement your code is the convert_to_rgb_loop() function. It gets a linear profile (out_cam[3][4]) that converts from camera space to the output RGB space. You may replace it with your own code.
This function is already 'virtual', so it is very easy to implement any color conversion in a derived class.
I see. It was worth a try :). For what it's worth, the same limitations occur in dcraw, so I assume the code comes from there.
Sorry for all the questions, but do you perhaps know a starting point for adding 1D and/or 3D LUTs in dcraw/LibRaw?
Sorry, dcraw_emu is a 'demo for library users' (that is, for app developers), not an end-user tool.
It is very limited in functionality.
I am curious whether it is possible to add any other info to the output, such as the color space? A bit contradictory, one would think, since an ICC profile should contain color space information.
Yes, I did. It works (I think) and applies the ICC profile. Gamma is correct, but the colors are faded.
My output command goes like this:
dcraw_emu -T -a -p profile.icc my.dng
My output when running dcraw_emu in the terminal looks like this:
LibRaw-0.17.2/bin/.libs/dcraw_emu [OPTION]... [FILE]...
-c float-num  Set adjust maximum threshold (default 0.75)
-v  Verbose: print progress messages (repeated -v will add verbosity)
-w  Use camera white balance, if possible
-a  Average the whole image for white balance
-A <x y w h>  Average a grey box for white balance
-r <r g b g>  Set custom white balance
+M/-M  Use/don't use an embedded color matrix
-C <r b>  Correct chromatic aberration
-P <file>  Fix the dead pixels listed in this file
-K <file>  Subtract dark frame (16-bit raw PGM)
-k <num>  Set the darkness level
-S <num>  Set the saturation level
-n <num>  Set threshold for wavelet denoising
-H [0-9]  Highlight mode (0=clip, 1=unclip, 2=blend, 3+=rebuild)
-t [0-7]  Flip image (0=none, 3=180, 5=90CCW, 6=90CW)
-o [0-5]  Output colorspace (raw, sRGB, Adobe, Wide, ProPhoto, XYZ)
-o file  Output ICC profile
-p file  Camera input profile (use 'embed' for embedded profile)
-j  Don't stretch or rotate raw pixels
-W  Don't automatically brighten the image
-b <num>  Adjust brightness (default = 1.0)
-q N  Set the interpolation quality:
      0 - linear, 1 - VNG, 2 - PPG, 3 - AHD, 4 - DCB,
      5 - modified AHD, 6 - AFD (5-pass), 7 - VCD, 8 - VCD+AHD, 9 - LMMSE,
      10 - AMaZE
-h  Half-size color image (twice as fast as "-q 0")
-f  Interpolate RGGB as four colors
-m <num>  Apply a 3x3 median filter to R-G and B-G
-s [0..N-1]  Select one raw image from input file
-4  Linear 16-bit, same as "-6 -W -g 1 1"
-6  Write 16-bit linear instead of 8-bit with gamma
-g pow ts  Set gamma curve to gamma pow and toe slope ts (default = 2.222 4.5)
-T  Write TIFF instead of PPM
-G  Use green_matching() filter
-B <x y w h>  Use cropbox
-F  Use FILE I/O instead of streambuf API
-timing  Detailed timing report
-fbdd N  0 - disable FBDD noise reduction (default), 1 - light FBDD, 2 - full
-dcbi N  Number of extra DCB iterations (default - 0)
-dcbe  DCB color enhance
-eeci  EECI refine for mixed VCD/AHD (q=8)
-esmed N  Number of edge-sensitive median filter passes (only if q=8)
-acae <r b>  Use chromatic aberrations correction
-aline <l>  Reduction of line noise
-aclean <l c>  Clean CFA
-agreen <g>  Equilibrate green
-aexpo <e p>  Exposure correction
-dbnd <r g b g>  Debanding
-mmap  Use mmap()-ed buffer instead of plain FILE I/O
-mem  Use memory buffer instead of FILE I/O
-disars  Do not use RawSpeed library
-disinterp  Do not run interpolation step
-dsrawrgb1  Disable YCbCr to RGB conversion for sRAW (Cb/Cr interpolation enabled)
-dsrawrgb2  Disable YCbCr to RGB conversion for sRAW (Cb/Cr interpolation disabled)
-disadcf  Do not use dcraw Foveon code (if compiled with demosaic-pack-GPL2)
Have you compiled dcraw/LibRaw with lcms?
OK, thanks for the answers.
By the way, maybe you can answer this: I've been exploring the use of ICC profiles in dcraw and also tried this in dcraw_emu. It works with -p [profile.icc], but I never get the colors right; it's always washed out, almost as if dcraw is defaulting to raw color when using ICC profiles. When trying the same profiles in Photoshop or Lightroom, the colors are always right. It would be nice to have your point of view on this issue.
Thanks again.
/D
Unfortunately, there is no piping in dcraw_emu.
Hi!
I just compiled and tried out some different things in the bin folder. I checked dcraw_emu; I am used to working with the piping option in dcraw:
-c Write image data to standard output
Does piping work somehow with LibRaw, or what's the reason it doesn't?
Thanks
/D
White balance and black level data can be retrieved after open_file (cam_mul, black and cblack[]).
Unfortunately, auto-brightening uses local variables; you need to modify the copy_mem_image() code to save the parameters somewhere if you use the dcraw_make_mem_image() call.
Alex thanks for your support,
Turning off the automation gives me very bad images compared to the automated ones, so I would like to keep the automation but use the first image's parameters for the rest of the set. Is there any way to retrieve the parameters computed by LibRaw (not the flags) and use them for the next raw image? I don't want the parameters to be recomputed for every raw file.
Regards
To completely turn off any 'automation', you need to:
- use the same white balance coefficients (set params.user_mul to the same values to make sure)
- use the same black level (params.user_black)
- use the same saturation level (params.user_sat), or do not use automated saturation calculation (params.adjust_maximum_thr = 0)
- not use auto-brightening (as described above)
The images are a series of photos. Even if I avoid the auto parameters, I still see a significant difference in radiometry, so I suspect some parameters depend on the image content; in that case the nature of each image is taken into account, which gives me radiometrically very different results. To avoid this, I need to understand which processing steps use the image content and which do not. Could you provide me with something on this, please?
You may set imgdata.params.no_auto_bright to non-zero to avoid auto-brightening.
Different camera vendors use very different makernotes format, so there is no generic interface in LibRaw to get makernotes access.
Some makernotes data (lens, shooting info) is parsed and stored in imgdata.lens and imgdata.makernotes.
Again, many fields are vendor-specific, and you need to write some code to deal with this data.