Thanks for the quick reply!
imgdata.image is already cropped to the visible area.
Thank you Alex,
but is the coordinate with or without the border?
If it's with the border, is the border always the same size left/right and top/bottom, so the "real" picture is always centred?
And where is the viewport? Is it at top/left or bottom/left or even somewhere else?
Thanks once again!
The coordinate is (y * width) + x (unless you use the half_size option).
Values in the imgdata.image array are 16-bit, in linear space; the IrfanView values are, most likely, 8-bit and gamma corrected.
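For illustration, a rough C++ sketch of this indexing (a sketch only; it assumes dcraw_process() has already run, so imgdata.image holds the cropped image, and the names follow libraw.h):
LibRaw lr;
/* ... lr.open_file(...), lr.unpack(), lr.dcraw_process() ... */
int x = 100, y = 50;                          /* pixel of interest, inside the visible area */
int idx = y * lr.imgdata.sizes.iwidth + x;    /* (y * width) + x, as above */
unsigned short r = lr.imgdata.image[idx][0];  /* 16-bit linear red */
unsigned short g = lr.imgdata.image[idx][1];  /* 16-bit linear green */
unsigned short b = lr.imgdata.image[idx][2];  /* 16-bit linear blue */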
I do not think that falling back to an older model is safe. A newer camera may have a different sensor, different CFA filters, different calibration, or different raw adjustments in firmware. You cannot know that without analyzing the new raws.
Alex,
I have removed any .dll or .a library before compiling.
I just use libraw.a as soon as it is created.
(I am probably a dummy with libraries.)
In my project I just have this:
INCLUDEPATH += C:\LibRaw-0.19.5\libraw
LIBS += -LC:\LibRaw-0.19.5\lib -llibraw.a
and:
#include
I still have the trouble with the static library, but the '.exe' files in /bin work well!
Lucien
Finally: the old DNG SDK 1.4 (dated May 2012) is *not* compatible with LibRaw. The internals of dng_info are very different between the May 2012 and June 2015 versions; the older one does not have the fChainedSubIFD array.
Implementing support for the older version is just a big waste of time: it would require a lot of testing (already done for the modern DNG SDK version) with no benefit.
So you'll need to go the second route (via the patched modern DNG SDK 1.4); the DNG SDK provided w/ the GPR SDK will not fit.
Readmes are updated to reflect this.
Make sure you're linking with the (newly created) libraw.a, not the (older) libraw.dll.
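For a qmake project that would look something like this (a sketch based on the lines you posted; -lraw is what resolves to libraw.a, while -llibraw.a would not, and the extra -lws2_32 is an assumption taken from the MinGW makefile, so drop it if your build does not need it):
INCLUDEPATH += C:/LibRaw-0.19.5/libraw
LIBS += -LC:/LibRaw-0.19.5/lib -lraw -lws2_32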
Hello Alex,
Thanks a lot for your work on LibRaw and FastRawViewer!
To solve my problem of using the LibRaw library on Windows 8.2 / Qt / C++ / MinGW64, and according to your answer (if I understand it well),
I just did this:
> mingw32-make -f Makefile.mingw
This created the library libraw.a with my 64-bit MinGW compiler.
libraw.a is 1183 KB in the /lib folder.
In my minimal project,
I declare the library and the include/header and try to use it, as in my first message above:
LibRaw *processor = new LibRaw;
processor->open_file("C:/Test.cr2");
When I compile, I still have errors like: undefined reference to symbols...
I don't understand what is wrong with the linking.
Thanks Alex,
Lucien
BTW, mingw-w64 binaries for LibRaw are already available through MSYS2 and easily installable with pacman: https://packages.msys2.org/base/mingw-w64-libraw
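For the 64-bit toolchain the install is typically a one-liner (package name per the page above; adjust to your MSYS2 environment):
> pacman -S mingw-w64-x86_64-libraw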
Followup:
1) GPR SDK includes Adobe DNG SDK v1.4 dated May 2012 (according to $datetime in comments).
In this version, dng_info's fIFD and fChainedIFD are AutoPtr arrays of fixed size.
2) In Adobe DNG SDK v1.4 dated June 2015, the same fields are std::vector.
This is a very minor change and should be easy to adapt to, but I do not see any way to distinguish the two SDKs at compile time.
It could be made selectable by the user w/ an additional #define (like USE_OUTDATED_DNG_SDK :)
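Something along these lines (illustrative only; the accessors are assumptions based on the field types described above, not verified against either SDK release):
#ifdef USE_OUTDATED_DNG_SDK
  /* May 2012 SDK: fIFD is a fixed-size array of AutoPtr<dng_ifd> */
  dng_ifd *ifd = info.fIFD[index].Get();
#else
  /* June 2015 SDK: the same field is a std::vector */
  dng_ifd *ifd = info.fIFD[index];
#endif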
We'll try to solve this problem in a beta update (we need some time to set up a testbed with the outdated DNG SDK included w/ GPR).
Right now we've only updated the DNG/GPR readme files: https://github.com/LibRaw/LibRaw/commit/d86f980a5c7d30e7d553156861a108e1...
The LibRaw DLL that comes with the binary distributions is built using MS Visual Studio.
It is very likely that the C++ name mangling (AND the C++ library too!) is not compatible with gcc/MinGW.
You need to rebuild LibRaw using your compiler (Makefile.mingw should help).
Thank you for the feedback.
The current LibRaw (w/ the ability to extract any DNG subframe via the shot_select option) relies on DNG SDK 1.4 internals; it is not compatible w/ the DNG SDK included in the GPR SDK.
Looks like we need to update README.GoPro by removing this option:
I. GPR SDK comes with (patched) Adobe DNG SDK source. You may use this DNG SDK instead of
Adobe's one, or use standard Adobe's distribution.
We do not use mkdist/configure internally, but a cloned (not committed) Makefile.devel.
The mem_image_sample problem is fixed in this patch: https://github.com/LibRaw/LibRaw/commit/253f7ac76d03163497019302e3d1f967...
Exactly the same problem is present in our mem_image_sample: the channel count is also assumed to be 3 :)
We usually do not close threads (unless a lot of spam comes to a specific thread).
Thanks for your fast reply.
I've found what I'm doing wrong. I always assumed that the resulting image is 3-channel when copying it into my internal RGB buffer for further processing, but this one is obviously 1-channel. A very stupid and obvious error, and I found it within half an hour of submitting my first post. Now I check the value of rawImage->colors, and for 1-channel images I copy the value of the first channel into the other two. I'll do some performance tests; maybe your solution of setting the proper color space will be faster.
Interesting that digiKam gives a similar triple-image result when processing my file. digiKam uses LibRaw for the processing of DNG files, and its behaviour led my thoughts the wrong way. Possibly I should report an issue to the digiKam bugtracker.
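For reference, a rough sketch of that channel-expansion step (illustrative only; it assumes an 8-bit dcraw_make_mem_image() result, with rawImage being the libraw_processed_image_t pointer):
int npixels = rawImage->width * rawImage->height;
unsigned char *rgb = (unsigned char *)malloc(npixels * 3);
for (int i = 0; i < npixels; i++)
{
  if (rawImage->colors == 3)
    memcpy(rgb + i * 3, rawImage->data + i * 3, 3);               /* already RGB */
  else
    rgb[i * 3] = rgb[i * 3 + 1] = rgb[i * 3 + 2] = rawImage->data[i]; /* 1-channel: replicate */
}
/* ... use rgb, then free(rgb) and LibRaw::dcraw_clear_mem(rawImage) ... */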
Anyway, thanks again! The problem is solved. Should I somehow close this forum thread?
Thank you for the sample and for the detailed explanation.
Tested with the current LibRaw 0.20 (beta):
dcraw_emu produces a correct pgm file;
dcraw_emu -T makes a correct tiff file.
So, LibRaw::dcraw_process() and all previous steps are OK.
mem_image_sample.c fails to produce correct results; the problem is not in LibRaw::dcraw_make_mem_image() (it is also correct, see below), but in the sample source code:
1) write_ppm() does not handle the img->colors != 3 case and just returns. This is expected, but needs to be fixed.
2) write_jpeg() does not check the color count, but assumes that 3-color data is passed, which is wrong.
Quick fixes:
A. Replace:
cinfo.in_color_space = JCS_RGB; /* colorspace of input image */
with:
cinfo.in_color_space = img->colors==3?JCS_RGB:JCS_GRAYSCALE; /* colorspace of input image */
B. Replace:
row_stride = img->width * 3; /* JSAMPLEs per row in image_buffer */
with:
row_stride = img->width * img->colors; /* JSAMPLEs per row in image_buffer */
The fixed version will be uploaded to GitHub soon (likely tomorrow; we want to fix the write_ppm sample code too).
Here are the processing results (a pgm file from dcraw_emu and a .jpg from the fixed mem_image_sample): https://www.dropbox.com/sh/a6aksx5opyzeeyv/AACG3f9GCZ703UksKjc5H5Rta?dl=0
The C-API call you proposed may fit your specific needs, sure. It is not enough if someone wants to create a generic raw processing tool that handles non-Bayer data (or even if the row pitch is not raw_width*2).
The current C-API is limited to a single (but most frequently requested) task: creating a rendered RGB image from RAW. In general, it is possible to create C wrappers for each LibRaw class data field, but this is out of our goal and scope, sorry. LibRaw is a C++ library; there are no plans to make it fully functional when called from C.
Yes - except that there are significant differences between it and MSVC that would probably lead to a lot of pain trying to even get it to compile... I did try, but concluded the work involved would be too great, especially if I want to keep it up to date...
For example, its library does not have the _BitScanReverse function and there's nothing I can find on Google that shows someone has implemented it... I could have a go, but there will be other functions as well...
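For what it's worth, a portable stand-in is only a few lines (a rough sketch, not something from LibRaw; it mimics the intrinsic's contract of returning non-zero and writing the index of the highest set bit):
static unsigned char my_BitScanReverse(unsigned long *index, unsigned long mask)
{
  unsigned long i = 0;
  if (mask == 0)
    return 0;   /* no bit set; *index is left untouched (the intrinsic leaves it undefined) */
  while (mask >>= 1)
    i++;        /* count how far the highest set bit is above bit 0 */
  *index = i;
  return 1;
}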
Thanks for the quick response!
For this project, I need three things from the raw data: image height, image width, and the bitmap. There are already C calls for get_raw_height and get_raw_width -- all I need past that is the raw bitmap.
Black/white level, Bayer pattern, color data, and white balance data are all needed if you want to reconstruct a full-color original image -- and for that, yes, dcraw_process() is absolutely the right thing to call, because it manipulates the bitmap to reconstruct the image. This app doesn't need that level of processing. I also tried using raw2image, and even that appears to do some light debayering work and modify the base bitmap.
This project needs the raw, unmodified bitmap straight off the camera sensor. It doesn't need any of the white balance data or color information, nor is it trying to recreate the full-color picture. However, what's critical is the turnaround time between shutter snaps: between snaps, the app needs to extract the raw image data, review it for just a couple of things, make decisions, and get back to the business of taking pictures as quickly as it can.
The first pass at implementing libraw invoked the unprocessed_raw utility (using the -T switch) to convert the bitstream to a TIFF, and it does exactly what was needed. However, it also adds the overhead of writing to/reading from the disk twice, as well as wrapping the bitmap in a TIFF wrapper.
Ideally the app should get the image from the camera, extract the things it needs from the raw data in memory, and pass everything around in memory buffers. Adding that one routine to the API would streamline the entire process, and make that capability available for anyone else as well.
Wouldn't it be easier to recompile LibRaw using your compiler?
Thanks Alex
Unfortunately C++ Builder's C++ name mangling is not compatible with MSVC's. There are nasty ways to get an MSVC class in a DLL to be accessible, but I took the simple way out and extended the LibRaw C API to include the LibRaw_abstract_datastream gets() function, which seems to be all I need.
Andy
A pointer to the raw data itself is useless without access to metadata (black/white level, Bayer pattern, color data, white balance data, image size, etc., etc., etc.).
So you'll need to access all internals anyway.
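For illustration, accessing those internals from C++ might look roughly like this (a sketch only, assuming the whole file is already in a memory buffer and the sensor data is single-plane, so imgdata.rawdata.raw_image is non-NULL; error handling trimmed):
LibRaw lr;
lr.open_buffer(buf, size);                            /* parse metadata from a memory buffer */
lr.unpack();                                          /* decode the sensor bitmap, no demosaic/processing */
unsigned short *raw = lr.imgdata.rawdata.raw_image;   /* unmodified sensor values */
int w = lr.imgdata.sizes.raw_width, h = lr.imgdata.sizes.raw_height;
/* ... inspect raw[0 .. w*h-1], together with imgdata.color / imgdata.idata as needed ... */
lr.recycle();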
datap is a pointer to your datablock (context), for example a data table to be filled w/ the exif callback.
It is passed to the callback as the 1st parameter (void *; you'll need to explicitly convert the type before use).
The input datastream is passed to the callback as the last parameter (void *). You'll need it if you plan to read tag values (so your callback should be a C++ function in that case). If you only want to collect tags/tag types, you may ignore this last parameter.
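For illustration, the wiring might look roughly like this (a sketch only; the names here are made up, and you should check libraw.h for the exact callback typedef in your LibRaw version):
struct my_tag_table { /* whatever you want to collect */ };
static void my_exif_cb(void *context, int tag, int type, int len, unsigned int ord, void *ifp)
{
  my_tag_table *t = (my_tag_table *)context;  /* 1st parameter: your datap */
  /* to read the tag value, cast ifp back to LibRaw_abstract_datastream* (C++ only) */
}
/* ... later, when setting up the processor: */
LibRaw lr;
my_tag_table table;
lr.set_exifparser_handler(my_exif_cb, &table);  /* datap == &table */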
I know nothing about CodeGear C++, so I cannot help with it.