A new LibRaw-contrib repository has been added to LibRaw's GitHub account. This repository is for code contributed by LibRaw users but not included in the main LibRaw source tree.
For now, only one project is present in the repo:
raw2tiff
This program converts a raw image (such as Canon's CR2 or Nikon's NEF) to a TIFF image. It accomplishes this using the LibRaw library available at www.libraw.org. It emulates the dcraw -D -4 -T command (unprocessed sensor data, linear 16-bit, TIFF output). It has only been tested with Canon CR2 files.
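For a rough idea of what emulating dcraw -D -4 -T involves, here is a minimal sketch (not the actual raw2tiff source from the repo, which may differ) that dumps the unprocessed 16-bit sensor mosaic to a grayscale TIFF using LibRaw together with libtiff:

#include <libraw/libraw.h>
#include <tiffio.h>
#include <cstdio>

int main(int argc, char **argv) {
    if (argc < 3) {
        fprintf(stderr, "usage: %s in.cr2 out.tiff\n", argv[0]);
        return 1;
    }

    LibRaw rp;
    if (rp.open_file(argv[1]) != LIBRAW_SUCCESS || rp.unpack() != LIBRAW_SUCCESS) {
        fprintf(stderr, "cannot decode %s\n", argv[1]);
        return 1;
    }

    // Undemosaicked CFA data; NULL for non-Bayer formats (e.g. Foveon).
    unsigned short *raw = rp.imgdata.rawdata.raw_image;
    if (!raw) return 1;
    const int w = rp.imgdata.sizes.raw_width;
    const int h = rp.imgdata.sizes.raw_height;

    TIFF *tif = TIFFOpen(argv[2], "w");
    if (!tif) return 1;
    TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, w);
    TIFFSetField(tif, TIFFTAG_IMAGELENGTH, h);
    TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 16);
    TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 1);
    TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_MINISBLACK);
    TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
    TIFFSetField(tif, TIFFTAG_ROWSPERSTRIP, TIFFDefaultStripSize(tif, 0));

    // One scanline per mosaic row, no scaling: -D (no demosaicking),
    // -4 (linear 16-bit), -T (TIFF container).
    for (int row = 0; row < h; row++)
        TIFFWriteScanline(tif, raw + (size_t)row * w, row, 0);

    TIFFClose(tif);
    return 0;
}

Such a program links against both libraries (e.g. -lraw -ltiff).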
Once in a while one may want to adjust the L channel while viewing it separately. The rub is that to do this without using extra layers, one needs either to use a grey L* profile as the grey working space in Photoshop Color Settings (Cmd/Ctrl-Shift-K), or to switch on Show Channels in Color in Photoshop Interface preferences (Cmd/Ctrl-K). Otherwise the brightness and contrast of the L channel display are wrong.
Here are some screen shots to illustrate why one might care.
For quite some time we have been suggesting that a floating-point implementation of demosaicking algorithms allows for higher-quality results. Incidentally, some programmers who for years vigorously insisted that integer processing is quite sufficient are now starting to code their demosaicking in floating point too. Here is a comparison of the results of the original AHD demosaicking algorithm implemented using floating-point and integer arithmetic.
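To make the arithmetic concrete, here is a toy sketch (the values and the two-stage averaging are illustrative, not the actual AHD steps): an integer pipeline rounds at every intermediate step, while a floating-point pipeline rounds once, at the final output:

#include <cstdio>

int main() {
    unsigned a = 1001, b = 1002, c = 1004;

    // Integer pipeline: each intermediate result is truncated.
    unsigned g1_int = (a + b) / 2;      // 1001 (exact value: 1001.5)
    unsigned g2_int = (g1_int + c) / 2; // 1002 (exact value: 1002.75)

    // Floating-point pipeline: precision is kept until the end.
    double g1_f = (a + b) / 2.0;        // 1001.5
    double g2_f = (g1_f + c) / 2.0;     // 1002.75

    printf("integer: %u  floating point: %.2f\n", g2_int, g2_f);
    return 0;
}

In a real demosaicking pipeline many more stages are cascaded, so these truncation errors accumulate.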
With the existing diversity of RAW converters and their algorithms, there is the problem of choice: which converters are better (and for which purposes)? An obvious methodology is often encountered in internet forums: take one or several images, process them using different converters/algorithms/settings, and compare the results visually. The conclusion often looks like this: image P is better processed using algorithm Q, while image A is better handled by algorithm Z with option f+.
Moreover, it is simply wrong to analyze things in terms of "worse" or "better". The correct formulation is "closer to" or "farther from" the initial image (a minimal sketch of such a distance measure follows the list below).
The problem is that we are dealing with a complex system here, which includes:
The photographed object and light.
The light path in the camera, with lens aberrations and light scattering inside the camera body.
The sensor with all its construction features: anti-aliasing filter, Bayer color filters, microlenses, etc.
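Measuring "closer to/farther from" presupposes an objective distance between a converter's output and a reference rendering of the same frame. Here is a minimal sketch of the idea; the plain RMSE metric and the synthetic buffers are assumptions made for illustration, and a serious comparison would use a perceptual measure (delta E, for instance) on aligned renderings:

#include <cmath>
#include <cstdio>
#include <vector>

// Root-mean-square error between two same-sized 16-bit renderings.
double rmse(const std::vector<unsigned short> &a,
            const std::vector<unsigned short> &b) {
    double acc = 0;
    for (size_t i = 0; i < a.size(); i++) {
        double d = double(a[i]) - double(b[i]);
        acc += d * d;
    }
    return std::sqrt(acc / a.size());
}

int main() {
    // Illustrative stand-ins for a reference rendering and a converter output.
    std::vector<unsigned short> ref(256, 10000), out(256, 10010);
    printf("distance from reference: %.1f\n", rmse(ref, out));
    return 0;
}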
A copy of the LibRaw internal SVN repository has been created on GitHub. All changes made to the master branch through Git will be incorporated into the main Subversion repo.
So, if you wish to participate in LibRaw development, you may get the full sources from GitHub, add your changes, commit, and send us a request to merge your changes into the main source tree, all using just the standard GitHub tools. You can also report a bug or make a feature request using the GitHub interface.
The current methods used to determine the sensitivity of digital cameras are not based on the RAW data coming from the sensor; rather, they are based on the results of processing the RAW data in a converter (be it an external converter or the in-camera one).
This approach, for all its simplicity, in fact depends on the properties of the RAW converter and on the transformations it applies to the RAW data. In particular, the converter can introduce hidden exposure compensation, change the tone curve, and so on. The camera sensitivity resulting from such a procedure is a fairly arbitrary value. The matter is discussed in good detail in the Wikipedia article on the ISO 12232 standard.
This approach allows camera manufacturers to enjoy all sorts of tricks when stating the sensitivity: say, cameras from different manufacturers with the same rated sensitivity can behave wildly differently when it comes to photographic latitude. This means that when switching between camera bodies, one often needs to re-adapt, changing the way one applies exposure compensation.
A simple experiment, requiring no equipment other than the camera and lens one already has, allows one to determine accurately how the camera exposes, that is (see the sketch after this list):
which level of signal (in terms of RAW data) is obtained when the exposure is set according to the exposure meter;
what headroom is left in the highlights, i.e., how many exposure stops lie between middle gray and sensor saturation.
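Once a gray card has been shot at the metered exposure, the computation itself reduces to stops = log2((saturation - black) / (midgray - black)). Here is a minimal sketch using the LibRaw API; the patch coordinates PX, PY, PW, PH are hypothetical and have to be picked by hand so that the patch covers the gray card in the particular frame:

#include <libraw/libraw.h>
#include <cmath>
#include <cstdio>

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s file.raw\n", argv[0]);
        return 1;
    }

    LibRaw rp;
    if (rp.open_file(argv[1]) != LIBRAW_SUCCESS || rp.unpack() != LIBRAW_SUCCESS)
        return 1;

    // Hypothetical patch covering the gray card in this particular shot.
    const int PX = 2000, PY = 1500, PW = 200, PH = 200;

    const unsigned short *raw = rp.imgdata.rawdata.raw_image;
    if (!raw) return 1; // Bayer data only in this sketch
    const int rw = rp.imgdata.sizes.raw_width;

    // Average the raw values over the patch (all CFA channels together;
    // a more careful version would average the green photosites only
    // and honor the per-channel black levels in imgdata.color.cblack).
    double sum = 0;
    for (int y = PY; y < PY + PH; y++)
        for (int x = PX; x < PX + PW; x++)
            sum += raw[y * rw + x];
    double midgray = sum / (double(PW) * PH);

    double black = rp.imgdata.color.black;   // black level
    double sat   = rp.imgdata.color.maximum; // saturation level

    // Headroom in stops between metered middle gray and saturation.
    double stops = std::log2((sat - black) / (midgray - black));
    printf("middle gray %.0f, saturation %.0f: %.2f stops of headroom\n",
           midgray, sat, stops);
    return 0;
}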