Could you please explain the 12 vs 14 bit difference for the Sony cRAW format:
1) Data is stored at 8 bits per pixel overall (16-pixel blocks: an 11-bit base value plus 7-bit deltas; see the decoding sketch below)
2) After decompression: 11-bit non-linear data
3) After the linearization curve is applied: data with a range of 0...~17000, i.e. slightly more than 14 bits
The file format is the same for 12-bit ADC cameras (e.g. the A700), a real 14-bit ADC (e.g. the Sony A7R), and a 14-bit ADC in 12-bit mode (the A7R in electronic shutter mode).
-- Alex Tutubalin @LibRaw LLC
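
To make the block layout concrete, here is a minimal C++ sketch of unpacking one 16-byte (128-bit) cRAW block into 16 pixels of 11-bit non-linear data. The exact bit layout (an 11-bit max and min, their 4-bit positions, and a range-dependent shift applied to the 7-bit deltas) is an assumption taken from the publicly available dcraw/LibRaw ARW2 decoder, not something stated in this post:

#include <cstdint>

/* Read 7 bits starting at bit offset 'bit' within a 16-byte block. */
static unsigned get7(const uint8_t *dp, int bit)
{
    int byte = bit >> 3, shift = bit & 7;
    unsigned v = dp[byte];
    if (shift > 1)                     /* value spills into the next byte */
        v |= (unsigned)dp[byte + 1] << 8;
    return (v >> shift) & 0x7f;
}

/* One block: 11-bit max, 11-bit min, 4-bit index of max, 4-bit index
   of min, then fourteen 7-bit deltas for the remaining pixels. */
static void decode_craw_block(const uint8_t dp[16], uint16_t pix[16])
{
    uint32_t hdr = dp[0] | dp[1] << 8 | (uint32_t)dp[2] << 16 |
                   (uint32_t)dp[3] << 24;
    unsigned max  =  hdr        & 0x7ff;
    unsigned min  = (hdr >> 11) & 0x7ff;
    unsigned imax = (hdr >> 22) & 0x0f;
    unsigned imin = (hdr >> 26) & 0x0f;
    int sh, i, bit = 30;

    /* The delta step grows with the block's dynamic range: wide-range
       blocks trade precision for reach. */
    for (sh = 0; sh < 4 && (0x80u << sh) <= max - min; sh++)
        ;
    for (i = 0; i < 16; i++) {
        if (i == (int)imax)      pix[i] = max;
        else if (i == (int)imin) pix[i] = min;
        else {
            pix[i] = (get7(dp, bit) << sh) + min;
            if (pix[i] > 0x7ff) pix[i] = 0x7ff;  /* clamp to 11 bits */
            bit += 7;
        }
    }
    /* Applying the camera's linearization curve to these 11-bit values
       then yields the 0...~17000 linear range mentioned above. */
}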
OK, sorry for this dummy question indeed.
So my question now is: how does LibRaw know the constant factor to apply to normalize to 16 bits?
Sorry if it is a dummy question again.
The imgdata.color.maximum variable holds the maximum of the data range (this value is not black-level-subtracted).
-- Alex Tutubalin @LibRaw LLC
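
For example, a minimal sketch of building a 16-bit scale factor from it (the file name is hypothetical, and whether you subtract the black level first depends on your pipeline):

#include <libraw/libraw.h>

int main()
{
    LibRaw proc;
    if (proc.open_file("photo.ARW") != LIBRAW_SUCCESS) return 1;
    if (proc.unpack() != LIBRAW_SUCCESS) return 1;

    // imgdata.color.maximum is the top of the raw data range and is
    // not black-level-subtracted, so remove the black level from both
    // the sample and the maximum before scaling.
    int maximum = proc.imgdata.color.maximum;
    int black   = proc.imgdata.color.black;
    double scale = 65535.0 / (maximum - black);

    // Normalize one raw sample to 16 bits as an illustration.
    int v = (int)proc.imgdata.rawdata.raw_image[0] - black;
    if (v < 0) v = 0;
    unsigned short out16 = (unsigned short)(v * scale + 0.5);

    (void)out16;
    return 0;
}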
Thanks a lot, that was what I was looking for.
Many thanks