On Fri, 20 Jul 2001 17:37:43 +0200 Alessandro Pardi wrote:
> Now, it's absolutely reasonable that you'll never see any
> difference with usual tweakings in photographic images, yet the fact
> Dan reserves himself the right to convert the 16 bits image to 8 and
> back to 16, although only in extreme cases, proves that *working*
> bits may be worthy, no matter how much information you start with.
I read it the other way: that there's little value in working 8-bit data in
16 bits, and I think there's a certain amount of sleight-of-hand in that
reservation. Scans done in 16 bit have a distribution of pixel values that
is vastly wider than 8 bit's 0-255. For the sake of argument, say that the
scan has pixels of every value 0-65535. Rounding errors in 16 bit will be
correspondingly smaller throughout successive operations (<=0.5/65535 per
iteration). A final reduction to 8-bit will thus minimise rounding errors.
You cannot possibly end up with any accidental holes in the histogram by
this route, as all errors will be <0.5/256.
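To put a toy number on that argument (my own sketch, nothing from Dan's book or the earlier posts; the gain value and step count are arbitrary): the loop below applies a gain and its inverse several times, rounding to the working bit depth after every step, then reduces the result to 8 bits.

```python
def roundtrip(value, levels, gain=0.73, steps=4):
    """Apply gain then inverse gain `steps` times, rounding to the
    working bit depth (`levels`) after every operation, then reduce
    the result to 8 bits."""
    v = value
    for _ in range(steps):
        v = min(round(v * gain), levels - 1)
        v = min(round(v / gain), levels - 1)
    return round(v * 255 / (levels - 1))

# 8-bit working space: start from all 256 possible values
out8 = {roundtrip(v, 256) for v in range(256)}
# 16-bit working space: a full-range ramp, 256 evenly spaced samples
out16 = {roundtrip(v, 65536) for v in range(0, 65536, 257)}

# The 8-bit pipeline merges levels (holes in the histogram);
# the 16-bit pipeline delivers all 256 output levels intact.
print(len(out8), len(out16))
```

The per-step rounding error in 16 bit is a few units out of 65535, far below the 257-unit quantum of the final 8-bit reduction, which is why no levels are lost on that route.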
However, what Margulis proposes is different. Once you reduce bit depth to 8
bit you homogenise those values, so that your reconstituted 16-bit file
starts out with only 256 values per channel. You have discarded data
precision at the outset, though retained calculation precision for whatever
you do subsequently. He appears to be arguing that working with these 256
values in 16 bits doesn't work out to give any visible advantage, unless
extreme liberties are taken. I would not be even slightly surprised if he
is correct; the data precision is long gone.
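That point is easy to see in miniature (again my own sketch, and the gamma exponent is arbitrary): promote an 8-bit ramp to 16 bits (one common scaling is x257, so 255 maps to 65535), apply a tone move at full 16-bit precision, and count the distinct levels that come out.

```python
# Promote an 8-bit ramp to 16 bits (x257, so 255 -> 65535),
# then apply a gamma move at full 16-bit calculation precision.
promoted = [v * 257 for v in range(256)]
gamma = [round(65535 * (v / 65535) ** 0.8) for v in promoted]

# However precisely we calculate, no more than 256 distinct levels
# can come out, because only 256 went in.
print(len(set(gamma)))
```

The 16-bit working space spreads those 256 levels more finely, but it cannot recreate the data precision that the 8-bit reduction threw away.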
Still, all this is academic and makes assumptions about the 'purity' of
16-bit data which may be incorrect in practice. Like Margulis, I'd agree
that empirical evidence matters more than theory. I know I have managed to
produce posterised sky areas in 16 bit, even with modest manipulations.
Whether that is better or worse than if I'd used 8 bit I cannot say without
returning to the image and trying both.
http://www.halftone.co.uk - Online portfolio & exhibit; + film scanner