Filmscanners mailing list archive (email@example.com)
[filmscanners] RE: 8bits vs. 16bits/channel: can the eye see the difference
> You seem to be totally missing the point. Images are MANY pixels not just
> one pixel.
I very well understand that.
> You can't look at a single pixel and decide the properties of
> the whole image.
No, but each pixel is related only to itself; it has no bearing on any
other part of the image.
> You may recall this whole thread started with
> Alex saying
> a patch of pure 127 is distinguishable from a patch of pure 128, therefore
> concluding that 8-bits is not enough. As Paul remarked and I concurred
> there are lots of potential grays representable in 8-bit between the two
> pure patches.
But where do these "pure" patches exist? Not in the real world! Of course
there are values that would exist in between, in the real world. I believe
you are confusing two worlds (frames of reference) here.
> 127.5 is easily represented by half the pixels = 127 and
> the other half = 128.
But that's not what happens in the real world. Some "sensors" might pick up
the 127.5 as being all 128, and others may pick it up as only being
127...and you'll get some distribution throughout.
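That distribution is easy to simulate. Here is a small sketch, with assumed values: a uniform patch whose true brightness is 127.5, read through Gaussian sensor noise (sigma of half a code value is my assumption) and rounded to 8-bit integer codes:

```python
import random

random.seed(0)

# Hypothetical illustration: a patch whose true level is 127.5, read by a
# noisy sensor. Each pixel gets independent noise, then is quantized
# (rounded) to the nearest 8-bit code value.
true_level = 127.5
noise_sigma = 0.5  # assumed noise amplitude, in code values

pixels = [round(true_level + random.gauss(0, noise_sigma))
          for _ in range(10_000)]

count_127 = sum(1 for p in pixels if p == 127)
count_128 = sum(1 for p in pixels if p == 128)
mean = sum(pixels) / len(pixels)
print(count_127, count_128)  # roughly equal shares of 127s and 128s
print(round(mean, 2))        # close to 127.5
```

Roughly half the pixels land on 127 and half on 128, with a few outliers; the patch average sits near the true 127.5 even though no single pixel stores it.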
> Doing this you can represent whatever number of
> grays you like.
If you are outputting, yes, I agree: that is called halftoning. But not if
you are inputting. Here is the original quote I responded to:
"But the level of noise in a real-world image, either from film grain or CCD
noise, is always greater than a least-significant-bit of an 8-bit value.
This means that finer gradations are indeed represented in an 8-bit image."
He is clearly talking about real-world images, not images made up in PS.
> The idea is that you are gaining tonal accuracy at the
> expense of spacial resolution, but we're talking about very gradual
> gradients where resolution is not important.
> The concept is always called DITHERING in the imaging world.
It depends on what you are talking about. What was actually being discussed,
though I called it aliasing, is technically quantization error, not
dithering. If you are talking about output, then yes, it can be dithering.
I believe you are confusing input and output processes; my understanding was
that the discussion was about working with data taken from a scan. And I
quote:
"If you scan a real image in 16-bit mode and there are more gradations
between say 128 and 129, even after converting to 8-bit mode there
will still be gradations between a pure 128 and a pure 129 patch."
And those "gradations" you believe exist don't; they are merely an artifact
of quantization error. Again, I believe you are confusing input and output.
> You can
> quote DSP definitions, but unless you apply them as is standard in the
> imaging industry you are not going to understand and be understood.
But I AM in the imaging industry. I design digital imaging equipment for a
living, and have been doing so for 25 years. No one I deal with uses the
terms you are using to mean what you believe they mean.
> Your DSP definitions can apply quite easily -- the dithering is simply
> adding that 1/2 LSB noise to the data and then truncate to 8bit. So:
> 127 goes to 127 +/- .5 goes to 127
> 127.5 goes to 127.5 +/- .5 goes to half 127, half 128
> 128 goes to 128 +/- .5 goes to 128
But that's not dithering, it's quantization error.
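For what it's worth, the quoted mapping (add plus-or-minus half an LSB of noise, then quantize) can be sketched as follows, whatever one chooses to call it. `dither_quantize` is a hypothetical helper name and the sample count is arbitrary:

```python
import random

random.seed(1)

def dither_quantize(value, n=10_000):
    """Add uniform noise of +/- half a code value to `value`, round each
    sample to the nearest integer, and return the mean of the results."""
    out = [round(value + random.uniform(-0.5, 0.5)) for _ in range(n)]
    return sum(out) / n

print(dither_quantize(127.0))  # stays at 127
print(dither_quantize(127.5))  # about half 127, half 128: mean near 127.5
print(dither_quantize(128.0))  # stays at 128
```

Integer inputs survive unchanged, while a half-code input splits roughly evenly between the two neighboring codes, which is exactly the quoted 127 / 127.5 / 128 table.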
> This is what Photoshop does in converting from 16 to 8 bit.
No it doesn't. It simply chops off the lower 8 bits; no noise is added and
no dithering is involved in converting 16 bits to 8 bits.
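A minimal sketch of that conversion as described, assuming straight truncation of the low byte (the function name is mine, for illustration):

```python
def truncate_16_to_8(sample16: int) -> int:
    """Reduce a 16-bit sample to an 8-bit code by discarding the low
    byte: no rounding, no added noise."""
    return sample16 >> 8

print(truncate_16_to_8(0x7FFF))  # 127
print(truncate_16_to_8(0x8000))  # 128
print(truncate_16_to_8(0x80FF))  # 128 -- the low byte is simply dropped
```

Every 16-bit value whose high byte is 0x80 collapses to the same 8-bit 128, which is the point being argued: truncation alone preserves none of the finer 16-bit gradations.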