Filmscanners mailing list archive (filmscanners@halftone.co.uk)


[filmscanners] Re: 8bits vs. 16bits/channel: can the eye see the difference



on 3/16/03 6:15 PM, Karasev, Alexander at alexander.karasev@gs.com wrote:

> I think it is quite an assertion you are making there, Paul, that "the level
> of noise in a real-world image, either from film grain or CCD noise, is
> always greater than a least-significant-bit of an 8-bit value." There are
> situations when this is not the case. One perhaps is scanning a very
> fine-grained emulsion at a medium resolution; another, dealing with
> downsampled images (from medium or large format film?), and yet another,
> applying a smoothing algorithm to the sky and/or other areas that do not
> carry details or patterns. These are just a few off the top of my head.
>
> Besides, once it is established that the eye *can* see the difference
> between any two RGB24 levels, which this experiment does, the doubt as to
> the advantage of greater-than-8-bit color depth for input and output as
> well (not just processing) goes out the window. If your input is noisy, it
> could mask the limitations of an 8 bits/channel representation - or perhaps
> even coarser representations, depending on how bad the image noise is. But
> the better the image, the more obvious the proven-to-be-visible advantage
> of a 16 bits/channel representation becomes.
>
> Alex
>
>> Date: Fri, 14 Mar 2003 17:16:19 -0800
>> From: "Paul D. DeRocco"
>> But the level of noise in a real-world image, either from film grain or CCD
>> noise, is always greater than a least-significant-bit of an 8-bit value.
>> This means that finer gradations are indeed represented in an 8-bit image
>> through dithering. Your test isn't a fair test.
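
Paul's claim that roughly one LSB of grain or CCD noise effectively dithers
the signal can be checked numerically. This is a minimal NumPy sketch; the
Gaussian noise level and patch size are illustrative assumptions, not
measurements of any real film or sensor:

```python
import numpy as np

rng = np.random.default_rng(0)

# True scene values that fall *between* 8-bit codes: a ramp from
# 128.0 to 129.0 in 8-bit units, i.e. steps finer than one LSB.
true_vals = np.linspace(128.0, 129.0, 8)

# Hypothetical grain/CCD noise of about one LSB, then 8-bit rounding.
samples = true_vals[:, None] + rng.normal(0.0, 1.0, size=(8, 10_000))
quantized = np.clip(np.round(samples), 0, 255)

# Averaging each noisy "patch" of quantized pixels recovers the
# sub-LSB gradation that plain rounding alone would have destroyed.
recovered = quantized.mean(axis=1)
for t, r in zip(true_vals, recovered):
    print(f"true {t:8.4f} -> mean of 8-bit patch {r:8.4f}")
```

The patch means land within a few hundredths of the true sub-LSB values,
which is what "finer gradations represented through dithering" means in
practice: the eye averages over the patch just as the mean does here.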

Alex,

I think you missed the rest of Paul's statement.  As he says, "finer
gradations are indeed represented in an 8-bit image through dithering".

If you scan a real image in 16-bit mode and there are gradations
between, say, 128 and 129, then even after converting to 8-bit mode
there will still be gradations between a pure 128 patch and a pure
129 patch.  Photoshop creates these extra gradations by dithering the
transition between 8-bit values.  In fact, by default even the
gradient tool smooths the transition from one 8-bit value to the
next -- you don't get 256 discrete steps, you get a smooth gradient
from end to end.
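
The gradient point can be illustrated with a quick sketch. This NumPy
simulation compares plain 8-bit rounding against dithered quantization;
the ramp length and the +/-0.5 LSB uniform dither are assumptions for
illustration, not Photoshop's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# A smooth ramp sampled much finer than 8 bits can represent.
ramp = np.linspace(0.0, 255.0, 4096)

# Plain rounding to 8 bits: 256 flat bands, 255 visible steps.
banded = np.round(ramp)

# Dithered quantization: add +/-0.5 LSB of uniform noise before
# rounding, roughly what a smoothed gradient does.
dithered = np.clip(np.round(ramp + rng.uniform(-0.5, 0.5, ramp.size)), 0, 255)

# Count positions where adjacent pixels change value: the banded ramp
# steps only at the 255 band edges, while the dithered one flickers
# between neighboring codes, so no flat band is wide enough to see.
print(int(np.count_nonzero(np.diff(banded))))    # 255
print(int(np.count_nonzero(np.diff(dithered))))  # far more transitions
```

Both ramps use only 8-bit values, but the dithered one has no wide flat
bands, which is why it reads as a smooth end-to-end gradient.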

Roy

Roy Harrington
roy@harrington.com
Black & White Photography Gallery
http://www.harrington.com


----------------------------------------------------------------------------------------
Unsubscribe by mail to listserver@halftone.co.uk, with 'unsubscribe filmscanners'
or 'unsubscribe filmscanners_digest' (as appropriate) in the message title or body



 




Copyright © Lexa Software, 1996-2009.