Filmscanners mailing list archive (filmscanners@halftone.co.uk)


[filmscanners] Re: Moire - Scanners vs Digicams



Hi Al,

I believe both Austin and David have provided you with information on
the specifics of why moiré is more likely to occur on a digicam image
than a scanned film image.

I do not know how much of the technical jargon provided was
comprehensible, so I will try to simplify the message, although my
explanation may take more words to accomplish.

Both digital cameras and film scanners are monochromatic: they really
record only grayscale (luminance) information.  However, as we all
know, they seem to reproduce color images.  How is that possible?  Both
use color separation filters to accomplish that feat, but the way they
do so is quite different.

As you probably know, color separation is accomplished by placing color
filters over the sensors and then designating, in software or firmware,
that each sensor records luminance information for that color only.  By
measuring the amount of light that falls on the sensor after passing
through a red, a green, or a blue filter, and then bringing those three
data values together, the color of the light that was projected on the
sensor can be determined.

So, if a sensor has a red filter over it, it can measure the amount of
red light in the light projected upon it (through the filter), and so
on with green and blue.
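As a rough sketch of that idea (the function name and values are mine, not anything from a real camera), three filtered luminance readings combine into one color value like this:

```python
# Sketch: a sensor only measures brightness; the filter over it decides
# which channel that brightness belongs to.  Three filtered readings
# taken at the same spot assemble into one RGB color.

def combine_readings(red_reading, green_reading, blue_reading):
    """Assemble one color pixel from three single-channel readings."""
    return (red_reading, green_reading, blue_reading)

# A bright yellow patch: lots of red and green light, little blue.
pixel = combine_readings(240, 230, 20)
print(pixel)  # (240, 230, 20)
```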

With a film scanner (or a flatbed, for that matter) one of two methods
is likely to be used.  In the vast majority of scanners there are
three "strips" of CCD sensors.  Each strip is covered with one of
the three filter colors.  As the slide or neg (or the reflected image,
in the case of a flatbed) is projected onto the CCD strip through the
filter above it, each sensor point records the value of the light of
that filter color hitting it.  Each point for which a sensor exists
(say, 2820 per inch) is "read" by each of the three strips, and the
three data values are brought together so that each sensor point is
expressed as a value for RED, GREEN and BLUE light.  Thus each sensor
point is given a specific color value.  So, in this case, the film's
color values are recorded 2820 times per inch in both dimensions.
That's a lot of sampling.

The second method is similar, and is used by Nikon film scanners.  It
uses one CCD strip with no filter over it, but the color of the light
projected through the film changes three times, from RED to GREEN to
BLUE, and the CCD sensor at each point takes a reading for each color.
You get the same number of samples and the same information.

Now, let's look at how most Bayer-patterned cameras work.  Again,
remember that the CCD sensor still reads only luminance values, not
color.  In order to allow the camera to record color, we again have to
make color separations.  In a perfect world, each data point would get
3 readings: one for Red, one for Blue and one for Green, using filters
in front of the sensors to make the color separations.  But since
digital cameras have to operate much faster, they need a whole chip of
sensors to capture the entire image in one exposure, unlike scanners,
which use just a strip of sensors one sensor wide that is moved (or the
film is moved) until it covers the whole image (as you know, this can
take several minutes).  To do this with a digital camera would require
three CCD chips, each with a color separation filter in front of it
(R, G and B), plus a beam splitter to distribute the light to each of
the three CCDs.  That would make the camera much larger and more
costly, and might even slow it down quite a bit, since the light
hitting each CCD is cut to a third or less.

So, instead the Bayer pattern is used.  This is a filter that has blocks
of color (R, G or B) the exact size of each sensor in the CCD.

This pattern is designed as follows:

RGRGRGRGRGRGR
GBGBGBGBGBGBG
RGRGRGRGRGRGR
GBGBGBGBGBGBG

In this case, R=Red, G=Green and B=Blue.
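That layout is regular enough to generate with a couple of lines of code.  A small sketch (my own illustration, not anything from a camera maker) that reproduces the rows shown above:

```python
# Sketch: the Bayer layout repeats in 2x2 blocks, so the filter color
# over any sensor follows directly from the parity of its row/column.

def bayer_color(row, col):
    """Return which filter color sits over the sensor at (row, col)."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'   # even rows: R G R G ...
    else:
        return 'G' if col % 2 == 0 else 'B'   # odd rows:  G B G B ...

for row in range(4):
    print(''.join(bayer_color(row, col) for col in range(13)))
# RGRGRGRGRGRGR
# GBGBGBGBGBGBG
# RGRGRGRGRGRGR
# GBGBGBGBGBGBG
```

Counting the letters confirms the first problem mentioned below: exactly half of all sensor sites are Green, twice as many as Red or Blue.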

As you can readily see, a couple of problems exist.

One, there are twice as many sensors with a Green color separation
filter in front of them as with either Red or Blue.

Two, any one sensor location only gets information about one of the colors.

The advantage of this system is that it requires only one CCD chip, and
the image can be captured rapidly with no mechanical changes or filter
changes.  The disadvantage is that each sensor point gets only one
piece of the color image data.  So a Red-covered sensor has to guess
what the Green and Blue values were, a Blue-covered sensor has no idea
what the Red and Green values were at that exact spot, and the same
holds true for Green, which covers half the total number of sensors.

This requires the image to be processed with algorithms that look at
surrounding data to make reasonable color guesses (interpolation) about
those missing data points; with sophisticated software or firmware, a
reasonable facsimile can be created.
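The simplest version of that guessing is bilinear interpolation.  A minimal sketch (my own, real cameras use far more sophisticated algorithms): a red-filtered site never measured green, so estimate green there as the average of the green neighbors above, below, left, and right.

```python
# Sketch: estimate the missing Green value at a Red-filtered sensor site
# by averaging the measured Green neighbors (bilinear interpolation).

def estimate_green_at_red(green_values, row, col):
    """green_values: 2D list where Green sites hold readings, other
    sites hold None.  Returns the average of the measured neighbors."""
    neighbors = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < len(green_values) and 0 <= c < len(green_values[0]):
            v = green_values[r][c]
            if v is not None:
                neighbors.append(v)
    return sum(neighbors) / len(neighbors)

# In a Bayer layout, all four neighbors of a Red site are Green sites.
grid = [[None] * 5 for _ in range(5)]
grid[1][2], grid[3][2], grid[2][1], grid[2][3] = 10, 20, 30, 40
print(estimate_green_at_red(grid, 2, 2))  # 25.0
```

When the neighbors really do straddle a smooth color gradient this guess is good; when they straddle a sharp edge, the average is wrong, which is exactly where the artifacts discussed next come from.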

Where this system tends to be weak and create artifacts is where there
are many contrast or color changes with sharp delineations.  It is in
those areas that moirés or other artifacts can be built into the image
file because, simply put, the interpolation amplifies these gaps in
"color knowledge".  This particularly shows up with a repeated pattern
whose frequency is close to the spacing/size of the sensors.  An
example might be a close-up photograph of a cloth weave where every
other thread alternates color.
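A toy sketch of that weave example (the numbers are mine, purely for illustration): when a cloth repeats every 2 threads but the sensors sample only every 3rd thread, the fine weave shows up as a much coarser stripe pattern.  That false low-frequency pattern is the aliasing behind moiré.

```python
# Sketch: aliasing from a pattern finer than the sensor spacing.
# The cloth alternates color every thread (period: 2 threads), but the
# sensors sample only every 3rd thread.

threads = ['red', 'blue'] * 12   # fine alternating weave, 24 threads
samples = threads[::3]           # sensor pitch: every 3rd thread
print(samples)
# ['red', 'blue', 'red', 'blue', 'red', 'blue', 'red', 'blue']
```

The samples still alternate, but each sampled "stripe" now spans 3 threads of real cloth: the sensor reports a stripe pattern three times coarser than the weave actually is.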

The only type of sensor chip I know of in a digital camera where this
type of moiré artifact is unlikely to occur (without using some sort
of cut-off or anti-aliasing filter to diminish it, which also softens
the image) is the Foveon chip, which takes an R, G and B value from
each sensor point using a unique patented method based on transparent
layered sensors.

Art



Al Bond wrote:
> Hi folks,
>
> I was looking at some 5 megapixel (Canon G5) sample images to get
> some idea how they might compare against my 2820 dpi scanner.  I know
> I have read in the past that, despite the large difference in file sizes and
> pixel counts, a 5MP camera isn't as far behind a 2820 dpi scanner as it
> might seem.
>
> Certainly the sample G5 images looked reasonable but, on one image of
> a house, the fine roof tile details had generated a fairly obvious moire
> pattern.  This is something I have never come across on any of my film
> scans.
>
> Have I just got lucky on my scans or is moire more likely to happen with
> digicams?  Also, does the processing affect this? The G5 image in
> question was available on the Web as both a camera produced Jpeg and
> as a RAW file: when I processed the RAW file in Vuescan to produce a
> 16 bit TIFF and applied edge sharpening in Photoshop, the moire was
> more noticeable than in the Jpeg - but there was also much more fine
> detail than in the Jpeg.
>
> I'm just curious to know why there seems to be this apparent difference
> between digicams and scanners!
>
>
>
> Al Bond
>
>

----------------------------------------------------------------------------------------
Unsubscribe by mail to listserver@halftone.co.uk, with 'unsubscribe 
filmscanners'
or 'unsubscribe filmscanners_digest' (as appropriate) in the message title or 
body



 




Copyright © Lexa Software, 1996-2009.