Filmscanners mailing list archive (filmscanners@halftone.co.uk)


Re: filmscanners: Best solution for HD and images

----- Original Message -----
From: "Austin Franklin" <darkroom@ix.netcom.com>
To: <filmscanners@halftone.co.uk>
Sent: Tuesday, November 13, 2001 2:52 PM
Subject: RE: filmscanners: Best solution for HD and images


You should know that striped disks not only reduce reliability, and hence
increase risk, but also increase severity: if any one drive in a striped set
fails, you WILL lose ALL of your data.

You should only use this arrangement if you keep very regular backups, use it
largely as a scratch area, or can relatively easily recreate the data.

If you don't believe me, read on; but I will end this here and will not be
replying to Austin's inevitable reply.

FOR AUSTIN (and any interested parties)
>What I do
> know, I know, and what I don't know, I know I don't know.

So why are you quoting statistical data when you don't understand basic
statistical analysis?

> > MTBF of a RAID-0 system (or dual cpu/memory where one unit CAN
> > NOT continue
> > without the other) will always be lower than a single drive unless the
> > standard deviation (they never quote SD) of the MTBF is zero.
> Well, if you take duty-cycle into account, which MTBF calculations do, you
> will actually get higher MTBF for RAID 0, simply because the main failure
> mode is the servo actuator, and when it is only being used for half the
> time, its MTBF will increase.
The servo actuator will be used less, but not anywhere near half the time: the
whole point of striping is that you use both drives at once. Any marginal
increase in the MTBF of each single drive will not save you from the early
device failure rate, which is a "perm any one from N" bet and, as any decent
turf accountant will tell you, shortens the odds.
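The "any one from N" point above can be put numerically. A minimal Python sketch, assuming independent drives that each fail with the same probability p over some period (the 5% figure is purely illustrative, not from any manufacturer):

```python
# Probability that at least one of n independent drives fails, given each
# fails with probability p over the period considered. In RAID-0, any
# single drive failing loses the whole array.
def any_failure(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (1, 2, 3, 4):
    print(n, round(any_failure(0.05, n), 4))
# 1 0.05
# 2 0.0975
# 3 0.1426
# 4 0.1855
```

Adding drives shortens the odds of an array-killing failure in exactly the way the turf-accountant analogy suggests.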

> > The reality for MTBF of a RAID-0 will lie in between.
> But that means it doesn't change compared to a single drive...
But each drive is dependent on the other, so the reliability of the system is
compromised by either drive failing.

> > Cumulative failure rate is a much more useful figure for us and
> > for a small
> > number of fairly reliable interdependent devices this is nearly
> > an additive
> > figure - but not quite.
> That I completely disagree with.  It is absolutely NOT additive.  In fact,
> as I pointed out above, you may get HIGHER reliability by using RAID 0
> simply because of duty cycle and the common failure mode, both of which are
> a very important part of MTBF.
> > Seagate reckon about 3.41% (flat-line model) will fail during the first
> > years of use (assuming you only use it for 2400 hours a year [6
> > 1/2 hours a
> > day]) :
> >
> > http://www.seagate.com/docs/pdf/newsinfo/disc/drive_reliability.pdf
> If you read that article you referenced, when they talk about multiple
> disks, they are talking about multiple PLATTERS in a single disk, not
> drives, so you can't derive the numbers you did for multiple drives from
> that article.  Nowhere in that article did they discuss multiple drives.
I suggest you go back to the link and look at pages 6 & 7 - the figures I
used were for drives, not platters. Whilst they don't discuss multiple drives,
simple statistical analysis will give you the answer. I suggest you go get a
school statistics book and have a look.
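Plugging Seagate's single-drive figure quoted above into that schoolbook calculation shows why the cumulative failure rate of two striped drives is "nearly additive - but not quite" (a sketch; only the 3.41% figure comes from the Seagate paper cited earlier):

```python
p = 0.0341                  # Seagate's single-drive cumulative failure rate
exact = 1 - (1 - p) ** 2    # probability at least one of two drives fails
additive = 2 * p            # naive additive approximation
print(round(exact, 4), round(additive, 4))
# 0.067 0.0682
```

The exact figure (about 6.7%) is just under the additive one (6.82%), and nearly double the single-drive risk.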

> > And there is enough misinformation
> > being thrown around here that it is just confusing everyone.
> You're right, even you are doing it!  There is also a LOT of
> available on the web.  The basics are typically right, but the in-depth
> understanding is usually lacking.

The problem with the web is that anybody who thinks they understand jumps up
and tells the world. Soon everybody believes it.

> > ...such as the fact
> > that RAID 0 is indeed less reliable than a single drive
> It's NOT a fact.  It's speculation.  Have you any test data to back that
> claim up?  I'd be willing to bet that is an incorrect claim, based on the
> reason I stated in previous posts.
I had a quick look for a reliable source and quickly noticed the weasel words
on the disk manufacturers' sites, which generally say this about RAID-0:
"great for speed, but if you have a problem you lose ALL of your data on all
of the disks". So they admit the severity but skip over the risk factor. I
don't suppose we should be too surprised, as they are, after all, trying to
flog you the disks!

So I'll leave it to an authority on RAID who doesn't have an interest in disk
manufacturing.


"The Mean Time Between Failure (MTBF) of the array is equal to the MTBF of
an individual drive, divided by the number of drives in the array" - it's
not exactly true, but as a rough, simple approximation it is all but correct.
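That rule of thumb follows if drive lifetimes are modelled as exponential (constant failure rate): the array fails when the first drive does, and the combined failure rate is the sum of the individual rates. A minimal sketch, with an illustrative 500,000-hour drive MTBF (not a figure from any source above):

```python
# With constant per-drive failure rate lam = 1/MTBF, an n-drive RAID-0
# fails at rate n*lam, so the array MTBF is the drive MTBF divided by n.
def array_mtbf(drive_mtbf_hours: float, n_drives: int) -> float:
    return drive_mtbf_hours / n_drives

print(array_mtbf(500_000.0, 2))
# 250000.0
```

For real drives the failure rate is not constant (early "infant mortality" and late wear-out), which is why the quote above is only roughly right.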


