[unisog] large volume of files per filesystems

lbuchana at csc.com lbuchana at csc.com
Wed Dec 26 14:16:35 GMT 2001


In the responses so far, I have not noticed any mention of the tape drive
being a bottleneck.  If you cannot feed data to the tape drive fast enough
to keep it streaming, you will have horrible performance.  Any interruption
in the data stream causes the tape drive to stop, rewind, and wait for the
next tape block.  There is at least one tape drive on the market with a
variable write speed to reduce or eliminate this problem, but I have no
idea how well it (or they) work, as I have never seen one.

One method that I have used to reduce the number of times a tape drive has
to rewind during a backup is to use very large tape blocks.  How well this
works with modern hardware compression boards is something I have never
investigated.
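To make the large-tape-block idea concrete, here is a minimal Python sketch
of the re-blocking that a command like "dd obs=..." performs: it coalesces
an input stream into large fixed-size writes so the drive sees few, big
blocks instead of many small ones.  The 256 KiB block size and the function
name are my own illustrative choices, not anything from the original setup.

```python
BLOCK_SIZE = 256 * 1024  # illustrative 256 KiB tape block; tune per drive


def copy_in_blocks(src, dst, block_size=BLOCK_SIZE):
    """Copy src to dst, emitting only full-size blocks (the final
    block may be short).  Returns the number of bytes written."""
    written = 0
    buf = bytearray()
    while True:
        chunk = src.read(block_size)
        if not chunk:
            break
        buf += chunk
        # Flush every complete block as one large write.
        while len(buf) >= block_size:
            dst.write(bytes(buf[:block_size]))
            written += block_size
            del buf[:block_size]
    # Flush whatever is left as the final, possibly short, block.
    if buf:
        dst.write(bytes(buf))
        written += len(buf)
    return written
```

On a real system dst would be the non-rewinding tape device (typically
/dev/nst0 on Linux) rather than an ordinary file.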

Another issue to consider is reworking the application to reduce the number
of files.  At a user group meeting several years ago, a sys admin described
an application that was dealing with small gene fragments, and the user was
putting each fragment into a separate file.  The thrashing of opening and
closing thousands of files was killing system performance.  The sys admin
rewrote the user's application to use only two or three files.  The
application ran on the order of a thousand times faster and did not
interfere with other users of the system.
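A minimal sketch of that kind of rework, assuming the fragments are small
byte strings: instead of one file (and one open/close) per fragment, append
everything to a single data file and keep a small index of offsets.  The
function names and the index layout here are hypothetical, not taken from
the application described above.

```python
def write_fragments(fragments, path):
    """Append all fragments to one data file; return a list of
    (offset, length) pairs that serves as an in-memory index."""
    index = []
    with open(path, "wb") as f:
        for frag in fragments:
            index.append((f.tell(), len(frag)))
            f.write(frag)
    return index


def read_fragment(path, index, i):
    """Fetch fragment i by seeking in the single data file --
    no per-fragment files to open and close."""
    offset, length = index[i]
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)
```

The win comes from replacing thousands of file creations with sequential
appends to one file, which is also far kinder to a backup system.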

My real point is that you need to look at the entire system.

B Cing U
