Re: [SLUG] files vs filesystem question

From: Derek Glidden (dglidden@illusionary.com)
Date: Fri Apr 19 2002 - 14:50:07 EDT


On Fri, 2002-04-19 at 13:28, Mikes work account wrote:
>
> I have a couple of very large files ( in excess of 1 GB each) and I was
> wondering that if in a database that uses B-Tree indexing is it advantageous
> to put each of the large files in its own filesystem and would it help if
> that filesystem was XFS which also uses B-tree indexing??

Not really. The structure of the file and the structure of the
filesystem have little relation to each other.

XFS's use of B-trees is in its inode and directory structures. It makes it
much faster to look up where a particular file is stored on the
filesystem. The db file's internal B-trees serve a different purpose:
they let whatever application is accessing the file ask for the
appropriate blocks inside it without having to scan the entire file.
Once the filesystem knows where the file is, and the application has
opened it, it's just a matter of asking the filesystem for particular
blocks inside the file, which is pretty straightforward work mostly
handled by Linux's block interface.
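To make that concrete, here's a small sketch (in Python, purely illustrative; the 4096-byte block size is an assumption) of the point above: once a file is open, reading "block N" is just a seek to N * block_size plus a read, with no scan of the data before it.

```python
import os
import tempfile

BLOCK_SIZE = 4096  # assumed block size, for illustration only

# Build a throwaway file of 10 blocks, each filled with its block number.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    for n in range(10):
        f.write(bytes([n]) * BLOCK_SIZE)

# Jump straight to block 7 -- the kernel's block layer resolves the
# offset; nothing before block 7 is ever read.
fd = os.open(path, os.O_RDONLY)
block = os.pread(fd, BLOCK_SIZE, 7 * BLOCK_SIZE)
os.close(fd)
os.remove(path)

print(block[0])  # -> 7
```

A database's internal B-tree is what tells the application *which* offset to ask for; the filesystem's own B-trees only helped find the file in the first place.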

The advantage of XFS for very large files is that it's a 64-bit
filesystem, so it supports files larger than 2GB easily, and it's
designed for speedy operations on large files. Plus its journaling of
metadata updates helps keep writes fast on files that are frequently
modified.
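The 2GB figure is the old signed 32-bit offset limit (2^31 bytes). A quick sketch of what "64-bit filesystem" buys you: seeking and writing past that boundary just works. (This creates a sparse file, so it doesn't actually consume 2GB of disk.)

```python
import os
import tempfile

# 2**31 is the signed-32-bit limit that capped files at 2GB on older
# filesystems and interfaces.  Seek just past it and write.
offset = 2**31 + 4096

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    f.seek(offset)          # sparse seek -- no real blocks allocated yet
    f.write(b"beyond 2GB")

size = os.path.getsize(path)
os.remove(path)

print(size > 2**31)  # -> True
```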

Separate partitions should not have much effect on performance, since
XFS is designed for large partitions and represents its filesystems as a
series of "extents", which effectively partition up large filesystems
anyway. Plus the more space you give to a partition, the more room
there is for the journal to grow before it must be flushed, which will
also help performance.
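For a feel of what an extent map is, here's a toy sketch: instead of tracking every block individually, the filesystem records runs of contiguous blocks as (logical start, physical start, length) triples. All names and numbers here are illustrative, not XFS's actual on-disk format.

```python
# Toy extent map: each entry maps a run of logical file blocks to a run
# of physical disk blocks.  Two extents cover 192 logical blocks with
# only two records, instead of 192 per-block pointers.
extents = [
    (0,  1000, 64),   # logical 0..63   -> physical 1000..1063
    (64, 5000, 128),  # logical 64..191 -> physical 5000..5127
]

def logical_to_physical(lblock):
    """Resolve a logical block number to its physical block number."""
    for lstart, pstart, length in extents:
        if lstart <= lblock < lstart + length:
            return pstart + (lblock - lstart)
    raise ValueError("hole or out of range")

print(logical_to_physical(100))  # -> 5036
```

On a real XFS filesystem you can inspect a file's actual extent layout with `xfs_bmap -v <file>`.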

FYI - the official "1.1 Release" version of the XFS patches was
recently released by SGI against the 2.4.18 kernel. I've been using it
and it's better than ever. Mostly little performance and stability
improvements and a number of bug fixes, but they also finally fixed the
long-outstanding "unlink operations are always synchronous" issue, so
deleting large numbers of files is MUCH quicker than with older
releases. I think they've also got the RedHat 7.2 XFS installer ready,
or almost ready, so you can just d/l their installer and build a
RedHat system with all XFS filesystems.

-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
#!/usr/bin/perl -w
$_='while(read+STDIN,$_,2048){$a=29;$b=73;$c=142;$t=255;@t=map
{$_%16or$t^=$c^=($m=(11,10,116,100,11,122,20,100)[$_/16%8])&110;
$t^=(72,@z=(64,72,$a^=12*($_%16-2?0:$m&17)),$b^=$_%64?12:0,@z)
[$_%8]}(16..271);if((@a=unx"C*",$_)[20]&48){$h=5;$_=unxb24,join
"",@b=map{xB8,unxb8,chr($_^$a[--$h+84])}@ARGV;s/...$/1$&/;$d=
unxV,xb25,$_;$e=256|(ord$b[4])<<9|ord$b[3];$d=$d>>8^($f=$t&($d
>>12^$d>>4^$d^$d/8))<<17,$e=$e>>8^($t&($g=($q=$e>>14&7^$e)^$q*
8^$q<<6))<<9,$_=$t[$_]^(($h>>=8)+=$f+(~$g&$t))for@a[128..$#a]}
print+x"C*",@a}';s/x/pack+/g;eval 

usage: qrpff 153 2 8 105 225 < /mnt/dvd/VOB_FILENAME \ | extract_mpeg2 | mpeg2dec -

http://www.cs.cmu.edu/~dst/DeCSS/Gallery/ http://www.eff.org/ http://www.anti-dmca.org/



This archive was generated by hypermail 2.1.3 : Fri Aug 01 2014 - 20:18:29 EDT