Nick May pondered:
> (The real "hit" is that one has an extra trip through the file system
> each time an image is referenced, to see if it is in the cache. Any
> thoughts from anyone on the best Linux file system for a webserver
> serving heavily cached content? I know there is a limit to the number
> of files one can have in a directory - I have hit it (I *THINK*) with
> a heavily cached Amazon shop site. Obviously, lots of subdirectories
> are a must... But that would not be an issue with even a thousand or
> so images. If anyone responds to this, please do so in a new
> thread...)
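(A quick aside on the subdirectory point above: the usual scheme is
to shard cache files into subdirectories keyed by a hash prefix of
the filename, so no single directory grows too large. A minimal bash
sketch, with hypothetical paths and filenames:

    # Hypothetical layout: /var/cache/images/<ab>/<cd>/<file>,
    # where <ab><cd> are the first four hex digits of the MD5 hash.
    key=$(printf '%s' "product-12345.jpg" | md5sum | cut -c1-32)
    dir="/var/cache/images/${key:0:2}/${key:2:2}"
    mkdir -p "$dir"
    cp product-12345.jpg "$dir/"

With two levels of 256 subdirectories each, even millions of cached
files stay spread thin. But on ext2/ext3 there is a cheaper fix.)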
Have you experimented with the dir_index option for ext2/ext3
filesystems? It enables hashed b-trees to speed up lookups in large
directories. (This was also one of ReiserFS's claims to fame, but
you can get the same effect with a mainstream filesystem format.)
See 'man tune2fs' for the -O dir_index option, and 'e2fsck -D' to
optimize (re-index) directories that already exist.
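A sketch of the commands (the device name /dev/sda1 is just a
placeholder; the filesystem should be unmounted before running
e2fsck on it):

    # Enable hashed b-tree indexes for new directories
    tune2fs -O dir_index /dev/sda1
    # Rebuild the indexes of existing directories
    # (-f forces a check even if the filesystem looks clean)
    e2fsck -fD /dev/sda1

Directories created after enabling the feature get indexed
automatically; the e2fsck pass is what retrofits the ones you
already have.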
Mounting the filesystem with the 'noatime' option can also help a
bit on read-heavy workloads, since it stops the kernel from writing
an access-time update for every file it reads.
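For example, in /etc/fstab (device and mount point are placeholders
for wherever your cache lives):

    # noatime: skip access-time updates on reads
    /dev/sda1   /var/www   ext3   defaults,noatime   1 2

You can also try it without editing fstab:

    mount -o remount,noatime /var/www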