On Tue, 13 Dec 2005, Nick May wrote:
> (The real "hit" is that one has an extra trip through the file system
> each time an image is referenced, to see if it is in the cache.) Any
> thoughts from anyone on the best Linux file system for a webserver
> serving heavily cached content?
If you're doing cut-and-cache, as you mentioned earlier, just get your
cutter to cache the images in memory and you'll never go back to the FS
at all.
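Something along these lines, in C (everything here is invented for
illustration; a real cutter would key on the request path and use a
hash rather than a linear scan):

    #include <stddef.h>
    #include <string.h>

    struct cached_image {
        const char *name;           /* request key, e.g. the image name */
        const unsigned char *data;  /* the cut image bytes */
        size_t len;
    };

    /* Filled in by the cutter as it cuts each image. */
    static struct cached_image cache[4096];
    static size_t ncached;

    /* Record a freshly cut image so later requests never touch the FS. */
    static void store(const char *name, const unsigned char *data,
                      size_t len)
    {
        if (ncached < sizeof cache / sizeof cache[0])
            cache[ncached++] = (struct cached_image){ name, data, len };
    }

    /* Linear search for brevity only. */
    static const struct cached_image *lookup(const char *name)
    {
        for (size_t i = 0; i < ncached; i++)
            if (strcmp(cache[i].name, name) == 0)
                return &cache[i];
        return NULL;    /* miss: cut the image, store() it, then serve */
    }

A hit serves straight out of the process's own memory: no stat(2), no
open(2), no filesystem involvement at all.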
> I know there is a limit to the number of files one can have in a
> directory - I have hit it (I *THINK*) with a heavily cached Amazon
> shop site. Obviously, lots of subdirectories are a must...
Actually, I'm not sure about Linux, but under NetBSD the check to see
whether a file has been updated first goes through the namei cache for
the name-to-vnode conversion, which means no directory search is needed
if the name is found there (the system-wide cache typically holds
several tens of thousands of entries, and there's a per-process one,
too). If that hits, you might get lucky again and find the inode
information in the vnode cache, which is likewise not
filesystem-specific. Even if the file has changed, its pages may well
already be mapped into the buffer cache because some other process read
it (especially if you're running a multi-child web server such as
Apache in its common configurations), so you still never hit
filesystem-specific code.
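Concretely, that whole chain is kicked off by nothing more than the
per-request stat(2) a caching server does to decide whether its copy is
stale. A minimal sketch (the path and the remembered-mtime scheme are
mine, purely for illustration):

    #include <sys/stat.h>
    #include <stdio.h>
    #include <time.h>

    /* Nonzero if the copy we cached at cached_mtime is still current.
     * The path lookup inside stat() is what the namei cache
     * short-circuits; the inode data may come from the vnode cache. */
    static int cache_is_fresh(const char *path, time_t cached_mtime)
    {
        struct stat st;

        if (stat(path, &st) == -1)
            return 0;               /* gone or unreadable: refetch */
        return st.st_mtime <= cached_mtime;
    }

    int main(void)
    {
        time_t cached_at = 0;       /* mtime noted when we cached it */

        if (cache_is_fresh("/var/cache/images/foo.gif", cached_at))
            printf("serve the cached copy\n");
        else
            printf("regenerate and recache\n");
        return 0;
    }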
Well, the one bit of filesystem-specific code you will almost
invariably hit is the inode update for the last-access time; turning
that off (the "noatime" mount option) will save you a lot of writes
when many different files are being accessed.
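On Linux that's just a mount option; something like this (device, mount
point, and fs type below are examples only):

    # /etc/fstab
    /dev/sda3   /var/www   ext3   defaults,noatime   0   2

    # or flip it on for an already-mounted filesystem:
    mount -o remount,noatime /var/www

NetBSD's mount(8) takes the same option, e.g.
"mount -u -o noatime /var/www".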
cjs
--
Curt Sampson <cjs@cynic.net> +81 90 7737 2974
*** Contribute to the Keitai Developers' Wiki! ***
*** http://www.keitai-dev.net/ ***