Re: [SLUG] Partition Opinions

From: Ian C. Blenke (icblenke@nks.net)
Date: Tue Sep 10 2002 - 09:58:49 EDT


There are a few different schools of thought.

/ (aka "root") is for the barebones files necessary to boot your
machine, including folders like:

        /etc - configuration files
        /dev - device files (sometimes a devfs mountpoint)
        /sbin - statically linked binaries (Linux distros confuse this)
        /bin - critical dynamically linked binaries
        /lib - critical dynamic libraries
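One way to realize this split is a partition per top-level directory. As a
rough sketch, the relevant /etc/fstab entries might look something like this
(the device names and the ext3 choice are assumptions for illustration, not a
recommendation):

```
/dev/sda1  /      ext3  defaults  1 1
/dev/sda2  /usr   ext3  defaults  1 2
/dev/sda3  /var   ext3  defaults  1 2
/dev/sda5  /tmp   ext3  defaults  1 2
/dev/sda6  /home  ext3  defaults  1 2
```

Some admins go further and mount /usr read-only, remounting it read-write
only during upgrades.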

/usr is for binaries and other static files that change rarely (usually
during an upgrade process), and should generally remain untouched.

/var is for "persistent data" that tends to change often and should
exist between reboots.

/tmp is for "temporary data" that tends to change often and typically
should disappear between reboots.

/home is for "persistent user data" that changes often, and should exist
between system rebuilds.

For global data and binaries that must exist between system rebuilds,
some admins also keep separate partitions for these as well:

        /usr/local
        /opt
        /export

The old argument for putting everything in its own filesystem
appropriately sized and tuned for the use at hand was largely based on
backup rotations and system rebuild safety. It's easier to backup /var
and /home nightly and push the rest of the partitions off to a weekly or
(gasp) monthly rotation. The reasoning was that persistent data and
persistent user data will change more often than the system binaries
which should only change during an upgrade or other critical system
maintenance.

However, today, with the rapid pace of system exploits and auto-update
tools like up2date, urpmi, red-carpet, apt4rpm and apt-get, it's just
not this simple anymore. If you run the tool automagically, your /usr
and root filesystems change quite often, and your /var holds constantly
updated information about the state of installed packages. These package
databases, be it RPM, dpkg, pkgtool, or another scheme, must be
maintained carefully and backed up often alongside the files they
represent.
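A backup script can hedge against distro differences by probing for the usual
package database locations. A minimal sketch (the paths are the common
defaults for RPM, dpkg, and Slackware's pkgtool; verify them on your distro):

```shell
#!/bin/sh
# Package manager state that should ride along with nightly backups.
PKG_DB_DIRS="
/var/lib/rpm
/var/lib/dpkg
/var/log/packages
"

for dir in $PKG_DB_DIRS; do
    # Tag each candidate so the backup job can pick up whichever exists.
    if [ -d "$dir" ]; then
        echo "backup: $dir"
    else
        echo "skip:   $dir (not present)"
    fi
done
```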

Personally, I've changed from a purist Unix mindset to a more flexible
arrangement: I like a root (/) filesystem large enough to hold /usr and
/var alongside the rest of the distribution's files with a bit of room
for growth (yeah, I'm a 4G root type of person these days). This gives
me a bit of elbow room and the opportunity to do a system re-install if
need be. I like to separate my persistent data into a /local filesystem
(with symlinks from /var/log, /home, and wherever else appropriate) to
keep the critical files between system re-installs. Any local hand-built
packages go in /local/pkgs/{packagename} with persistent data in
/local/data/{packagename}. The NFS /remote filesystem is a shared
repository between all of the machines in the cluster. I use pkglink to
merge the packages into a symlink soup /u filesystem (/u/bin, etc).
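The symlink-soup merge can be sketched in a few lines of shell. This is not
pkglink itself, just a toy illustration of the idea, run entirely inside a
throwaway sandbox directory so it is safe to try anywhere (the "hello"
package is fabricated for the example):

```shell
#!/bin/sh
# Sketch of a pkglink-style merge: each package under $PKGS/<name>/bin
# gets its binaries symlinked into a shared $U/bin.
SANDBOX=$(mktemp -d)
PKGS="$SANDBOX/local/pkgs"
U="$SANDBOX/u"

# Fake one hand-built package: local/pkgs/hello/bin/hello
mkdir -p "$PKGS/hello/bin" "$U/bin"
printf '#!/bin/sh\necho hello\n' > "$PKGS/hello/bin/hello"
chmod +x "$PKGS/hello/bin/hello"

# The merge: one symlink per binary; the package dir stays authoritative,
# so removing a package is just deleting its dir and re-running the merge.
for pkg in "$PKGS"/*; do
    for bin in "$pkg"/bin/*; do
        ln -sf "$bin" "$U/bin/$(basename "$bin")"
    done
done

ls -l "$U/bin"
```

Putting only /u/bin in $PATH keeps the per-package directories tidy while
users see a single flat bin directory.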

What about backups? Rsync to a central backup server. Who uses tapes in
this day and age? IDE drives are ~$1/G now, and gig ether is down to
$60/port.
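A nightly rsync push of the persistent data to that central server could be
as simple as a couple of crontab entries. A sketch (the backuphost name and
destination paths are placeholders):

```
# m  h  dom mon dow  command
30   2  *   *   *    rsync -aH --delete /local/ backuphost:/backups/myhost/local/
45   2  *   *   *    rsync -aH --delete /var/ backuphost:/backups/myhost/var/
```

The -a flag preserves permissions, ownership, and timestamps; -H preserves
hard links; --delete keeps the mirror exact, so pair it with snapshots on
the backup server if you need history.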

Now, back to /tmp and where it should live, there are three schools of
thought:

The first is that /tmp should be large enough to contain a moderately
sized temporary file but should not affect anything else on the system.
Admins with this mentality usually create a separate /tmp partition and
are happy to have a system that mostly keeps running when the /tmp
filesystem fills. Honestly, I don't subscribe to this idea - when /tmp
fills, you're SOL anyway.

It was also once quite common for /tmp to actually be located in
/var/tmp. Admins with a bit of thought realized that a separate /tmp is
a waste of space, and that /tmp filling was just as bad as /var filling
- why not combine the two? The downside to this, of course, is that a
runaway mail spool or other logging process can fill space needed by
files in /tmp and make your system potentially unusable. If you're
looking to size a unified /var with /tmp, I'd start at 1G.

Today, "tmpfs" filesystems commonly replace a separate /tmp filesystem.
Rather than waste hard drive space on a rarely used /tmp filesystem, why
not use some of the available system RAM and swap to pin it up in
memory? Not only do you regain a bit of space and remove a pesky
partition, you get the speed benefits of a RAM based filesystem - a
perfect fit for "temporary data", particularly /tmp files that
historically take large amounts of system IO.
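On Linux this is a one-line /etc/fstab entry (the 512m cap is an assumption;
size it to your RAM, and note mode=1777 gives /tmp its usual sticky,
world-writable permissions):

```
tmpfs  /tmp  tmpfs  size=512m,mode=1777  0 0
```

The size= cap is also your defense against the failure mode described next:
without it, tmpfs will happily grow to consume half of RAM by default.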

The "tmpfs" solution is not without its drawbacks, however. I've dealt
with many multi-user systems where a user would download HUGE files to
/tmp to avoid quota issues only to kill a machine. With "tmpfs", a full
/tmp filesystem means NO SYSTEM RAM IS AVAILABLE - causing far more dire
consequences than a mere full /tmp filesystem normally would. Systems
with a full /tmp on tmpfs tend to swap themselves to death, get
processes randomly OOM (Out Of Memory) slain, or generally do Bad
Things.

I've converted more than my share of Solaris boxen from a tmpfs /tmp to
/var/tmp for this very reason. Is there anything wrong with it? Not
really, just keep an eye on /var to make sure it doesn't fill up.
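Keeping that eye on /var is easy to automate. A minimal sketch of a check
you might cron hourly (the 90% threshold is an assumption; tune it to how
fast your spools grow):

```shell
#!/bin/sh
# Warn when a filesystem crosses a usage threshold - the kind of watchdog
# you want once /tmp lives inside /var.
THRESHOLD=90

check_usage() {
    # $1 = percent used (integer), $2 = mountpoint
    if [ "$1" -ge "$THRESHOLD" ]; then
        echo "WARNING: $2 is at ${1}% - clean up before it fills"
    fi
}

# df -P gives POSIX single-line output; strip the header and the '%' sign.
df -P /var | awk 'NR > 1 { sub("%", "", $5); print $5, $6 }' |
while read pct mount; do
    check_usage "$pct" "$mount"
done
```

Pipe the output to mail(1) from cron and it only bothers you when there is
something to say.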

- Ian C. Blenke <icblenke@nks.net> <ian@blenke.com>
http://ian.blenke.com



This archive was generated by hypermail 2.1.3 : Fri Aug 01 2014 - 19:10:40 EDT