First, let's explain what "fsck" means in geek terms. The system utility fsck (for "file system check" or "file system consistency check") is a tool for checking the consistency of a file system on Unix and its clones. You can loosely compare it to the defrag tool on your Windows machine. Of course, what Linux is actually doing is quite a bit more than rearranging blocks, but it's the most common visual concept I can reference that most users will understand. Some people also use the term as a clean play on a common curse word, which, ironically enough, is often spoken during an fsck check and is also a four-letter word 😉 .
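If you want to see fsck in action without risking a real disk, you can build a throwaway filesystem image in a plain file and check that instead. The paths and sizes below are just illustrative examples, not anything from our servers:

```shell
# build a tiny throwaway ext3 filesystem inside an ordinary file
dd if=/dev/zero of=/tmp/demo.img bs=1M count=16 2>/dev/null
mkfs.ext3 -q -F /tmp/demo.img

# e2fsck is the binary fsck dispatches to for ext2/ext3;
# -n opens the filesystem read-only and answers "no" to every
# repair prompt, so nothing is ever modified
e2fsck -n /tmp/demo.img
```

On a freshly created image, e2fsck just reports the filesystem as clean; on a damaged one it would walk you through the problems it finds.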
Ok, so what's going on during an fsck check, why is it needed, and why does it sometimes take sooooooo long? Back when ext2 filesystems were more commonplace, Linux would often force you into an extended check after a power outage or sudden reboot, and fscks could take quite some time on an ext2 filesystem. Luckily for the Linux community, things improved drastically with the release of the ext3 filesystem. Ext3 is a journaling extension to the standard ext2 filesystem on Linux, which makes recovery times much faster on crashed systems.
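Because ext3 is just ext2 plus a journal, you can upgrade a filesystem in place with tune2fs. Here's a sketch using a scratch image file (the path is a made-up example); on a real server you'd point tune2fs at an unmounted partition instead:

```shell
# start with a plain ext2 filesystem in a scratch image file
dd if=/dev/zero of=/tmp/fs.img bs=1M count=16 2>/dev/null
mkfs.ext2 -q -F /tmp/fs.img

# add a journal in place -- this is the ext2 -> ext3 upgrade
tune2fs -j /tmp/fs.img

# the filesystem features line should now include "has_journal"
tune2fs -l /tmp/fs.img | grep -i features
```

After this, mounting the filesystem as ext3 gets you journaled recovery instead of a full scan after a crash.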
When a computer powers off (or reboots) unexpectedly, data in memory is lost and the operating system "forgets" what it was doing. This is the same reason you need to shut down your home PC properly or the system can become unstable. There are thousands of files in use at any given moment, and when a server suddenly loses power those files can become fragmented and/or corrupt, leaving bad inodes on the drive(s). When that happens, the server will automatically mount the filesystem as read-only to prevent further damage. To bring the filesystem back up, an extended fsck is needed to scan every single inode/file and repair any corruption. Extended fscks can only be performed while the filesystem is unmounted and can easily take hours depending on the level of corruption. If there are a LOT of inodes with problems, it could easily take a while to fix. This is exacerbated on systems with large arrays, such as most common VPS (virtual private server) nodes or, in our case, every new machine we build.

Fsck also has to keep quite a bit of information in memory so that it can perform consistency checks between different parts of the filesystem (e.g. orphaned files). For a huge filesystem, this information can overflow physical RAM on some machines. Luckily, we've packed ours with 8 GB of RAM per node and this has yet to be a problem. The time for an fsck grows quickly with the number of files on the system, which can still mean a real nasty wait for large arrays like our own if there's extensive corruption.
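Both behaviors described above are things you can set and exercise yourself. A rough sketch, again against a disposable image file (names and sizes are illustrative): tune2fs can set the "flip read-only on errors" policy, and e2fsck's -f flag forces the full extended pass even when the filesystem is marked clean:

```shell
# scratch ext3 filesystem for demonstration purposes
dd if=/dev/zero of=/tmp/check.img bs=1M count=16 2>/dev/null
mkfs.ext3 -q -F /tmp/check.img

# tell the kernel to remount this filesystem read-only when it
# hits errors -- the "prevent further damage" behavior
tune2fs -e remount-ro /tmp/check.img

# -f forces the full extended check even if the fs is marked clean;
# -n keeps the run read-only so nothing is modified
e2fsck -f -n /tmp/check.img
```

On a real server the same e2fsck invocation runs against the unmounted partition, and the five consistency passes it prints are exactly the "scan every single inode" work that eats the hours.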
You can minimize data corruption by using dual everything to keep the system live if one piece fails, using ECC RAM (required on Opteron and Xeon systems), and running ext3 or another journaling filesystem. Also, while RAID 1 or RAID 10 can save you when a disk fails, it can also copy corruption to its counterpart if things really go awry. Sometimes it might be faster to restore the data from a backup than to fsck it, but you really won't know that until you're neck deep into the fsck and start to see no light at the end of the tunnel!
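One more precaution worth mentioning: rather than waiting for a crash to force a check, you can have fsck run periodically on a schedule of your choosing, so corruption gets caught while it's still small. A minimal sketch with tune2fs (the image file and the 30-mount/180-day numbers are just example values, not a recommendation from our setup):

```shell
# scratch ext3 filesystem standing in for a real partition
dd if=/dev/zero of=/tmp/sched.img bs=1M count=16 2>/dev/null
mkfs.ext3 -q -F /tmp/sched.img

# force a full fsck every 30 mounts or every 180 days,
# whichever comes first
tune2fs -c 30 -i 180d /tmp/sched.img

# verify the new schedule took effect
tune2fs -l /tmp/sched.img | grep -iE "mount count|check interval"
```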