31

Recently, I've been compelled to restart my computer a lot. When I boot, Ubuntu now starts scanning my hard drives for errors, but assures me that I can cancel if I want by pressing 'c'.

Why does Ubuntu do this? If it's necessary, why is it something I can cancel? If it's not necessary, why force me to do it? On what basis is the number of restarts decided?

jml
  • 1,005

4 Answers

34

A disk check is forced by the system roughly every 30 restarts (the exact mount count depends on how the partition was formatted). If you skip the disk check it will be done again the next time you restart (unless you manually remove /forcefsck).

You can force a check yourself by putting a file named forcefsck in / by running

sudo touch /forcefsck

from a terminal.

It is not necessary to run the check every time you are prompted, but it should be done every now and then. If it is not a convenient moment, you can cancel it and let the file system check run at a better time.

You can also use tune2fs to alter this behaviour.

sudo tune2fs -c 60 /dev/sdXY

will set this to 60 mounts (roughly, 60 restarts). You can also change this to a time period with -i:

sudo tune2fs -i 30d /dev/sdXY

for 30 days. You can also use 1m for 1 month or 10w for 10 weeks.

(Replace /dev/sdXY with the device name of the partition, e.g. /dev/sda1. You can find this name by running sudo blkid, or ls -lA /dev/disk/by-label if the partition is labelled.)
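
To confirm the current (or newly changed) values, one option is to list the superblock fields with tune2fs and filter for the relevant lines (again substituting your partition for /dev/sdXY):

sudo tune2fs -l /dev/sdXY | grep -iE 'mount count|check'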

sudo dumpe2fs /dev/sda1

will show loads and loads of information. Part of this includes:

Filesystem created:       Thu Feb 12 09:06:50 2009
Last mount time:          Fri Aug 26 07:19:34 2011
Last write time:          Fri Aug 26 07:19:34 2011
Mount count:              2
Maximum mount count:      25
Last checked:             Fri Aug 12 07:22:16 2011
Check interval:           15552000 (6 months)
Next check after:         Wed Feb  8 06:22:16 2012
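
If you only want this summary and not the per-block-group details, dumpe2fs can be limited to the superblock information with the -h flag:

sudo dumpe2fs -h /dev/sda1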
Lincity
  • 25,371
Rinzwind
  • 299,756
  • thanks @Lekensteyn (did it from memory and my memory seems to be bad sometimes ;) ) – Rinzwind Sep 06 '11 at 17:43
  • 3
    To be a nit-picker here: I think it's not true that "check is forced by the system by putting a file forcefsck in /" (usually not, fsck checks if the filesystem is "dirty" or past the "max-count"/"next check") and that "this is done every 30 restarts" (it varies, depending on the program you use to format the partition with). For both check the output of dumpe2fs. http://www.ubuntugeek.com/how-to-get-information-about-your-file-system-in-ubuntu.html – arrange Sep 06 '11 at 18:48
  • 1
    That's not nitpicking arrange :-) That's being accurate. Changed it. – Rinzwind Sep 07 '11 at 08:33
  • This applies only to the ext* family of filesystems, the default in Ubuntu. XFS or JFS don't routinely do filesystem checks – Jan Sep 11 '11 at 19:47
  • 3
    Should the two times you mention forcecheck be forcefsck instead? (ref: http://askubuntu.com/questions/14740/force-fsck-ext4-on-reboot-but-really-forceful) – idbrii Jul 07 '12 at 18:45
5

These are routine file system checks, initiated every 30 reboots. The option to cancel is there so that you are not held up when you need the machine for something critically important; however, it is recommended to let the check run once in a while. I don't know on what basis the number of reboots was chosen, presumably common sense. If it's too annoying, you can increase the number of reboots between checks using the tune2fs command.
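
For example (assuming the root filesystem is on /dev/sda1; substitute your own partition), something like

sudo tune2fs -c 60 /dev/sda1

raises the limit to 60 mounts, as described in more detail in the answer above.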

Lekensteyn
  • 174,277
mikewhatever
  • 32,638
1

It is possible to completely disable the mount-count-based file system check on ext filesystems using:

sudo tune2fs -c 0 /dev/sdXY

This might not be a good idea, though. The tune2fs manpage notes:

You should strongly consider the consequences of disabling mount-count-dependent checking entirely. Bad disk drives, cables, memory, and kernel bugs could all corrupt a filesystem without marking the filesystem dirty or in error. If you are using journaling on your filesystem, your filesystem will never be marked dirty, so it will not normally be checked. A filesystem error detected by the kernel will still force an fsck on the next reboot, but it may already be too late to prevent data loss at that point.
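
If you do decide to disable it, note that the time-based check interval is separate from the mount count; to turn that off as well you would presumably also run something like

sudo tune2fs -i 0 /dev/sdXY

(again replacing /dev/sdXY with your partition). The same warning applies.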

Thomas
  • 1,656
0

While mikewhatever and Rinzwind are right for ext filesystems, as far as I know this doesn't happen if you choose to use reiserfs. I've been using it for 10 years without problems and can recommend it. No fsck any more.

I don't know about the other filesystems popular on Linux.

user unknown
  • 6,507
  • 4
    reiserfsck actually has a tendency to break filesystems beyond repair. If a reiserfs ever develops a problem, that is game over. – Simon Richter Sep 06 '11 at 17:10
  • 1
    Did it happen to you, or do you have references? – user unknown Sep 06 '11 at 18:42
  • The fact that ReiserFS doesn't perform these checks has no bearing on its reliability. Ext3 is rock-solid, but the checks only take a couple seconds, so why not? – Brendan Long Sep 06 '11 at 19:24
  • I know several people who have lost data, and from the design of the filesystem it is quite obvious why: it has no designated metadata blocks. While this is an advantage in a number of cases (if ext runs out of metadata blocks, you cannot create more files even though you still have free data blocks), it creates ambiguity when salvaging a filesystem that somehow (hardware fault, bug on lower layers) has gone inconsistent. – Simon Richter Sep 07 '11 at 01:08
  • I'm still using reiserfs format "3.6", according to dmesg. Might that make a difference? – user unknown Sep 07 '11 at 02:05