72

(This question deals with a similar issue, but it talks about a rotated log file.)

Today I got a system message regarding very low /var space.

As usual, I ran the commands along the lines of sudo apt-get clean, which improved the situation only slightly. Then I deleted the rotated log files, which again provided very little improvement.
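
Roughly what I ran (a sketch; the exact rotated-log names vary from system to system):

# empty the package cache in /var/cache/apt/archives
sudo apt-get clean
# remove compressed, already-rotated logs
sudo rm /var/log/*.gz
# see what is actually eating /var
sudo du -sh /var/* | sort -h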

Upon examination I found that some log files in /var/log have grown very large. To be specific, ls -lSh /var/log gives:

total 28G
-rw-r----- 1 syslog            adm      14G Aug 23 21:56 kern.log
-rw-r----- 1 syslog            adm      14G Aug 23 21:56 syslog
-rw-rw-r-- 1 root              utmp    390K Aug 23 21:47 wtmp
-rw-r--r-- 1 root              root    287K Aug 23 21:42 dpkg.log
-rw-rw-r-- 1 root              utmp    287K Aug 23 20:43 lastlog

As we can see, the first two are the offending ones. I am mildly surprised that such large files have not been rotated.

So, what should I do? Simply delete these files and then reboot? Or go for some more prudent steps?

I am using Ubuntu 14.04.

UPDATE 1

To begin with, the system is only several months old; I had to install it from scratch a couple of months back after a hard disk crash.

Now, as advised in this answer, I first checked the offending log files using tail; no surprises there. Then, for deeper inspection, I executed this script from the same answer:

for log in /var/log/{syslog,kern.log}; do 
  echo "${log} :"
  sed -e 's/\[[^]]\+\]//' -e 's/.*[0-9]\{2\}:[0-9]\{2\}:[0-9]\{2\}//' ${log} \
  | sort | uniq -c | sort -hr | head -10
done

The process took several hours. The output was along the lines of:

/var/log/syslog :
71209229  Rafid-Hamiz-Dell kernel:  sda3: rw=1, want=7638104968240336200, limit=1681522688
53929977  Rafid-Hamiz-Dell kernel:  attempt to access beyond end of device
17280298  Rafid-Hamiz-Dell kernel:  attempt to access beyond end of device
   1639  Rafid-Hamiz-Dell kernel:  EXT4-fs warning (device sda3): ext4_end_bio:317: I/O error -5 writing to inode 6819258 (offset 0 size 4096 starting block 54763121030042024)
       <snipped>

/var/log/kern.log.1 :
71210257  Rafid-Hamiz-Dell kernel:  attempt to access beyond end of device
71209212  Rafid-Hamiz-Dell kernel:  sda3: rw=1, want=7638104968240336200, limit=1681522688
    1639  Rafid-Hamiz-Dell kernel:  EXT4-fs warning (device sda3): ext4_end_bio:317: I/O error -5 writing to inode 6819258 (offset 0 size 4096 starting block 954763121030042024)

(/dev/sda3 is the partition holding my home directory, as we can see:

lsblk /dev/sda
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 931.5G  0 disk 
├─sda1   8:1    0 122.1G  0 part /
├─sda2   8:2    0   7.6G  0 part [SWAP]
└─sda3   8:3    0 801.8G  0 part /home

Why a process would want to write beyond the device limit is beyond my comprehension. Perhaps I will ask a separate question on this forum if this continues even after a system update.)
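
To put those numbers in perspective: the limit in the kernel message appears to be the partition size in 512-byte sectors, which matches the lsblk output above exactly, while the want value points absurdly far past the end of the device. (The arithmetic below is my own check, done with bc since the second product overflows 64-bit shell arithmetic.)

echo '1681522688 * 512' | bc
860939616256                 # bytes; about 801.8 GiB, exactly the size of sda3

echo '7638104968240336200 * 512' | bc
3910709743739052134400       # bytes; roughly 3.3 ZiB, nowhere near a valid offset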

Then, from this answer (you may want to check it for a deeper understanding), I executed:

sudo su -
> kern.log
> syslog

Now these files have zero size. The system ran fine both before and after a reboot.

I will watch these files (along with others) over the next few days and report back should they behave out of line.

As a final note, both of the offending files (kern.log and syslog) are set to be rotated, as an inspection (grep helped) of the files inside /etc/logrotate.d/ shows.
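
A check along these lines confirms it; on a stock Ubuntu install both files are covered by the rsyslog rules (output may differ on other setups):

grep -l 'kern.log' /etc/logrotate.d/*
/etc/logrotate.d/rsyslog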

UPDATE 2

The log files are indeed being rotated. It looks like the large sizes were attained in a single day.

Masroor
  • 3,143
  • 2
    Is there anything in those log files that lends a clue as to why they are so large? Delete and reboot, then monitor them to see if they grow in some exponential fashion. – douggro Aug 23 '14 at 16:13
  • @douggro Indeed there are. Please see my update to the question. – Masroor Aug 24 '14 at 00:58
  • 1
I had this issue and it was because of loads of docker containers running in the background. – Bhaskar Apr 14 '20 at 15:14

4 Answers

85

Simply delete these files and then reboot?

No. Empty them, but do not use rm; a process could crash in the window between deleting a file and your typing the touch command to recreate it.

Shortest method:

cd /var/log
sudo su
> lastlog
> wtmp
> dpkg.log 
> kern.log
> syslog
exit

If you are not root, these will require sudo. Taken from another answer on AU.
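
Note that each bare > line above is shell output redirection, not a prompt: with no command in front of it, the shell simply opens the file and truncates it to zero bytes. More explicit equivalents:

: > kern.log            # POSIX no-op command plus redirection
truncate -s 0 syslog    # the coreutils way

Because the file is truncated in place, a process that already has it open keeps logging to the same inode; that is why this is safer than rm followed by touch.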

BEFORE YOU DO THAT: do a tail {logfile} and check whether there is a reason for them to be so big. Unless this system is several years old there should be no reason for this, and fixing the problem is better than letting it go on.

Both kern.log and syslog should normally not be that big. But like I said: if this system has been up and running for years and years, it might be normal and the files just need to be cleared.

And to prevent them from becoming that big in the future: set up logrotate. It is pretty straightforward and will compress the logfile when it grows bigger than a size you set.
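
A minimal sketch of a size-based stanza (the path and thresholds here are illustrative; on Ubuntu the stock rules for these files already live in /etc/logrotate.d/rsyslog):

# rotate as soon as the file exceeds 100 MB, keep four old
# generations, and gzip all but the most recent rotated file
/var/log/syslog {
        size 100M
        rotate 4
        compress
        delaycompress
        missingok
        notifempty
}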


One other thing: if you do not want to delete the contents, you can compress the files by tarring or gzipping them. That will leave you with files at probably 10% of their current size, provided there is still room on the disk to do it.
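
For example (the archive name and location are just an illustration):

cd /var/log
# keep a compressed copy, then empty the originals to actually free the space
sudo tar czf /root/big-logs-backup.tar.gz kern.log syslog
sudo sh -c '> kern.log; > syslog'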

Rinzwind
  • 299,756
  • 7
    wtmp: Command not found Which package is this? – Janus Troelsen Jul 07 '15 at 23:12
  • /var/log/wtmp is not a command but a log file. Where does my answer state you can execute wtmp? ;-) – Rinzwind Jul 08 '15 at 06:41
  • 14
    I thought > was a prompt and tried "lastlog" and it worked, so I assumed that I understood correctly :P – Janus Troelsen Jul 08 '15 at 17:39
This issue keeps happening to me. I'm using Ubuntu 16.04. Could you tell what seems to cause this? Thanks in advance! – Gayan Jan 14 '17 at 09:56
  • I/O errors will be hardware related. Faulty cable. Faulty hard disk. Or a faulty filesystem. "attempt to access beyond end of device" seems serious. – Rinzwind Jan 14 '17 at 10:31
  • @Gayan hi there ! I was looking at the errors that you provided in original question. Looks like something was writing to same inode, 6819258 . Check if that the same inode in your 16.04. Regardless if it is the same or different, consider checking to what file does this inode belong , see this for a few methods how to do so. Maybe checking what file is being written to might shed a clue on the cause of the issue. Also, don't discount Rinzwind's suggestion - it could potentially be related to hardware – Sergiy Kolodyazhnyy Jan 14 '17 at 12:46
  • @Gayan did you ever do a file system check? do a sudo touch /forcefsck and reboot. It will start a file system check :) – Rinzwind Jan 14 '17 at 12:51
  • I actually ran into problems using touch to recreate /var/log/syslog as you warn about. +1 for belated education :) – WinEunuuchs2Unix Jan 10 '18 at 01:47
  • Unfortunately, this solution does not work on Ubuntu 18.04. – Luís de Sousa Aug 27 '18 at 10:13
  • Then you are doing something wrong. Since these are core Linux tools they work on almost any Linux :) – Rinzwind Aug 27 '18 at 10:18
  • 21
    This answer does not adequately describe what you are supposed to do with lastlog, wtmp, dpkg.log, kern.log and syslog. – Tor Klingberg Dec 21 '18 at 12:31
  • @TorKlingberg that was not the question so the answer indeed does not reflect that – Rinzwind Dec 21 '18 at 15:05
  • 1
@TorKlingberg thanks for your comment, it took me time to understand this.... you can clear the log file by executing > logfilename as explained here – Chagai Friedlander Feb 23 '21 at 22:41
  • 2
I can't remember what I meant with my comment from two years ago, but apparently 15 other people agreed, so I guess it stays. Perhaps I didn't understand that > redirects nothing into the file, and thought it was a prompt. – Tor Klingberg Feb 24 '21 at 17:55
31

It's probably worth trying to establish what is filling the log(s), either by simply examining them visually using the less or tail commands:

tail -n 100 /var/log/syslog

or, if the offending lines are too deeply buried to easily see what's occurring, something like

for log in /var/log/{dmesg,syslog,kern.log}; do 
  echo "${log} :"
  sed -e 's/\[[^]]\+\]//' -e 's/.*[0-9]\{2\}:[0-9]\{2\}:[0-9]\{2\}//' ${log} \
  | sort | uniq -c | sort -hr | head -10
done

(Note: this may take some time, given such large files.) This attempts to strip off the timestamps and then count the most frequently occurring messages.

steeldriver
  • 136,215
  • 21
  • 243
  • 336
15

My method for cleaning system log files is this. Steps 1 and 2 are optional, but sometimes you need to check older logs, so a backup is sometimes useful. ;-)

  1. Optional: Copy the log file

    cp -av --backup=numbered file.log file.log.old
    
  2. Optional: Gzip the copy of the log

    gzip file.log.old
    
  3. Empty the file using /dev/null

    cat /dev/null > file.log
    

For these logs (on several servers only) we also use logrotate, plus a weekly cron script that compresses all *.1 files (and subsequently rotated ones) with gzip.
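
A sketch of such a job (the path and file pattern here are illustrative, not the exact script), e.g. saved as /etc/cron.weekly/compress-rotated-logs:

#!/bin/sh
# gzip first-generation rotated logs (*.1) that logrotate left uncompressed
find /var/log -type f -name '*.1' -exec gzip {} +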

zorbon.cz
  • 1,177
8

I installed Ubuntu 16.04 today and noticed the same problem. However, I fixed this with busybox-syslogd. Yup! I just installed that package and the problem was solved. :)

$ sudo apt-get install busybox-syslogd

After installing that package, reset syslog and kern.log:

sudo tee /var/log/syslog /var/log/kern.log </dev/null
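
If I understand the package correctly, BusyBox's syslogd logs to a fixed-size in-memory circular buffer by default instead of to ever-growing files, which is why the disk stops filling up; the buffer can be read with:

logread       # dump the in-memory log buffer
logread -f    # follow new messages, like tail -f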

I hope this simple solution is useful to other people.

David Foerster
  • 36,264
  • 56
  • 94
  • 147
omluce
  • 89
  • 5
    What, exactly, does this package do, and how does this solution work? – Aaron Franke Nov 21 '16 at 20:19
  • 2
    I am dubious about this post since those files wouldn't have a chance to grow large in a single day. So I will hold off until I hear from others about this program. – SDsolar Sep 02 '17 at 20:21