103

I just got a message from the default disk analysis tool (Baobab) that I only have 1 GB left on the hard drive. After some searching, I found that the /var/log/ folder is the cause of this.

Some file/sizes in /var/log/:

  • kern.log = 12.6 GB
  • ufw.log = 12.5 GB
  • kern.log.1 = 6.1 GB
  • ufw.log.1 = 6.0 GB

Et cetera et cetera. /var/log is huge.

Can I delete those files or the entire /var/log folder? Or is that a BIG NO NO in Ubuntu?

Jjed
  • 13,874
blade19899
  • 26,704

7 Answers

68

You must not remove the entire folder, but you can remove old, compressed ("packed") log files without harming your system.

For a typical home user, it's safe to remove any log file that is compressed and has a .gz extension (as shown in the screenshot below).

These compressed log files are old logs that are gzipped to reduce storage space, and as an average user, you don't need them.

(Screenshot: selecting the log files with the .gz extension in /var/log)
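
If you prefer a terminal over the file manager, a rough command-line equivalent is the following sketch (review the listing before deleting anything):

sudo find /var/log -type f -name "*.gz" -ls       # list the compressed logs and their sizes
sudo find /var/log -type f -name "*.gz" -delete   # then remove them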

belacqua
  • 23,120
Salih Emin
  • 2,893
  • 21
    find /var/log -type f -name "*.gz" -exec rm -f {} ; – diyism Jun 24 '13 at 12:14
  • @diyism I tried your code, but it didn't help much; my log dir still uses 6 GB of space @_@ – GusDeCooL Jan 09 '14 at 04:11
  • 4
    find /var/log -type f -name "*.gz" -delete, I removed the compressed files and I only freed around 1 GB of space. Isn't 50 GB enough for the / dir and the rest of my disk for /home ! – Muhammad Gelbana Feb 02 '14 at 06:56
  • My mother's PC had a kern.log file 21 GB in size. A big kern.log points to a problem in the Linux kernel itself, or in something the kernel is having trouble dealing with. In either case, it's recommended to open a terminal and run cat /var/log/kern.log or nano /var/log/kern.log (or, in the GUI, something like gedit /var/log/kern.log or mousepad /var/log/kern.log) and check what the problem may be. Once you figure out what's wrong, you can run sudo rm /var/log/kern.log ; sudo telinit 6 to delete the (big) file and restart the operating system. – Yuri Sucupira Mar 22 '17 at 04:35
  • 3
    In my case, this will remove only 15.7 MB across 41 files. The real problems here are messages (7.7 GB), user.log (7.7 GB), syslog (4.1 GB) and syslog.1 (3.5 GB). Those four files sum to 23 GB. Any way to remove them, or at least reduce their size? – Rodrigo Sep 11 '17 at 18:39
  • @YuriSucupira I don't think it's a smart idea to cat (cat /var/log/kern.log) the whole log file – Ismail May 28 '18 at 07:44
  • @Ismail I know that running cat on a big file is going to take a lot of time, but if such a file is the only source of information you have about the issue affecting your OS, then it might be the only way. Anyway, if one prefers instead to keep only the log file's last N lines, one can run a command like this: tail -N /var/log/kern.log | sudo tee /var/log/kernel.log. If, e.g., one wants to keep only the last 1000 lines, just run tail -1000 /var/log/kern.log | sudo tee /var/log/kernel.log and a trimmed copy containing only those last 1000 lines will be written to kernel.log. – Yuri Sucupira May 28 '18 at 12:55
  • @Ismail Another good use for tail is to preserve the kern.log file while the user is still figuring out what's wrong with the system. Such a user can grep the last N lines of kern.log with a command such as tail -N /var/log/kern.log | grep -i word, where N is the number of trailing lines to examine and word is a word the user suspects (s)he may find in those lines about the issue affecting the system. Last but not least, tail -N /var/log/kern.log | sudo tee /var/log/analysis.log will create analysis.log, which contains only the last N lines of kern.log. – Yuri Sucupira May 28 '18 at 13:01
  • Apart from these big log files (which can simply be deleted, or alternatively "downsized" with tail as I explained above), a good "everyday use" app for cleaning the system is BleachBit (see https://www.bleachbit.org), which you can install with sudo apt-get install bleachbit -y. – Yuri Sucupira May 28 '18 at 13:06
  • @YuriSucupira cat isn't the only simple tool; there are multiple ways of doing this. It was just a suggestion that you shouldn't recommend people use cat - use less instead. Yes, for a big file it's smarter to use tail than cat, since cat outputs everything and only the last x lines (depending on your terminal settings) will be visible. – Ismail May 29 '18 at 08:30
  • @Ismail I use less (and more) in some contexts, but because the OP / question refers to a massively big log file I would never use less: in such context, using less would cause it to take forever to find anything relevant inside such massive log file. Anyway, you're right about tail, it is smarter than using cat. ^..^ – Yuri Sucupira May 30 '18 at 00:35
  • As @Rodrigo mentioned above, this only removes the smallest files. This answer from richvdh below actually solves it. https://askubuntu.com/a/100014/315699 – Homero Esmeraldo Sep 11 '20 at 14:23
40

I wouldn't delete the entire /var/log folder - that will break things.

You could just delete the logs as @jrg suggests - but unless the things writing to the log files (mostly syslogd) are restarted, that won't actually get you any disk space back, as the files will continue to exist in a deleted state until their filehandles are closed.
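
To see which deleted files are still being held open, and to make the daemon let go of them, something like this should work (a sketch, assuming rsyslog is the logger; older releases use service instead of systemctl):

sudo lsof +L1 | grep /var/log      # open files with a link count of 0, i.e. deleted but not yet released
sudo systemctl restart rsyslog     # restart the daemon so it closes the old filehandles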

Better would be to find out why the logs aren't being rotated (and later deleted). logrotate is supposed to do this for you, and I suspect it's not being run each night as it should.

First thing I would do would be:

sudo /etc/cron.daily/logrotate

This should rotate the log files (so kern.log becomes kern.log.1); and you can then delete kern.log.1 etc to free up the disk space.

If everything is good so far, the next question is why this isn't happening automatically. If you turn your computer off at night, make sure you have anacron installed.
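
A quick way to check for anacron is something like this (a sketch; anacron is in the standard Ubuntu repositories):

dpkg -l anacron                  # see whether the package is installed
sudo apt-get install anacron     # install it if it is missing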

jokerdino
  • 41,320
richvdh
  • 973
39

DISCLAIMER: I am not an expert on this, use at own risk!

After finding that my /var/log/journal folder was taking several GB, I followed:

https://ma.ttias.be/clear-systemd-journal/

journalctl --vacuum-time=10d

which cleared 90%+ of it
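
If you would rather cap the journal by size than by age, journalctl also accepts a size target (a sketch; the 200M figure is arbitrary):

journalctl --disk-usage               # show how much space the journal currently uses
sudo journalctl --vacuum-size=200M    # shrink the journal to roughly 200 MB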

eldad-a
  • 545
19

You should look at the logs and see what is getting written to them. My guess is ufw/iptables (you are logging all network traffic).

ufw - when you log all packets, you will get large logs. If you are not going to review the logs, turn logging off. If you wish to monitor your network, use snort. Snort will filter through the thousands of packets you receive and alert you to potentially problematic traffic.

My guess is that ufw is the culprit, and you are getting a large kern.log because packets are being logged there as well.
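
If that is the case and you are not going to review those logs, you can turn ufw's logging down or off (a sketch; "low" is ufw's default level):

sudo ufw status verbose    # shows the current logging level
sudo ufw logging low       # reduce the amount logged, or:
sudo ufw logging off       # stop ufw logging entirely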

Sometimes there is a kernel or hardware problem that fills the logs. In that event it is best to fix the problem or file a bug, you will need to review the logs to do that.

If you cannot fix the problem, you can configure syslog so as not to fill your logs.

See http://manpages.ubuntu.com/manpages/precise/man5/syslog.conf.5.html
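
For example, on releases that use rsyslog, a drop rule can keep UFW messages out of kern.log and syslog. This is only a sketch (the file name 20-ufw.conf and the target file are illustrative, and older rsyslog versions use & ~ instead of & stop), not something this answer prescribes:

# /etc/rsyslog.d/20-ufw.conf
# send UFW messages to their own file, then stop them from reaching kern.log / syslog
:msg,contains,"[UFW " /var/log/ufw.log
& stop

sudo service rsyslog restart    # apply the change (sudo systemctl restart rsyslog on newer releases)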

If you provide more details on the problem we can help debug it better.

Panther
  • 102,067
  • 2
    That's a very good point. It's worth finding out what's clogging up the logs rather than just deleting them. +1. – richvdh Jan 30 '12 at 23:06
8

Deleting /var/log is probably a bad idea, but deleting the individual logfiles should be OK.

On my laptop, with a smallish SSD disk, I set up /var/log (and /tmp and /var/tmp) as tmpfs mount points, by adding the following lines to /etc/fstab:

temp        /tmp        tmpfs   rw,mode=1777    0   0
vartmp      /var/tmp    tmpfs   rw,mode=1777    0   0
varlog      /var/log    tmpfs   rw,mode=1777    0   0

This means that nothing in those directories survives a reboot. As far as I can tell, this setup works just fine. Of course, I lose the ability to look at old logs to diagnose any problems that might occur, but I consider that a fair tradeoff for the reduced disk usage.

The only problem I've had is that some programs (most notably APT) want to write their logs into subdirectories of /var/log and aren't smart enough to create those directories if they don't exist. Adding the line mkdir /var/log/apt into /etc/rc.local fixed that particular problem for me; depending on just what software you have installed, you may need to create some other directories too.

(Another possibility would be to create a simple tar archive containing just the directories, and to untar it into /var/log at startup to create all the needed directories and set their permissions all at once.)
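
A rough sketch of that tar approach (the archive name /var/log-dirs.tar.gz is just an example): capture the directory skeleton once, while /var/log is still populated, then unpack it from /etc/rc.local on every boot.

cd /var/log && sudo find . -type d -print0 | sudo tar -czpf /var/log-dirs.tar.gz --no-recursion --null -T -

# then, in /etc/rc.local, before the final "exit 0":
tar -xzpf /var/log-dirs.tar.gz -C /var/log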

8

I had an issue with enormous log files (around 100 GB each) full of useless messages from GNOME ('cannot find video buffer' or something similar). Running:

sudo rm -rf /var/log/user.log
sudo rm -rf /var/log/syslog
sudo rm -rf /var/log/messages

did not free any space by itself, but running sudo systemctl restart syslog.service right afterwards released the space used by those files.
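
If you would rather not restart the daemon at all, an alternative (not part of this answer, just a sketch) is to empty the files in place, which frees the disk space immediately without invalidating the daemon's open filehandles:

sudo truncate -s 0 /var/log/syslog /var/log/user.log /var/log/messages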

n.podbielski
  • 231
3

I had a /var/log folder of several GBytes that I shrank to less than 250 MBytes using both /etc/systemd/journald.conf and logrotate:

config for journald, in /etc/systemd/journald.conf under the [Journal] section (cf. man journald.conf):

SystemMaxUse=250M
SystemMaxFileSize=50M

config for /etc/logrotate.conf:

compress

/var/log/journal {
    daily
    dateext
    delaycompress
    copytruncate
    notifempty
    missingok
    rotate 3
    size 500k
    sharedscripts
}

Also look into the files in /etc/logrotate.d/*.
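
Note that journald only enforces the new limits after it re-reads its configuration; a minimal way to apply and verify them (assuming a systemd-based release):

sudo systemctl restart systemd-journald   # pick up SystemMaxUse / SystemMaxFileSize
journalctl --disk-usage                   # check how much space the journal uses now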

rubo77
  • 32,486
  • This isn't a solution. Even if you make this change to the logrotate configuration, log files can grow by multiple GB between logrotate runs - typically once a day. Look into the largest of the active log files (with tail -f filename.log) to determine what the real problem is, and then fix that problem. – Soren A Feb 21 '20 at 14:51
  • @SorenA $ du -sh /var/log now evaluates to 233M, whereas it was almost 5 GB before. Maybe issues remain for you, but it is clearly a solution to the question about a "massive /var/log". – Stephane Rolland Feb 21 '20 at 15:16
  • 1
    It's not professional to remove the symptom instead of the cause. You will end up with log files full of all kinds of "noise", or so short-lived that, the day your server has a real problem, the log files won't contain the historic data you need, or you won't find it among all the garbage. – Soren A Feb 21 '20 at 17:35
  • @SorenA thanks for the remark. I looked more closely at the logs and noticed a recurring error message that appears often enough to be responsible for gigabytes over the months. Not crucial, but I'll have to fix that issue. – Stephane Rolland Feb 21 '20 at 22:25
  • Don't limit it too much, or you will not be able to find noise-making apps, e.g.: sudo sed -i 's/#SystemMaxUse=$/SystemMaxUse=500M/g' /etc/systemd/journald.conf – rubo77 Jun 01 '22 at 07:35