
I have a former colleague who set up a cron job that runs logrotate every night. The problem is that the timestamps in the files are stored in UTC while the script runs in local time (CET), and the script itself takes some time to run, so each rotated file contains 1-2 hours (less the script's run time, up to 5 min) of log data from the wrong day. The cron job looks like this:

5 0 * * *       root    logrotate -f /path/to/log/settings && second_move_script

The settings file looks like this:

    daily
    rotate 12
    missingok
    notifempty
    delaycompress
    compress
    sharedscripts
    postrotate
            service service_name restart >/dev/null 2>&1 || true
    endscript

The second_move_script moves the log files to long-term storage.

So what happens today is: data is still being spooled to the log file at 00.00.00; then the script starts to run. It first starts spooling data to a new file, yesterday's file is renamed to .1, the file from two days ago is temporarily renamed to .2, and the .2 file is then gzipped and moved to long-term storage.
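For clarity, the nightly sequence is roughly equivalent to the following sketch (the file names and the storage directory are illustrative placeholders, not the actual paths):

```shell
#!/bin/sh
set -e
# Rough shell equivalent of the nightly rotation described above.
# "service.log" and "storage/" are placeholders for illustration only.
mkdir -p storage
printf 'two days ago\n' > service.log.1
printf 'yesterday\n'    > service.log

mv service.log.1 service.log.2   # two-day-old data becomes .2
mv service.log   service.log.1   # yesterday's data (plus 0-2 h of today's)
: > service.log                  # logging continues into a fresh file
gzip service.log.2               # compress the oldest file...
mv service.log.2.gz storage/     # ...and hand it to long-term storage
```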

Is there a built-in function that could move the lines with the "wrong" timestamps from the .2 file to the .1 file, or is this something I should add to the second_move_script?

And for all the log files that already have 1-2 hours of yesterday's data in them: what is the best way to move those lines? I have manually fixed a few days with help from this post; is writing a shell script with `grep -vE "the_correct_date" file1 > file2` the best way to go?
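For reference, a minimal sketch of that grep-based split, assuming each log line starts with an ISO date like `2020-03-22` (the dates, file names, and sample lines below are placeholders, not the real log format):

```shell
#!/bin/sh
set -e
# Sketch: split misfiled lines out of the rotated .2 file into the .1 file.
# Assumes each log line begins with its UTC date; all names are placeholders.
correct_date="2020-03-22"   # the day service.log.2 is supposed to cover

# Sample data: the .2 file holds two early-morning lines from the next day.
printf '2020-03-22 23:59:58 ok\n2020-03-23 00:00:01 stray\n2020-03-23 00:00:02 stray\n' > service.log.2
printf '2020-03-23 00:06:00 ok\n' > service.log.1

# Lines from the wrong day are appended to the newer file...
grep -vE "^${correct_date}" service.log.2 >> service.log.1
# ...and only the correct day's lines are kept in the old file.
grep -E "^${correct_date}" service.log.2 > service.log.2.clean
mv service.log.2.clean service.log.2
```

Note that `grep` alone only filters; the second pass (keeping the matching lines) is needed so the stray lines are actually removed from the old file rather than duplicated.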

  • 1
    I think that you should handle log rotation inside the service, to be sure that every file contains exactly the timestamps of the day; otherwise you have to post-process them in your second_move_script – Lety Mar 23 '20 at 13:28

0 Answers