Script to periodically delete files in a specific directory

DeadnightWarrior

Hi all,
First of all I'm not a Linux expert so I apologize for not being too accurate or informed about CLI and such.

My company sells services that are commonly managed by physical or virtual machines running a customized CentOS 6 distro.
All of our software is Java based and installed under the /opt directory.
The sudo command is usually not active, so we typically use "su" if we need root privileges, making sure we "exit" when the task is done.

Here's my issue:
Sometimes our programs write a ton of logs to a specific directory, eventually filling the disk after a few months.
We can't just wipe the entire directory clean, as we need to keep at least the latest log of each type (there can be something like db_log, system_log, user_log, etc., each with its own timestamp).

How could I write a script that deletes everything BUT the latest logs (or even everything except all files from less than 7 days ago, for example)?
It would be easy to write something like "rm -rf *" and schedule it to run once a day but it's not what I'm looking for.
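For the "everything older than 7 days" variant, a minimal sketch using GNU find could look like this (the function name and the example path under /opt are placeholders, not anything from your actual setup; it moves files aside rather than deleting them, so you can check before removing anything):

```shell
#!/bin/bash
# Sketch: move log files older than 7 days into an old-logs/ subdirectory.
# archive_old_logs is a hypothetical helper name; adjust the path to yours.
archive_old_logs() {
    local logdir=$1
    local archive="$logdir/old-logs"
    mkdir -p "$archive"
    # -mtime +7 matches files last modified more than 7 days ago;
    # -maxdepth 1 keeps find from descending into old-logs itself.
    find "$logdir" -maxdepth 1 -type f -mtime +7 -exec mv -t "$archive" {} +
}

# Example: archive_old_logs /opt/yoursoftware/logs
```

Once you're happy with what lands in old-logs/, you could swap the `mv` for `-delete` and schedule it from cron.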

Any help would be greatly appreciated.

Thank you!
 


I'm not a log expert, but most Linux distros rotate out logs on a scheduled basis, like daily or weekly.

Is there a specific log that seems to be taking up most space? If you know that, you can determine which daemon to reconfigure logging for.

On my systems, I find that /var/log/journal (part of systemd) tends to take up the most space, by far. But I'm just dealing with desktops, not servers.

If you need to maintain logs, another option you might consider is increasing the partition size.
 
The logs I'm talking about are specific to our software; they pile up under whatever directory the program is installed in.
We will definitely ask the developers to reduce the amount of logging in future versions, but for now I'm trying to find a workaround.
 
Here you go:
Bash:
#!/bin/bash

# Get last Monday's date as a Unix timestamp
iiLastMonday=$(date -d 'last Monday' '+%s')

# Make a dir to store the discards (for safety over deleting them).
mkdir -p old-logs

# Now we want to go through everything
for iiCurFile in *; do
        if [ ! -f "$iiCurFile" ]; then continue; fi

        # Get the last modification time of the current file (and print info for your reference)
        iiLastMod=$(stat --format=%Y "$iiCurFile")
        printf '\nFile: %s\nDate: %s\nStatus:' "$iiCurFile" "$iiLastMod"

        # You can change the move to a delete, or expand on this, like tar+xz'ing log dirs
        if [ "$iiLastMod" -lt "$iiLastMonday" ]; then
                mv "$iiCurFile" "old-logs/$iiCurFile"
                printf ' too old\n'
        else
                printf ' young enough\n'
        fi

        # So you can see what's happening
        sleep 3
done
Run it as a cron job or whatever you wish.
Notes: I wrote it "safely" (no deleting). Modify it as you see fit.
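Since you also asked about keeping only the latest log of each type, here's a rough sketch of that variant (the prefixes are just the examples from your post, and `keep_latest` is a made-up helper name; it assumes log filenames have no embedded newlines, which is typical for timestamped logs):

```shell
#!/bin/bash
# Sketch: keep only the newest file for each log-type prefix and move
# the rest into old-logs/.
keep_latest() {
    local prefix=$1
    mkdir -p old-logs
    # ls -t lists newest first; tail -n +2 selects everything but the newest.
    ls -t "${prefix}"* 2>/dev/null | tail -n +2 | while read -r f; do
        mv "$f" old-logs/
    done
}

# Example, run from the log directory:
# keep_latest db_log
# keep_latest system_log
# keep_latest user_log
```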
 
Does logrotate work in /var/log only?

'Cause our logs are actually located in /opt/[oursoftwarename]/[...]/logs; it has nothing to do with /var.
You can configure logrotate to work anywhere, so even in /opt, but you will need to configure SELinux with the correct contexts when running a RHEL clone.
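For reference, a minimal logrotate config for a directory under /opt might look something like this (the path and the *.log pattern are assumptions about your layout; a file like this would go in /etc/logrotate.d/):

```
/opt/yoursoftware/logs/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```

That keeps four weekly rotations, compresses the old ones, and skips missing or empty files instead of erroring.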
 
