
Backup on a Strato root-server

Automatic backups on a Strato root-server with OpenBSD.


Automatic backups to Strato's backup-FTP

All data that really matters to you should be backed up frequently. If the server is hosted by the German hoster Strato, the backup FTP server that comes with it is the best way to go.

Security

The username, password and hostname can be found on the website where the server is configured, as soon as the backup server has been set up. This server cannot be reached from outside the Strato data center; all access has to happen from the root-server itself.

With this restriction the username and password used for login should be secure enough, but since automatic backups are the preferred way, the credentials have to be stored somewhere on the server. So if the server is compromised, the username and password for the backup-FTP are compromised too. In that case the backups should be checked as well.

The Tools

  • OpenBSD is the operating system running on the server, so the handy ftp client from the base install can be used to transfer data to the backup-FTP. With that client the transfer can be scripted.
  • As backup programs, dump and restore come to mind; these are also part of the base install. The procedure described here backs up single files. Whole slices could be backed up as well, but in that case an incremental backup would be the preferred way, and incremental backups are not possible when single files are fed to dump. (A short restore example follows below.)

These tools are tied together with a little script, written for the Korn Shell that is included in the base install.
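
For completeness, getting files back out of one of the resulting archives could look roughly like this, after the file has been fetched from the backup-FTP again (the filename is only an example). restore reads the dump from standard input here:

# unpack a downloaded backup and restore files from it interactively
gzip -dc backup-root-2011-04-05.gz | restore -i -f -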

Prerequisites

First, access to the backup-FTP server should be automated. For this, the following entry has to be added to the file .netrc in the home directory of the user doing the backup:

machine backup.serverkompetenz.de login USER password SECRET

USER and SECRET are replaced with the login information provided by Strato. Logging in to that server should happen automatically from now on. In addition, a directory must be created on the backup-FTP; for simplicity I have named it backup. The script has to be edited if another directory name is chosen.
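
As a precaution the permissions on .netrc should be tightened (ftp typically refuses to take a password from a file that other users can read); the backup directory can then be created in a short scripted session, using the hostname from above:

# keep the credentials private
chmod 600 ~/.netrc

# create the remote directory the backup script expects
ftp <<EOF
open backup.serverkompetenz.de
mkdir backup
close
EOF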

In the next step the filenames that are to be backed up are written to a file. For each slice on the server a separate file must be used, because dump cannot work across several filesystems at once. These files are stored in /root/data. The names of the files are derived from the mount points of the filesystems: backup-root for /, backup-home for /home, and so on. As an example, here are the first lines of my backup-root:

/root/.*
/root/*
/etc/boot.conf
/etc/fstab

With these entries the home directory of root and the files boot.conf and fstab inside /etc are marked for backup. The following script builds an argument list for dump from these entries; dump then writes a backup of these files into a temporary file, which is removed after the transfer has finished.
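
For the backup-root example above, the resulting dump call looks roughly like this (the temporary filename is only illustrative); note that the glob patterns from the list are expanded by the shell at this point:

# what the script below effectively runs for /root/data/backup-root
files=$(cat /root/data/backup-root)
/sbin/dump -a -f /tmp/tmp.XXXXXXXXXX $files   # the globs in $files expand here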

The Script

#!/bin/sh

# date of today's backup and of the backup that is due for deletion
# (30 days = 2592000 seconds ago)
tstamp=$(date +%F)
expire_date=$(date -r $((`date +%s` - 2592000)) +%F)

if [ ! -d /root/data/ ]; then
        echo "ftpdump: /root/data/ does not exist."
        exit 1;
fi

for filelist in /root/data/*; do

        # the list's contents become the dump arguments, its name
        # (without the path) becomes the name of the remote file
        files=`cat ${filelist}`
        ftpfile=`echo $filelist | sed -e 's:/root/data/\(.*\):\1:'`

        tmp_file=`mktemp`

        # dump the listed files into a temporary archive (-a: no tape size limit)
        /sbin/dump -a -f $tmp_file ${files}
        if [ $? -eq 0 ]; then
                # upload a gzipped copy and try to delete the 30-day-old one
                /usr/bin/ftp <<-EOF
                        open backup.serverkompetenz.de
                        epsv
                        binary
                        verbose
                        put "|gzip -9 -c ${tmp_file}" backup/${ftpfile}-${tstamp}.gz
                        prompt
                        del backup/$ftpfile-${expire_date}.gz
                        close
                EOF
        fi

        rm ${tmp_file}
done

From the current date a filename for the backup is generated. To save as much space as possible on the backup server, the files are piped through gzip.
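
As an illustration (the date is made up), a run on 2011-04-05 would produce the following values, so the upload for backup-root ends up as backup/backup-root-2011-04-05.gz:

tstamp=2011-04-05        # date +%F
expire_date=2011-03-06   # 2592000 seconds (30 days) earlier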

After the file has been transferred to the server, the script checks whether a backup file that is 30 days old is still on the server. Actually there is no real check: the script simply attempts to remove such a file. If it does not exist, the server prints an error message, which has no effect on the rest of the script.

Unfortunately I did not manage to pipe the output of dump directly to the ftp client. Therefore enough space has to be available in /tmp, or another location for the temporary files has to be used. If someone works out a better solution, I would like to use it here.
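
Until then, a small adjustment that may help when /tmp is short on space is to give mktemp an explicit template on a roomier partition (the path /var/tmp is only an example):

# put the temporary dump file on a partition with more space than /tmp
tmp_file=`mktemp /var/tmp/ftpdump.XXXXXXXXXX`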

At Regular Intervals?

If the script is to be started on a daily basis, only an entry in root's crontab is missing. Note that HOME=/root was added to the entry to override the default of /var/log; without it the .netrc in root's home directory cannot be used. The necessary crontab entry looks something like this:

45	4	*	*	*	HOME=/root /bin/sh /root/bin/ftpdump

Every morning at 4:45 a.m. a backup is started and transferred to the FTP server.
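
The entry is added by editing root's crontab; before relying on cron, the script can also be run once by hand to check that the transfer works (the script path matches the crontab entry above):

# add the line above to root's crontab, then verify it
crontab -e
crontab -l

# one manual test run
/bin/sh /root/bin/ftpdump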