Backup and Restore Hooks

all_backups_(pre|post).sh

These scripts, if they exist, are run before (pre) and after (post) a full set of Admin or Reseller backups.

They are intended for wrapping large batches of backups, hence they are not called for User Level backups.

Environment variables

The parameters passed are the same values that are passed to DA via the task.queue for backup creation:

  • type (admin|reseller): type of backup
  • owner: owner of backup
  • when (cron|now): is backup immediate or cron schedule
  • if cron is selected, the cron values are passed:
    • minute
    • hour
    • dayofmonth
    • month
    • dayofweek
  • append_to_path: pattern appended to the backup path (see the path customization documentation)
  • where (ftp|local): is backup local or done using ftp
  • if ftp is used:
    • ftp_username
    • ftp_password
    • ftp_path
    • ftp_ip
    • ftp_port
    • ftp_secure
  • local_path: path used in local backups
  • who (all|except|selected): which users to backup.
  • select[X]: selected user. Created for every selected user when who is set to except or selected
  • what (all|what_select): what user data to select for backup
  • option[X]: selected data. present when what is set to what_select

For all_backups_post.sh, 2 additional variables are available:

  • success (1|0): 1 means everything went well; 0 means there was an error somewhere
  • current_result: will contain a string that DA will output in the notification... could be blank (although not likely)
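
As a sketch, an all_backups_post.sh could log the outcome of every run using the success and current_result variables. The log path and /tmp location below are assumptions for demonstration only; a real hook belongs at /usr/local/directadmin/scripts/custom/all_backups_post.sh.

```shell
# Hypothetical all_backups_post.sh that logs each run's outcome.
# Written to /tmp here so it can be exercised by hand.
cat > /tmp/all_backups_post.sh <<'EOF'
#!/bin/sh
LOG=/tmp/da_backup_runs.log
if [ "${success:-0}" = "1" ]; then
        echo "$(date -u) ${type:-?} backup by ${owner:-?}: OK" >> "$LOG"
else
        echo "$(date -u) ${type:-?} backup by ${owner:-?}: FAILED: ${current_result:-no details}" >> "$LOG"
fi
exit 0
EOF
chmod 755 /tmp/all_backups_post.sh

# Simulate how DA would invoke it after a successful admin backup run:
success=1 type=admin owner=admin /tmp/all_backups_post.sh
```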

all_restores_post.sh

This hook script is called after all Admin or Reseller restores.

Environment variables

  • ip_choice (select|file): restore the user with the IP from the backup file or from a selection
  • ip (valid IP or "free_random"): when ip_choice="select" is set, DA restores the User with the selected IP; if set to "free_random", DA restores the User with a random free IP
  • type (admin|reseller): type of backup
  • where (ftp|local): is backup local or done using ftp
  • if ftp is used:
    • ftp_username
    • ftp_password
    • ftp_path
    • ftp_ip
    • ftp_port
    • ftp_secure
  • local_path: path used in local backups
  • select[X]: selected user backup file, relative to the path. Created for every selected backup file
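
As a sketch, an all_restores_post.sh could record which backup files were restored. This assumes DA exports the select[X] values as environment variables named select0, select1, and so on; the /tmp paths are for demonstration only, with the real hook living at /usr/local/directadmin/scripts/custom/all_restores_post.sh.

```shell
# Hypothetical all_restores_post.sh enumerating the selectN variables.
cat > /tmp/all_restores_post.sh <<'EOF'
#!/bin/sh
LOG=/tmp/da_restore_runs.log
# Pull every selectN value out of the environment.
env | sed -n 's/^select[0-9]\{1,\}=//p' | while read -r f; do
        echo "restored from ${local_path:-ftp}: $f" >> "$LOG"
done
exit 0
EOF
chmod 755 /tmp/all_restores_post.sh

# Simulate a local restore of two selected backup files:
select0=admin.tar.gz select1=fred.tar.gz \
        local_path=/home/admin/admin_backups /tmp/all_restores_post.sh
```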

backup_save_pre.sh

This script is called just before a backup job is saved, either to a cron or to the task.queue. It applies to both creation and modification of backups and backup crons.

Environment variables

Data passed to the script will match exactly what is given to the task.queue upon backup creation, unless when=cron is set, in which case the job is only saved (no task.queue entry is made).

  • type (admin|reseller): type of backup
  • owner: owner of backup
  • when (cron|now): is backup immediate or cron schedule
  • if cron is selected, the cron values are passed:
    • minute
    • hour
    • dayofmonth
    • month
    • dayofweek
  • append_to_path: pattern appended to the backup path (see the path customization documentation)
  • where (ftp|local): is backup local or done using ftp
  • if ftp is used:
    • ftp_username
    • ftp_password
    • ftp_path
    • ftp_ip
    • ftp_port
    • ftp_secure
  • local_path: path used in local backups
  • who (all|except|selected): which users to backup.
  • select[X]: selected user. Created for every selected user when who is set to except or selected
  • what (all|what_select): what user data to select for backup
  • option[X]: selected data. present when what is set to what_select
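
As a sketch, a backup_save_pre.sh could enforce a site policy before a job is saved. The FTPS requirement, the ftp_secure values, and the /tmp paths below are assumptions for illustration, and it is assumed that, as with the other pre hooks, a non-zero exit aborts the save.

```shell
# Hypothetical backup_save_pre.sh policy check, written to /tmp so it
# can be exercised by hand; the real path is
# /usr/local/directadmin/scripts/custom/backup_save_pre.sh.
cat > /tmp/backup_save_pre.sh <<'EOF'
#!/bin/sh
# Refuse FTP backup jobs that are not configured for a secure transfer
# (the ftp_secure values are an assumption for this sketch).
if [ "${where:-local}" = "ftp" ] && [ "${ftp_secure:-no}" = "no" ]; then
        echo "FTP backup jobs must use a secure connection on this server."
        exit 1
fi
exit 0
EOF
chmod 755 /tmp/backup_save_pre.sh

# Simulate saving an insecure FTP job versus a local job:
where=ftp ftp_secure=no /tmp/backup_save_pre.sh || echo "job rejected"
where=local /tmp/backup_save_pre.sh && echo "job accepted"
```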

cmd_(site|user)_backup_pre.sh

Calls to CMD_SITE_BACKUP / CMD_API_SITE_BACKUP have their own hook, /usr/local/directadmin/scripts/custom/cmd_site_backup_pre.sh.

And for CMD_USER_BACKUP / CMD_API_USER_BACKUP, the hook is /usr/local/directadmin/scripts/custom/cmd_user_backup_pre.sh.

A non-zero exit code will cause any script output to be echoed to the GUI and abort the action.

Note, this is also called for the default GET to show the backup page, so ensure you're checking the method and action accordingly.

Environment variables

All variables that were passed via GET or POST will be included in the request.

DA will also set:

  • username: name of the logged-in user who called the CMD
  • method: GET|POST
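
Because the hook also fires for page views, a minimal sketch might audit only real POST submissions. The audit-log path and /tmp location are assumptions for demonstration; the real hook belongs in /usr/local/directadmin/scripts/custom/.

```shell
# Hypothetical cmd_user_backup_pre.sh that ignores page-rendering GETs
# and audits POST submissions only.
cat > /tmp/cmd_user_backup_pre.sh <<'EOF'
#!/bin/sh
# This hook also runs for the plain GET that renders the backup page,
# so act only on POST.
if [ "${method:-GET}" = "POST" ]; then
        echo "${username:-?} submitted a backup request" >> /tmp/da_cmd_audit.log
fi
exit 0
EOF
chmod 755 /tmp/cmd_user_backup_pre.sh

: > /tmp/da_cmd_audit.log   # start with a clean demo log
method=GET  username=fred /tmp/cmd_user_backup_pre.sh   # page view: ignored
method=POST username=fred /tmp/cmd_user_backup_pre.sh   # submission: audited
```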

user_backup_compress_pre.sh

This script will be called, if it exists, just before the creation of the tar.gz file. It's called just after the assembly of the "backup" folder.

Environment variables

  • username: DA user
  • reseller: reseller who owns the user
  • file: full path to backup file

user_backup_failed.sh

This hook will be called if there are any issues during the creation of the User Backup. This also includes errors that may arise for the Reseller/Admin Level portions of the User backup.

FTP uploading is not part of this check, as it is done outside of the backup function.

Environment variables

  • username: name of the user being backed up (which failed)
  • reseller: name of the account that created the User
  • owner: owner of the directory/files, mainly for write-access purposes (owner of the backup path). If using FTP, this value will be "diradmin"
  • dest_path: where the backup is going. If the value is blank (""), then it's going to /home/username/backups
  • error_result: the actual error message as to why this is being triggered.
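
A sketch of a user_backup_failed.sh that records each failure. The log path and /tmp locations are assumptions for demonstration; a real hook might also send a notification.

```shell
# Hypothetical user_backup_failed.sh that logs each failed User backup.
cat > /tmp/user_backup_failed.sh <<'EOF'
#!/bin/sh
# A blank dest_path means the backup was headed for /home/<user>/backups.
DEST="${dest_path:-/home/${username}/backups}"
echo "backup of ${username:-?} (creator ${reseller:-?}) to ${DEST} failed: ${error_result:-unknown error}" \
        >> /tmp/da_failed_backups.log
exit 0
EOF
chmod 755 /tmp/user_backup_failed.sh

# Simulate a failure as DA might report it:
username=fred reseller=admin owner=admin dest_path=/home/admin/user_backups \
        error_result="tar returned non-zero" /tmp/user_backup_failed.sh
```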

user_backup_(pre|post).sh

The user_backup_pre.sh script will be called before anything is done for user backup creation. A non-zero return value will abort user backup creation.

When skipping a suspended User, the user_backup_pre.sh won't be run, since the abort happens before that.

The user_backup_post.sh script will be called after each user tar.gz backup file is created (any format from any level).

Environment variables:

  • username: username
  • reseller: resellername
  • file: /path/to/the/user.tar.gz

user_backup_success.sh

Script to be called after the backup of each User only if the backup succeeded. Any failures in the process will prevent this script from being called.

Environment variables

  • file: path to tar.gz file
  • username: DA user the backup is being made of/for
  • reseller: reseller that owns the user
  • owner: user who initiated the backup (empty string if the User themselves initiated it)
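
As a sketch, a user_backup_success.sh could record a checksum of every finished backup so later corruption or tampering can be detected. The checksum-log path and /tmp locations are assumptions for demonstration.

```shell
# Hypothetical user_backup_success.sh recording a SHA-256 per backup.
cat > /tmp/user_backup_success.sh <<'EOF'
#!/bin/sh
sha256sum "$file" >> /tmp/da_backup_checksums.log
exit 0
EOF
chmod 755 /tmp/user_backup_success.sh

# Simulate with a stand-in backup file:
echo "dummy data" > /tmp/fred.tar.gz
file=/tmp/fred.tar.gz username=fred reseller=admin owner= \
        /tmp/user_backup_success.sh
```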

user_restore_fail_post.sh

This hook script is called after a DirectAdmin tar.gz backup restore fails.

In addition to failures during the actual restore process, the following pre-check failures (which occur before the restore begins) also trigger user_restore_fail_post.sh:

  • Invalid or missing tar.gz filename path
  • Username cannot be parsed from the filename
  • Failure during creation of the account before the restore starts
  • Creator conflicts, e.g., Reseller bob already manages User fred, but Reseller george is trying to restore it
  • Bad usertype settings in the live user.conf
  • Memory allocation errors when creating a new User class instance
  • Read errors of the live User account prior to restore

Environment variables

  • username: user to be restored
  • usertype (user|reseller|admin): type of restored user
  • reseller: creator of the user to be restored
  • file: backup file name
  • source_path: source path of backup file
  • owner: user who initiated the restore
  • reason: reason for failure

Note that the reason value will include \n characters, as inserted into DA's message.

Note that owner and source_path will be empty strings if the restore is triggered by the User (User Level -> Create/Restore Backups), so be sure to check whether owner is blank before trying to use source_path (and check source_path as well).
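
A sketch of a user_restore_fail_post.sh that applies that blank check, substituting fallbacks when the User triggered the restore. The log path and /tmp locations are assumptions for demonstration.

```shell
# Hypothetical user_restore_fail_post.sh; owner and source_path are
# blank for User Level restores, so substitute sensible fallbacks.
cat > /tmp/user_restore_fail_post.sh <<'EOF'
#!/bin/sh
WHO="${owner:-the User}"
SRC="${source_path:-/home/${username}/backups}"
echo "restore of ${username:-?} from ${SRC}/${file:-?} (started by ${WHO}) failed: ${reason:-?}" \
        >> /tmp/da_failed_restores.log
exit 0
EOF
chmod 755 /tmp/user_restore_fail_post.sh

# Simulate a User Level restore failure (owner and source_path blank):
username=fred usertype=user reseller=admin file=backup.tar.gz \
        owner= source_path= reason="cannot parse username" \
        /tmp/user_restore_fail_post.sh
```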


user_restore_(pre|post).sh

The user_restore_pre.sh script is executed before a user restoration. Returning a non-zero exit value will abort the restore process and display any text/error output.

The user_restore_post.sh hook script is executed after a user restoration.

Environment variables

  • username : username to be restored
  • reseller : user.conf creator
  • filename : full path to backup file

The user_restore_post.sh will also have these additional environment variables:

  • success: whether a restore worked (set to 1) or not (set to 0).
  • result_string: holds the result string displayed in DA noting what errors occurred, if any.

user_restore_post_pre_cleanup.sh

This script is called after the restore of a User account, but just before the path is cleaned up.

This is called just before user_restore_post.sh, which is triggered after the temporary extracted data is cleaned up.

Environment variables

  • username : username to be restored
  • reseller : user.conf creator
  • filename : full path to backup file

Examples

Script to check disk usage before creating any Backups.

This script is to be used to prevent backups from being created if your disk usage is too high. This does not work with "System Backup" (it already has its own check). It works with all 3 Levels of DirectAdmin Backups (Admin, Reseller, and User).

  1. Create the script /usr/local/directadmin/scripts/custom/user_backup_pre.sh
  2. In that script, add the code:
#!/bin/sh

PARTITION=/dev/mapper/VolGroup00-LogVol00
MAXUSED=90

checkfree()
{
        DISKUSED=$(df -P "$PARTITION" | awk 'NR==2 {print $5}' | cut -d% -f1)
        echo "$DISKUSED < $MAXUSED" | bc
}

if [ "$(checkfree)" -eq 0 ]; then
        echo "$PARTITION disk usage is above $MAXUSED%. Aborting backup."
        exit 1
fi

exit 0

Where you'd replace /dev/mapper/VolGroup00-LogVol00 with the filesystem you want to check. The MAXUSED value is the percentage threshold of the partition to be used. Chmod the script to 755.

To see the list of filesystem names, type:

df -hP

where the filesystem names are on the far left (and the mount points on the far right).

Credit for this script: Dmitry Sherman.

I want to spread out the backup process over a longer period of time to lower the load on the box.

If you wish to keep your load average below a certain point, and not allow backups to run right away if the load is over that point, the following script will help. It checks your load average before each backup file is created. If the load is too high, it waits 5 seconds and checks again, repeating until the load is low enough to create the backup. If the load is still not below the threshold after 20 attempts, a non-zero value is returned, that user backup is skipped, and an error is reported in DA.

To use this script, place the following code into the file:

/usr/local/directadmin/scripts/custom/user_backup_pre.sh
#!/bin/sh
MAXTRIES=20
MAXLOAD=8.00

highload()
{
        LOAD=$(cut -d' ' -f1 /proc/loadavg)
        echo "$LOAD > $MAXLOAD" | bc
}

TRIES=0
while [ "$(highload)" -eq 1 ];
do
        sleep 5
        if [ "$TRIES" -ge "$MAXTRIES" ]; then
                echo "System load above $MAXLOAD for $MAXTRIES attempts. Aborting."
                exit 1
        fi
        TRIES=$((TRIES+1))
done
exit 0

Then chmod the new user_backup_pre.sh script to 755 and set the ownership to diradmin.

Limit the number of backups a User can create

If you want to limit the number of backups a User can create, to say 5, then you can create the following custom script to enforce it:

/usr/local/directadmin/scripts/custom/user_backup_pre.sh

and add the code:

#!/bin/sh
MAX_BACKUPS=5
# Check whether the file is being created under /home/<user>/
U=$(echo "$file" | cut -d/ -f3)
if [ "$U" != "$username" ]; then
   # File is not in this User's /home, so it's a Reseller or Admin backup.
   exit 0
fi
# File is being created below this User.
C=$(ls /home/"$username"/backups | wc -l)
if [ "$C" -ge "$MAX_BACKUPS" ]; then
   echo "Too many backups. Delete some from /home/$username/backups before creating another."
   exit 1
fi
exit 0

Now chmod the script to 755 and set the ownership to diradmin.

You can manually test the script like this:

file=/home/fred/backups/backup.tar.gz username=fred ./user_backup_pre.sh; echo $?;

This will either output 0, meaning the script allowed the creation, or the "Too many backups" error followed by a 1, where the non-zero exit status tells DA to abort the backup.

Last Updated: 6/23/2021, 9:36:08 PM