Backup to remote locations

Currently, only one remote backup option is available out of the box: transferring backups to a remote FTP server. It is configured from the Admin Backup/Transfer GUI. FTPS can also be used.

Manually testing the ftp_upload.php script

If you're trying to debug ftp backup upload issues, you can run the ftp_upload.php script manually to get more information about what's going on.

To do this, type the following, replacing the values with those for your setup:

cd /usr/local/directadmin/scripts
ftp_port=21 ftp_local_file=/path/to/a/file.txt ftp_ip=1.2.3.4 ftp_username=fred ftp_password_esc_double_quote=fredspass ftp_path=/remote/path ftp_secure=ftps ./ftp_upload.php

The script should run, and if there are issues, they should be displayed on the screen.

Also check the ftp logs on the remote server, and consider running the remote ftp server in debug mode.

Environment variables for ftp_upload.php

If you're making custom changes to the ftp_upload.php, you'll want to know all of the variables you have to work with. Below is a sample list of variables and values for a cron ftp backup with ID number 1, for a single selected testuser:

action=backup
append_to_path=nothing
database_data_aware=yes
dayofmonth=5
dayofweek=*
email_data_aware=yes
ftp_ip=127.0.0.1
ftp_local_file=/home/tmp/admin/user.admin.testuser.tar.gz
ftp_local_path=/home/tmp/admin
ftp_password=pass"word
ftp_password_esc_double_quote=pass\"word
ftp_path=/admin_backups
ftp_port=21
ftp_remote_file=user.admin.testuser.tar.gz
ftp_username=admin
hour=5
id=1
minute=0
month=1
owner=admin
select0=testuser
type=admin
value=multiple
when=now
where=ftp

This output was generated by doing:

cd /usr/local/directadmin/scripts/custom
cp /var/www/cgi-bin/printenv ftp_upload.php
echo "exit 1;" >> ftp_upload.php
chmod 755 ftp_upload.php
./ftp_upload.php   

In my case, I did need to change the first shebang line of the file from #!/usr/local/bin/perl to #!/usr/bin/perl, but your perl binary may already be at the expected path.
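If you don't have the perl printenv script on your system, a plain shell dumper works just as well. This is a minimal sketch of an alternative custom ftp_upload.php:

#!/bin/sh
# dump every environment variable DirectAdmin passes in, then fail
# on purpose so the backup stops and the output is shown
env | sort
exit 1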

Once created, run your desired backup with ftp; it will throw an error, and you'll then see all of the variables you get for that backup. Then delete the custom ftp_upload.php and create your own (or copy the default script from one directory above).


The typical need for this would be if you want different backup IDs to take different actions. Say you want backup ID 1 to be uploaded via scp rather than ftp; this would let you add a check like:

if [ "$id" = "1" ]; then
   #scp upload code
   exit 0;
fi

before the ftp section, so ID 1 uses scp and everything else uses ftp (just as an example). You can also check any other variable, such as the username, password, remote IP, path, etc.
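As a fuller illustration, here is a sketch of what that scp branch might look like, assuming key-based ssh authentication is already set up; the backup2.example.com host and /backups/ path are placeholders, not part of the stock script:

if [ "$id" = "1" ]; then
   # hypothetical scp destination; adjust the host and path for your setup
   scp "$ftp_local_file" backupuser@backup2.example.com:/backups/
   exit $?
fi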

Note: if you use "When: Now", then no "id" will be passed. But if you select an existing cron backup and click "Run Now" for that existing cron ID, the id will be passed.

How to convert ftp_upload.php to use ncftpput or curl instead of php

Note: this guide is getting old and should only be used as a general reference for how you might edit the ftp_upload.php. The newer ftp_upload.php supports curl if you select FTPS, and uses php uploads if you select plain FTP.

Using the scripts below may work in some cases, but they can also break the "Append to Path" option, as they don't handle it correctly, producing backup names like:

Mondayuser.admin.fred.tar.gz

which is not likely what was intended.
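If you do use a custom script and rely on "Append to Path", one way to handle it yourself (a sketch, based on the append_to_path variable from the list above, which holds the literal string "nothing" when the option is unused, and assuming the appended value is meant to become a subdirectory of the remote path) is:

# fold the Append to Path value into the remote directory
if [ -n "$append_to_path" ] && [ "$append_to_path" != "nothing" ]; then
   ftp_path="$ftp_path/$append_to_path"
fi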


PHP does support ftp uploads, and this is what DirectAdmin uses for file uploads to remote ftp servers. In some cases, the errors generated by php are not sufficient to debug an ftp problem, so converting this script to use a different ftp client may help.

You can convert /usr/local/directadmin/scripts/ftp_upload.php to use curl instead of php by first copying it to the custom location:

cp -rp /usr/local/directadmin/scripts/ftp_upload.php /usr/local/directadmin/scripts/custom/ftp_upload.php

and making the copy's contents look like this:

#!/bin/sh
# CURL Backup Transfer
# Version 0.1a
# Copyright 2006, Sensson (www.sensson.net)
#
# This script makes it possible to transfer
# backups using your secondary uplink
# like eth1.

ETH=eth0
CURL=/usr/local/bin/curl

result=`$CURL --interface $ETH -T "$ftp_local_file" -u "$ftp_username:$ftp_password_esc_double_quote" "ftp://$ftp_ip$ftp_path$ftp_remote_file" 2>&1`

if echo "$result" | grep -q -i "curl: (67) Access denied: 530"; then
          echo "FTP access denied. Please check your login details."
          exit 1
fi
if echo "$result" | grep -q -i "curl: (6) Couldn't resolve host"; then
          echo "Host could not be resolved. Please check your host details."
          exit 1
fi
if echo "$result" | grep -q -i "curl: (9) Uploaded unaligned file size"; then
          echo "File could not be uploaded. Please check your path."
          exit 1
fi
if echo "$result" | grep -q -i "curl: Can't open"; then
          echo "Can't open $ftp_local_file"
          exit 1
fi

exit 0

Be sure to set the ETH value appropriately for your network device. Also, the ftp_path value must have a trailing slash to work correctly.

You can remove the --interface $ETH portion of the command if you do not need to specify any interface for curl to bind to.
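Since the stock script switches to curl when FTPS is selected, you could mimic that here by checking the ftp_secure variable (shown in the manual test example near the top); a sketch using curl's --ssl-reqd option:

# require TLS on the control and data connections when FTPS was selected
SSL_OPT=""
if [ "$ftp_secure" = "ftps" ]; then
   SSL_OPT="--ssl-reqd"
fi

You would then add $SSL_OPT to the curl command above.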

Reference: http://www.directadmin.com/forum/showthread.php?s=&threadid=11385

The same method can be used to convert the script to ncftpput instead of curl or php. Edit the custom ftp_upload.php script and insert the following code instead:

#!/bin/sh
# Upload the backup file with ncftpput, using a 25 second timeout (-t 25)
# and creating the remote directory if needed (-m)
/usr/bin/ncftpput -t 25 -m -u "$ftp_username" -p "$ftp_password_esc_double_quote" "$ftp_ip" "$ftp_path" "$ftp_local_file" 2>&1
RET=$?
exit $RET
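Note that this snippet ignores the ftp_port variable and always uses the default port; if you run ftp on a non-standard port, you may want to pass it through with ncftpput's -P option (for example, -P "$ftp_port").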

Lastly, the original version of the script uses php:

#!/usr/local/bin/php
<?php

$use_pasv = true;

$ftp_server = getenv("ftp_ip");
$ftp_user_name = getenv("ftp_username");
$ftp_user_pass = getenv("ftp_password");
$ftp_remote_path = getenv("ftp_path");
$ftp_remote_file = getenv("ftp_remote_file");
$ftp_local_file = getenv("ftp_local_file");

$conn_id = ftp_connect($ftp_server);
if (!$conn_id)
{
          echo "Unable to connect to $ftp_servern";
          exit(1);
}

$login_result = ftp_login($conn_id, $ftp_user_name, $ftp_user_pass);

if (!$login_result)
{
          echo "Inavalid login/password for $ftp_user_name on $ftp_server";
          ftp_close($conn_id);
          exit(2);
}

ftp_pasv($conn_id, $use_pasv);

// Try to create the remote path; this may fail (and warn) if the
// directory already exists, in which case the chdir below is the real check.
ftp_mkdir($conn_id, $ftp_remote_path);

if (!ftp_chdir($conn_id, $ftp_remote_path))
{
          echo "Invalid remote path '$ftp_remote_path'";
          ftp_close($conn_id);
          exit(3);
}

if (ftp_put($conn_id, $ftp_remote_file, $ftp_local_file, FTP_BINARY))
{
          ftp_close($conn_id);
          exit(0);
}
else
{
          $use_pasv = false;

          ftp_pasv($conn_id, $use_pasv);

          if (ftp_put($conn_id, $ftp_remote_file, $ftp_local_file, FTP_BINARY))
          {
                    ftp_close($conn_id);
                    exit(0);
          }
          else
          {
                    ftp_close($conn_id);
                    echo "Error while uploading $ftp_remote_file";
                    exit(4);
          }
}

?>
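If you need FTPS from php in a script like this, php's ftp_ssl_connect() can be swapped in for ftp_connect(), assuming your php build includes SSL support; a minimal sketch of the change:

// use an explicit FTPS control connection instead of plain FTP
$conn_id = ftp_ssl_connect($ftp_server);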

How to convert ftp_download.php to use ncftpget instead of php

If you wish to use a different ftp downloader for backup files, ncftpget can be used to replace php. To convert to ncftpget, create /usr/local/directadmin/scripts/custom/ftp_download.php and add the following code:

#!/bin/sh

FTPGET=/usr/bin/ncftpget
TOUCH=/bin/touch
PORT=${ftp_port}

if [ ! -e $TOUCH ] && [ -e /usr/bin/touch ]; then
       TOUCH=/usr/bin/touch
fi

if [ ! -e $FTPGET ]; then
       echo "";
       echo "*** Backup not downloaded ***";
       echo "Please install $FTPGET by running:";
       echo "";
       echo "cd /usr/local/directadmin/scripts";
       echo "./ncftp.sh";
       echo "";
       exit 10;
fi

# write a temporary ncftp config file holding the login credentials
CFG=${ftp_local_file}.cfg
/bin/rm -f $CFG
$TOUCH $CFG
/bin/chmod 600 $CFG
/bin/echo "host $ftp_ip" >> $CFG
/bin/echo "user $ftp_username" >> $CFG
/bin/echo "pass $ftp_password_esc_double_quote" >> $CFG

$FTPGET -C -f $CFG -V -t 25 -P $PORT "$ftp_ip" "$ftp_path/$ftp_remote_file" "$ftp_local_file" 2>&1
RET=$?

/bin/rm -f $CFG

exit $RET

And make it executable:

chmod 755 /usr/local/directadmin/scripts/custom/ftp_download.php
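You can test it manually the same way as ftp_upload.php. This is a sketch with placeholder values (the variable names match what the script above reads; adjust the values for your setup):

cd /usr/local/directadmin/scripts/custom
ftp_port=21 ftp_local_file=/home/tmp/user.admin.testuser.tar.gz ftp_ip=1.2.3.4 ftp_username=fred ftp_password_esc_double_quote=fredspass ftp_path=/admin_backups ftp_remote_file=user.admin.testuser.tar.gz ./ftp_download.php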

How to slow down the backup rate and not flood the remote ftp server

If you've got many Users to back up, you may want to slow down the backup process by adding a pause after each User. This reduces the load on the server by allowing any queued processes to catch up, and also slows the rate at which the backup system connects to the remote ftp server (if you're using that option). Some ftp servers limit the connection rate and may block the backup if connections arrive too quickly.

To add a pause between each backup, create the /usr/local/directadmin/scripts/custom/user_backup_post.sh file and add the code:

#!/bin/sh
sleep 20
exit 0;

And make it executable:

chmod 755 /usr/local/directadmin/scripts/custom/user_backup_post.sh

This will pause the backup process for 20 seconds after each User's tar.gz is created.
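If you'd rather have the pause adapt to the server's load instead of being fixed, a load-aware variant is possible. This is a sketch (Linux-specific, since it reads /proc/loadavg) that keeps sleeping while the 1-minute load average is at or above 4:

#!/bin/sh
# pause 20 seconds after each User backup...
sleep 20
# ...and keep waiting while the 1-minute load average is 4 or higher
while [ "$(awk '{print int($1)}' /proc/loadavg)" -ge 4 ]; do
    sleep 10
done
exit 0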

How to build a redundant backup server

Many server admins look to add a level of redundancy to their systems. DirectAdmin is ultimately designed around a single-server principle, and mirroring of User data is not a default option in DA.

The Multi Server Setup page offers dns clustering to mirror dns zones, and there is an option to run MySQL on a remote box, but neither will actually mirror the User's web data.

There are a few options admins can use to create a truly redundant setup, none of which are set up by default in the DirectAdmin design.

  1. Using the Admin Backup/Transfer tool, nightly backups can be made and transferred to a remote DirectAdmin box. This was only intended for backing up your data, but you can create an automated cron restore that takes these backups and inserts them into the other DirectAdmin box after they've been transferred, giving you a "mirror" of the first box, but with a different IP. (Note that this will mess up the IPs if the backup box is used with the multi-server setup dns, since the local IP from the restore will override the remote IP of the dns cluster.) To create a cron restore, go to Admin Level -> Admin Backups/Transfers and restore the backup files as you normally would. Very quickly after issuing the restore, type
cat /usr/local/directadmin/data/task.queue

before the dataskq runs it when the next minute flips over. The code that is output is what you add to the task.queue from your cron each time you want to restore a backup.
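For example, an /etc/cron.d entry along these lines would queue the restore nightly (a sketch; the echoed string is a placeholder for the exact line you captured from your own task.queue):

# /etc/cron.d/da_restore (sketch): queue the captured restore line nightly at 04:30
30 4 * * * root echo 'PASTE_THE_CAPTURED_TASK_QUEUE_LINE_HERE' >> /usr/local/directadmin/data/task.queue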

  2. Another method, which is probably far cleaner, doesn't require 2 DirectAdmin licenses, and uses far less bandwidth, but may be trickier to set up, is to use rsync. Rsync is an open source program that most systems already have, used to transfer data between systems quickly, efficiently, and securely. Its benefit is that it checks which files have and have not changed, and only transfers the updated data, saving you a lot of bandwidth. We won't get into how to use rsync in depth, as there are hundreds of guides online, but see the sketch after this list for the general idea. You will need to use the path list to know which files to copy over. Just be very careful with /etc/passwd, /etc/group, /etc/shadow, etc., as overwriting system files can potentially take down your system if you get it wrong. If you're not completely confident in what you are doing, it's probably best not to transfer system files like those.
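To give a feel for the rsync approach, here is a minimal sketch, assuming ssh key authentication is already set up to a hypothetical backup1.example.com and that you're mirroring a single User's home directory (the host and paths are placeholders):

# mirror one User's home directory over ssh, preserving permissions and times
rsync -avz -e ssh /home/fred/ root@backup1.example.com:/home/fred/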