Version 1.39.0

Released: 2011-06-26



Admin Level -> Admin Backup/Transfer

For the Users table (list of all users on the system), use the show all users cache:


List files in set directory (either local path or ftp, whatever is set) and view current backup crons:

CMD_API_ADMIN_BACKUP with no options.


location=/home/admin/admin_backups (path to where the list of files is taken from)



num_files=123 representing the number of files that were found at that location.

The list of files is indexed from 0 to num_files-1, eg:


Any FTP errors will be set into the filesX variables, so check them to ensure they're valid filenames.

As well, the cron data is returned:


where the # next to cron is the cron ID number. It is not sequential, even though it may appear to be.

Don't assume it counts up from 1. Scan for all variables that start with "cron"; the number appended to each is the cron ID number.

The info beside each cron is a url encoded list of all cron settings for that ID, with the ftp_password removed.
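As a sketch, the response can be parsed like this in Python. The sample string and its filenames are made up; only the variable names (location, num_files, filesX, cronN) come from the description above:

```python
from urllib.parse import parse_qs

# Hypothetical response string; real output comes from CMD_API_ADMIN_BACKUP.
sample = ("location=%2Fhome%2Fadmin%2Fadmin_backups&num_files=2"
          "&files0=backup.Mon.tar.gz&files1=backup.Tue.tar.gz"
          "&cron3=minute%3D0%26hour%3D4")

fields = {k: v[0] for k, v in parse_qs(sample).items()}

# files are indexed 0 .. num_files-1
num_files = int(fields["num_files"])
files = [fields["files%d" % i] for i in range(num_files)]

# cron IDs are not sequential: scan for every key starting with "cron";
# each value is itself a url-encoded list of that cron's settings
crons = {int(k[4:]): fields[k] for k in fields if k.startswith("cron")}
```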

Create a backup, or create cronjob:



Instead of documenting every setting, please run DA in debug mode, execute a backup through the browser and note the variables output to the console.

Restore Backups:



As with action=create, please run DA in debug mode for the full list of variables passed to DA by a browser.

Delete a backup cron



select0=123 (backup id)


Modify a cron:



see debug mode output (id=293)

Should be similar to cron creation.

Save a backup setting:



message=yes|no #send a message when backup has finished. If "no", message will only be sent if there is a backup error.

local_ns=yes|no #if yes, the local ns1/ns2 values are used. If no, the ns1/ns2 values from the backups are used.

Ability to set headers when sending emails with "email only" method new

Admin Level -> Show All Users

Reseller Level -> List Users


Select the Users and click "Send Message" with the "email only" option.

This enables the emails to be HTML, or to have custom headers added.

You also have the ability to add your own email headers in welcome messages.

Note, you could already make an html email by simply adding:

<html>

as the very first line of the message, and DA would add the required headers, but now you can also add any header you want, eg:

|?HEADER=X-My-Header: purple-monkey|

but keep in mind that DA is fairly strict about header syntax and doesn't allow many special characters.

If you're having issues with this, try with a very basic character-set first, to determine if it's the characters used, or an error elsewhere.

If you're going to combine <html> with |?HEADER=..|, just make sure that <html> remains at the very first line, or else DA won't see it.

You can use |?HEADER=..| by itself if you want.
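A minimal sketch of assembling such a message body, assuming the directives are literal lines in the message text (the header name is the example above):

```python
# Build an "email only" message: <html> must remain the very first line,
# followed by any |?HEADER=..| directives, then the body.
def build_message(body_html, extra_headers):
    lines = ["<html>"]
    for h in extra_headers:
        lines.append("|?HEADER=%s|" % h)
    lines.append(body_html)
    return "\n".join(lines)

msg = build_message("<b>Hello</b>", ["X-My-Header: purple-monkey"])
```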

Default email account limit to 50 Meg (SKINS) new


This will be the default internal directadmin.conf value.

This represents the default number of Megabytes the quota field will show when creating a new email account.

The User can still change it to whatever they want, but it should help limit out-of-control inboxes.

Note that this only affects the form and does not affect the internal creation code of an email account.

This means that any API creating an email account will not be affected by this change.

Any other skins that are not updated will not be an issue; they'll just default to whatever they were defaulting to before (likely 0).






<input type=text name=quota size=16 value="|POP_QUOTA|">

per-user filemanager_du new


Ability to enable/disable this option on a per-user basis, to allow more control of large accounts.

You can manually set:




into a User's user.conf file, eg:


overriding the directadmin.conf option "filemanager_du" (internal default set to 1 if not present in the directadmin.conf).

Pre-authentication script new

Extra layer of security, should you wish to use it.

A strong password is good, but a strong password from approved IPs is better.

Integrated right into the user/pass authentication code, this new script allows the server admin to check the remote IP, user, password, and referer of the connection.

If the script exists, it will be called for all requests, since authentication is done for each request (even with sessions).

The script will enable the server admin to allow or deny a request based on any criteria they wish.

For example, if you know that admin should only be logging in from a specific range of IPs, you can write code to check the IP, compare it to the list, and approve/deny the request.

The benefit of this method of filtering is that if you deny the request (exit with a non-zero result) the standard login errors will appear. The person attempting to login will assume they've got an invalid password, and not realize they may be filtered based on their IP.

Any non-zero exit code here will count against the brute-force check, even if the correct password is passed.

A non-zero exit code will also add an entry into your error.log with any text you echo.

This is run before any passwords are even checked.

This is run before the demo accounts are checked, so if you use a demo, be sure to allow demo_user, demo_admin, and demo_reseller before your normal checks.

Sample /usr/local/directadmin/scripts/custom/ script:


<?php
$user = getenv('username');
$ip = getenv('ip');

// hypothetical placeholder: set to the IP you want to allow
$my_ip = "1.2.3.4";

//not worried about demos
if ($user == 'demo_user' || $user == 'demo_reseller' || $user == 'demo_admin')
        exit(0);

if ($ip != $my_ip)
{
        echo "Invalid IP";
        exit(1);
}

exit(0);
?>


Once saved, be sure to chmod the script to 700:

chmod 700 /usr/local/directadmin/scripts/custom/



Based on CMD_PUBLIC_STATS, for creating the /stats link for webalizer/awstats.

If nothing is passed, DA will return:




If stats=awstats is present, then you must use path=awstats, as mentioned below.

If you have webalizer, you can use any valid path you'd like.

Set the path:

action=public

domain=domain.com (this is just for the "back" button in the skins, so just pass any valid domain the user owns here)

path=stats where "stats" can be changed to whatever you'd like webalizer to point to.

path=awstats if awstats is enabled, then this value must be set to awstats, or the redirect/paths won't work correctly for cgi-based awstats.
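A sketch of building the query string for this request; the `domain` parameter name is an assumption (the text only says to pass a valid domain the user owns), while the path rules follow the description above:

```python
from urllib.parse import urlencode

def stats_query(domain, awstats_enabled, path="stats"):
    # awstats requires path=awstats, or the cgi redirect/paths break;
    # with webalizer any valid path is allowed
    if awstats_enabled:
        path = "awstats"
    return urlencode({"action": "public", "domain": domain, "path": path})
```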

Brute Force log scanner (SKINS) new

BETA - However, since it executes no blocking actions (only notifications), it will be enabled by default.

Admin Level -> Brute Force Monitor

DirectAdmin will now scan your service logs for any brute force login attempts on your system (dovecot, exim, proftpd, sshd)

Internal default directadmin.conf setting:


to disable the feature, add this to your directadmin.conf:


On every run of the dataskq, the following logs are scanned for any filesize increases or reductions (a reduction means the file was rotated):


Note that the internal default values of the above log paths will vary depending on OS.

If a change to any of the logs is found, the dataskq will open that log and jump to the last read position (so the entire file doesn't need to be re-scanned).

Notifications will be sent to all Admins on the system after an IP makes x number of attempts on any account:


or a user account received x number of attempts from any IP:


and the count is reset after x number of hours


meaning (for example) an IP can have 100 failed attempts within 24 hours before all Admins are notified.

If the IP has 99 failed attempts, waits 24 hours, then makes 99 more attempts, no notifications will be sent.

If you set clear_brute_log_time to 0, the counts will never be cleared, but this is not a good value to use, as things would bog down as the log grows.

Note that the start of the count is from the very first count, which can be more than 24 hours ago.

Example, if an IP makes an attempt once every 12 hours for 10 days, the Admins will be notified, because the last attempt was within the 24 hour window.

The first attempt remains until the last attempt expires.
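The reset rule can be sketched like this (a simplified model, not DA's actual code): the count only resets when more than clear_brute_log_time hours pass with no new attempt:

```python
def attempts_counted(attempt_hours, clear_after=24):
    """Return the count a notification check would see after the last
    attempt, given attempt timestamps in hours."""
    count = 0
    last = None
    for t in sorted(attempt_hours):
        if last is not None and t - last > clear_after:
            count = 0  # window expired with no activity: reset
        count += 1
        last = t
    return count

# one attempt every 12 hours for 10 days: the count never resets
total = attempts_counted(list(range(0, 240, 12)))
```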

It's important to note that DA does not scan the log entry's "time" value; it only takes note of the time when DA read the entry.

So if your log spans a long period, the initial run on it may trigger the notification even when the failed attempts fall outside your scanning parameters.

DA will still maintain old entries/counts even after the logs are rotated. It notices the logs are smaller than they were, then just starts from the start.

It notes the last size of a log, and will start from that point if the log has grown, to save on parsing (unless it shrunk, then it starts from the beginning, implying rotation)

The above values are internal defaults, so don't show up in the directadmin.conf until you add them (if you want to change the values).

They can be accessed from the skins so you don't need to do any manual editing:

Admin Level -> Admin Settings

There will be a set of filter definitions (multiple definitions for each service) stored in:


where you can also create a custom version here:


The /usr/local/directadmin/data/admin/brute.conf will be created if it doesn't exist, and will store the logs' last write and parse data, to prevent redundant parsing and improve efficiency.

For each filter line, there are multiple filter item definitions:


Note that the entry values will be URL encoded (really, just needed for = and & to prevent the url string from breaking).

A sample line will look like:

dovecot1=attempts_after=(auth failed,%20&attempts_until=%20attempts)&binary=dovecot&ip_after=rip%3D&ip_until=,&text=(auth failed&user_after=user%3D<&user_until=>

which would catch the log entry:

Jun 11 00:49:05 hostname dovecot: imap-login: Aborted login (auth failed, 2 attempts): user=<>, method=PLAIN, rip=, lip=, secured
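A sketch of how such a filter can be applied, using the sample dovecot1 line above (with the `dovecot1=` key stripped off). The IP and user in the log entry here are placeholders, since the original sample has them blanked out:

```python
from urllib.parse import parse_qs

filter_line = ("attempts_after=(auth failed,%20&attempts_until=%20attempts"
               "&binary=dovecot&ip_after=rip%3D&ip_until=,"
               "&text=(auth failed&user_after=user%3D<&user_until=>")
f = {k: v[0] for k, v in parse_qs(filter_line).items()}

entry = ("Jun 11 00:49:05 hostname dovecot: imap-login: Aborted login "
         "(auth failed, 2 attempts): user=<bob>, method=PLAIN, "
         "rip=203.0.113.9, lip=198.51.100.1, secured")

def extract(entry, after, until):
    # grab the text between the "after" marker and the next "until" marker
    start = entry.index(after) + len(after)
    return entry[start:entry.index(until, start)]

match = f["text"] in entry  # the required "text" value flags the entry
ip = extract(entry, f["ip_after"], f["ip_until"])
user = extract(entry, f["user_after"], f["user_until"])
attempts = int(extract(entry, f["attempts_after"], f["attempts_until"]))
```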

The filtername will be in the format of the service name, followed by a filter number.

The value of the number (eg: dovecot1) will affect the order in which the filters are run on the log entries, which does matter in a few cases (where the text is a subset of another filter's text)

The text value is the string in the entry that tells us whether it's a failed login, and is a required value.

The following values are optional:


The optional binary is only for exim, where the log is not shared: /var/log/exim/mainlog is exclusively for exim, and exim[1234] doesn't show up after the date in the log format, so we don't scan for it.

The binary variable is used to see if the log entry in the given log matches the filter; for example, we don't care about spamd lines when looking for dovecot.

This will email all Admins upon passing a certain threshold defined in the directadmin.conf (ip_brutecount and user_brutecount).

The script /usr/local/directadmin/scripts/custom/ and related scripts will be called, if they exist, allowing the server admin to take action if so desired, such as adding the IP to the firewall or blocking the account.

You may not want to do active blocking until the feature has received more testing, as we don't want to block ourselves by accident if there is a bug.

The env variables will be:

value=  (the flagged IP, or the username)
data=  (filter data, url encoded, with filter names and counts)
count=123  (added as of DA 1.39.2: total count of failed attempts)

Note that the per-user script does not pass an IP, so you probably won't be able to do much, but it's there if you want to do anything like suspend that account (not sure why you would).

Note that you'd have to use the data variable to figure out which filter name was flagged, to determine which service it applies to, eg: ftp, mail, or system accounts.
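A hypothetical sketch of what a hook script could do with those variables: parse the url-encoded data value and map the flagged filter names back to their services:

```python
from urllib.parse import parse_qs

def flagged_services(environ):
    value = environ.get("value", "")        # the flagged IP or username
    count = int(environ.get("count", "0"))  # total failed attempts (DA 1.39.2+)
    data = {k: int(v[0]) for k, v in parse_qs(environ.get("data", "")).items()}
    # filter names are a service name plus a number, eg "dovecot1" -> "dovecot"
    services = sorted({k.rstrip("0123456789") for k in data})
    return value, count, services

# made-up values; a real script would read os.environ directly
v, c, s = flagged_services({"value": "203.0.113.9", "count": "7",
                            "data": "dovecot1=4&proftpd1=3"})
```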

The dataskq debug level 100 will give you the log output. Also, any log output on a non-zero return value is saved to the errortaskq.log.

The login failure count for IPs is stored here:


and each IP row will store the number of hits for a specific filter id (eg: dovecot1=3&proftpd=1), in case the IP is attacking more than one service.

Also, DA will take note of which username login is being attempted.

A 2nd data file for usernames is stored here:


in case the brute force attack comes from many different IPs in smaller chunks (5 attempts per IP, but from 200 IPs, still adds up to 1000 attempts total, for when an attacker rotates their IPs frequently).

The format is the same, except a username instead of an IP.

Another item to this feature is the file:


it will contain the actual log entries that were a match to the auth fail filters.

A directadmin.conf setting:


will be present and represents the number of days this log will hold the entries.

Each entry has a time flag, so they'll be percolated out of the list file as their time expires.

This log will be used as a database for a new SKINS page for info on what each IP was doing, and which IPs were attacking a particular user account.

When an IP is removed from the brute_log_entries.list, this script is called:


and the variable:


is passed to the script. For each clearing of the log, an IP is only sent to this script once, so the script isn't run many times on the same IP (within that minute).

Note: The IPs sent to this script may not have been sent to you as a notification, meaning they may not have been passed to the notification script.

Any IP that had even one bad connection will be passed to this script, so ensure that your script can handle the case where the IP was never passed to the notification script.

This script handles entry removal from the brute_log_entries.list, which stores all failed logins, even a single failed attempt.

Also, it may be called in successive minutes as each entry gets cleared out, so ensure your script can handle clearing the same IP multiple times.

Note: Due to the potentially large number of records possible in the brute force log (we've tested up to 16000 entries), the sorting algorithm for that table had to be changed to a quicker method.

However, this prevents the "sub sort" from working correctly (where it's sorted twice, for matches on the first sort). Most people will never use the sub-sort anyway, but without this change, that high number of records would cause a timeout on the CMD_BRUTE_FORCE_MONITOR page.

With this change, sorting 16000 records can be done in roughly 2 seconds, vs 60+ seconds with the previous sort (unknown how much time it actually needed, as it timed out, and that's no good regardless).



<a href="CMD_BRUTE_FORCE_MONITOR">Brute Force Monitor</a><br>


new file, see release for html.


<td class="list top-border">
Parse service logs for brute force attacks
<td class="list top-border">
<input type=radio name=brute_force_log_scanner value="yes" |BRUTEFORCELOGSCANNER_YES|>Yes&nbsp;&nbsp;&nbsp;<input type=radio name=brute_force_log_scanner value="no" |BRUTEFORCELOGSCANNER_NO|>No
&nbsp;&nbsp;<a target=_blank href="[Brute Force log scanner (SKINS)](/changelog/version-1.39.0.html#brute-force-log-scanner-skins)">(?)</a>
&nbsp;&nbsp;<a href="CMD_BRUTE_FORCE_MONITOR">View Log</a>

<td class=list>
Notify Admins after an IP has
<td class=list>
<input type=text name=ip_brutecount value="|IP_LOGIN_FAILURE_THRESHOLD|" size=4> login failures on any account.
<td class=list>
Notify Admins after a User has
<td class=list>
<input type=text name=user_brutecount value="|USER_LOGIN_FAILURE_THRESHOLD|" size=4> login failures from any IP.
<td class=list>
Reset count of IP/User failed attempts
<td class=list>
<input type=text name=clear_brute_log_time value="|CLEAR_BRUTE_LOG_TIME|" size=4> hours after last attempt.
<td class=list>
Clear failed login attempts from log
<td class=list>
<input type=text name=clear_brute_log_entry_time value="|CLEAR_BRUTE_LOG_ENTRY_TIME|" size=4> days after entry was made.
</tr>

Reseller IP move script and action=rewrite&value=ipcount new

ALPHA 0.1 - still in testing, use at your own risk.

New script to move all Users/Domains under a Reseller (including the Reseller) to a new IP.


cd /usr/local/directadmin/scripts
./ reseller

The first argument is the old IP; it does not need to actually exist on the box (in case you managed to get yourself out of sync somehow). The second argument is the new IP the Users and the Reseller are being moved to. This new IP must be set as "shared".

Thanks to Martynas (smtalk) for this contribution.

Also, since the counts will change, a new task.queue entry forces an immediate IP/User count:

echo "action=rewrite&value=ipcount" >> /usr/local/directadmin/data/task.queue

Note that this happens with the nightly tally already, but we're calling it right away with the above script to prevent confusion.

Brute Force IP Info Page and custom script (SKINS) new

Related guide to use it, including example with a working iptables firewall:

An extension to the Brute Force Monitor (BFM): Brute Force log scanner (SKINS)

This new feature shows "dig -x" information for a given IP. It can only be called by Admins.

If /usr/bin/dig does not exist, the option will not show up in the first IP table on the BFM page.

If it does, a 4th column is added called "IP Info", which you can click for a given IP, taking you to the next page, showing the dig output.

Any IP can be specified in the URL, but of course, a valid IP must be used.

New internal directadmin.conf option, set by default to:


If you create the custom script:


DA will then show you another table on the IP Info page.

This new table will simply contain one button: "Block IP"

When clicked, the custom script is executed, and the variable:


is passed to the script.

The purpose of this script is to more easily let you take action on that IP, without needing to login to ssh (eg: to update a firewall rule).

Just be very, very careful if/when you do this, as if your script has an error, you may end up blocking yourself.

All output from the script will be displayed, so you can generate whatever output you'd like.

Note that DA does check for zero and non-zero exit statuses, so ensure you exit 0 if all went well, and exit 1 on error.

The script has root access, so if you want to know how that IP got into the list, scan:


for that IP's entry, and then decode the URL encoded value to get the specifics on the login failures.

It will be up to you to ensure that the IP does not get blocked twice, if blocking it twice is an issue.



see update for file.




For the API version of CMD_PHP_SAFE_MODE

View all domains, default settings


Calling it with no options returns:

safemode=ON|OFF default setting for new domains

open_basedir=ON|OFF default setting for new domains

These options are not valid domain names, hence we don't need to worry about a conflict here.

Also included in the list is the full safemode cache file for all domains.

Sample row:

Set a domain



enable|disable=anything           #pass the variable you want for safemode; don't pass the other. The value can be anything.

enable_obd|disable_obd=anything   #same idea, but for open_basedir.

The output here is a URL encoded result.
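A sketch of assembling the request variables for setting a domain, following the rule that only one of each pair is passed (the `domain` parameter name is an assumption):

```python
def safemode_params(domain, safemode_on, open_basedir_on):
    # pass only one of enable/disable and one of enable_obd/disable_obd;
    # the value itself can be anything
    params = {"domain": domain}
    params["enable" if safemode_on else "disable"] = "yes"
    params["enable_obd" if open_basedir_on else "disable_obd"] = "yes"
    return params
```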

Set defaults



enabled=ON|OFF #default for safemode

obdenabled=ON|OFF #default for open basedir

backup_gzip=0 extracting domains directory as tar.gz fixed

Related to this new feature:

backup option for tar instead of tar.gz

The /home/user/domains directory extraction from the tar backup file was using the full xzfp tar flags instead of xfp.


As a workaround, gzip the tar file to make it tar.gz, then restore with backup_gzip=1.
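The workaround can be scripted, for example (paths are placeholders; this just gzip-compresses an existing .tar so it can be restored with backup_gzip=1):

```python
import gzip
import shutil

def gzip_tar(tar_path):
    """Compress an existing .tar backup into .tar.gz."""
    gz_path = tar_path + ".gz"
    with open(tar_path, "rb") as src, gzip.open(gz_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    return gz_path
```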

Awstats rename domain not swapping configs/files fixed

If the account is using awstats, and the domain is renamed, the configs and files were not updated.

Note that the index.html symbolic link is simply removed, as it will be recreated with the next run of awstats.

All html files will be renamed to have the new domain.

All .data/*.txt files will also be renamed.

The .data/*.txt files will not have their internal data swapped, as that would be tampering with historical data, so we're not going to do that (unless there are reported issues of the wrong domain appearing in these files).
