Released: 2019-08-27
You can now create a one-time use URL which will automatically log you in as the specified User.
NOTE: Must have Login Keys enabled for the given User in their user.conf.
For example, to send someone a login URL to be logged in as admin, type:
/usr/local/directadmin/directadmin --create-login-url user=admin
which will output something similar to:
URL: http://1.2.3.4:2222/CMD_LOGIN_URL?hash=cJbIk9GNsXk43....xmAHSTaKFiFe
where the hash value is a random length between 120 and 148 characters.
The hash is saved to /usr/local/directadmin/data/admin/login_hashes.conf, but it will be crypted as the "left-side" index, with details about this hash on the right-side.
Thus, a lookup of a given hash must cycle through each item, testing the crypt until found.
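A minimal sketch of that cycling lookup, using `openssl passwd` as a stand-in for the crypt call. The file format, field names, and the "secrethash" value are illustrative assumptions, not DA's exact format:

```shell
# Hypothetical sketch: the plain hash is never stored, only a crypted
# copy as the left-side index, so a lookup must re-crypt the candidate
# with each entry's salt and compare.
conf=$(mktemp)
crypted=$(openssl passwd -1 -salt ab secrethash)   # what would be stored
echo "$crypted=user=fred&created=1566000000" > "$conf"

candidate="secrethash"   # hash presented via CMD_LOGIN_URL
found="no"
while IFS= read -r line; do
  stored=${line%%=*}                                # crypted left-side index
  salt=$(printf '%s' "$stored" | cut -d'$' -f3)     # salt from $1$SALT$...
  if [ "$(openssl passwd -1 -salt "$salt" "$candidate")" = "$stored" ]; then
    found="yes"
  fi
done < "$conf"
echo "found=$found"
```

Because each entry requires a fresh crypt, the cost grows linearly with the number of outstanding hashes, which is fine at this scale.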
It does create a Login Key under this User, but instead of a crypted key, it saves "key=hash", signifying that it's a login URL, standing by.
The password for this key is never seen.
The original login hash is removed from the global login_hashes file.
Cookie is sent, and the login works just like any other Login Key.
By default, Login Hashes live for 3 days, including the end of the Login Key time.
So you have up to 3 days to log in and log out (the time is not extended upon hash-to-key conversion).
You can set a different time by adding:
expiry=1d
for example, to the --create-login-url options list.
Valid time units are:
s,m,h,d,M,y
and ARE case sensitive.
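A sketch of converting an expiry value like "1d" into seconds. The second-counts for M and y are my own approximations (30 and 365 days); DA's exact month/year lengths aren't specified here:

```shell
# Parse "<number><unit>" (units s,m,h,d,M,y, case sensitive)
expiry="1d"
n=${expiry%?}        # numeric part: "1"
unit=${expiry#"$n"}  # unit character: "d"
case "$unit" in
  s) secs=$n ;;
  m) secs=$((n*60)) ;;
  h) secs=$((n*3600)) ;;
  d) secs=$((n*86400)) ;;
  M) secs=$((n*2592000)) ;;   # 30 days, assumed
  y) secs=$((n*31536000)) ;;  # 365 days, assumed
esac
echo "$expiry = $secs seconds"
```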
IPs: You can list one or many IPs or 1.2.3.4-7 ranges by adding this to the options:
ips=1.2.3.4,5.6.7.8-9
Similar to the Login Keys, you can control which CMDs are allowed or denied by doing something like:
deny=ALL_RESELLER,CMD_LOGIN_KEYS,CMD_API_LOGIN_KEYS
which would block all Admin Level functions for this URL hash.
Just be careful if you block ALL_ADMIN, as it's difficult NOT to make Admin Level calls for some things like ajax counts, etc.
When logged in with a login hash, upon clicking "Logout" (CMD_LOGOUT), it will destroy the session but will also delete this Login Key, so it doesn't hang around after.
It should get deleted eventually anyway, after the expiry hits, during various cleanup operations.
You can also get json output by calling it with json=yes:
/usr/local/directadmin/directadmin --create-login-url user=admin json=yes
which would output something like:
{
"allow_htm": "yes",
"clear_key": "yes",
"expiry_timestamp": "1566096767",
"hash": "QTyjeGyhIDpZLit4....abZ2UJCczm1U",
"keyname": "HASHURLvicJDn5L",
"max_uses": "0",
"url": "http://1.2.3.4:2222/CMD_LOGIN_URL?hash=QTyjeGyhIDpZLit4....abZ2UJCczm1U"
}
New directadmin command line options let you suspend/unsuspend accounts from ssh as root:
Suspend account (any account type):
/usr/local/directadmin/directadmin --suspend-user user=fred
Unsuspend account:
/usr/local/directadmin/directadmin --unsuspend-user user=fred
Suspend domain:
/usr/local/directadmin/directadmin --suspend-domain domain=domain.com
Unsuspend domain:
/usr/local/directadmin/directadmin --unsuspend-domain domain=domain.com
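The four commands above can easily be scripted for bulk operations. A minimal sketch (dry run: the commands are echoed rather than executed, so nothing here assumes a working directadmin binary; drop the `echo` to actually run them):

```shell
# Hypothetical bulk-suspend helper around the new CLI options
DA=/usr/local/directadmin/directadmin
suspend_all() {
  for u in "$@"; do
    # dry run: print the command that would be executed
    echo "$DA --suspend-user user=$u"
  done
}
cmds=$(suspend_all fred alice)
printf '%s\n' "$cmds"
```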
UPDATE: Please follow this guide to enable this feature:
==================================
NOTE: RoundCube 1.3.10 requires version 0.2 of the direct_login module.
If you're having issues, please:
./build update
./build roundcube
Issue report:
https://github.com/alexjeen/Roundcube-AutoLogin/issues/9
New feature, found on the E-Mail Accounts page of the User Level, where the "Login" column will show extra characters (arrow and letter) to signify that the one-click login method is enabled.
By default it's disabled (for now), with the internal default being:
one_click_webmail_login=0
To enable it, run:
cd /usr/local/directadmin
./directadmin set one_click_webmail_login 1
service directadmin restart
cd custombuild
./build update
./build dovecot_conf
./build exim_conf
./build roundcube
Requires CustomBuild build script at least rev 2148.
==========================
FUNCTIONALITY
CMD_WEBMAIL_LOGIN
email=fred@domain.com
A temporary entry is then created in the passwd_alt file, e.g.:
fred:$1$5jhn5mhn$q8oyAqlkAYXd7KJlSRqlY.::::::created=1566594254
with only the created timestamp as additional information, as the file is only used as the dovecot passdb, not the user info (which still uses the passwd file).
/var/www/html/roundcube/direct_login/tokens/TOKENHASH
Where the TOKENHASH file contains the encoded email,password,client IP, and creation time.
<host>/roundcube/direct_login/index.php
with token=TOKENHASH
There is a 10-second window from the time in the token, after which the token is denied.
The TOKENHASH file should be deleted regardless of whether it worked or not.
Only the IP used to create the token through DA/2222 is allowed to use this token.
The direct_login code in RC only allows tokens to live for 10 seconds.
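The freshness check boils down to simple timestamp arithmetic. A sketch of the assumed logic (the real check lives in the direct_login module's PHP, not in shell):

```shell
# Assumed token-freshness check: accept only if the token is <= 10s old
token_created=$(( $(date +%s) - 5 ))   # simulate a token made 5s ago
now=$(date +%s)
age=$(( now - token_created ))
if [ "$age" -le 10 ]; then
  status="accepted"
else
  status="denied"
fi
echo "token $status"
```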
The tally check cleans them up if they're more than an hour old (it only runs once a day, so they could sit a while if the request didn't go through).
passwd_alt entries are cleaned up if they're older than 16 hours old.
Any changes/additions to a passwd_alt by DA would do the check to clear them out sooner, but the task.queue would still clean them up daily if they're more than 16 hours old at the time of the check.
This means your login window is at most 16 hours, which should be plenty.
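The age-based cleanup described above is equivalent to a find-by-mtime sweep (16 hours = 960 minutes). A self-contained sketch on a temp directory; the real cleanup is internal to DA's tally/task.queue, and `touch -d` is GNU-specific:

```shell
# Simulate the 16-hour cleanup on a scratch directory
dir=$(mktemp -d)
touch -d '17 hours ago' "$dir/old_entry"   # past the 16h cutoff
touch "$dir/fresh_entry"                   # still valid
find "$dir" -type f -mmin +960 -delete     # 960 min = 16 hours
ls "$dir"
```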
==========================
TECHNICAL
http://files1.directadmin.com/services/all/auto_login/
You shouldn't need to know this, but RoundCube, Dovecot, and the exim.pl do require changes for this to work.
CustomBuild will be able to do this for you, just run:
./build update
./build dovecot_conf
./build roundcube
after enabling the directadmin.conf setting.
Very rough (not for anyone to use, just as a reference)
wget -O /etc/dovecot/conf/alternate_passwd.conf http://files1-new.directadmin.com/services/all/auto_login/dovecot/alternate_passwd.conf
cd /var/www/html/roundcube
wget http://files1-new.directadmin.com/services/all/auto_login/roundcube/roundcube_direct_login-0.1.tar.gz
tar xvzf roundcube_direct_login-0.1.tar.gz
chown -R webapps:webapps direct_login
chmod 711 direct_login
chmod 700 direct_login/tokens
/etc/exim.pl needs to be version 28 or higher, to parse the passwd_alt files.
==========================
SKINS
user/email/pop.html
added JS+form to be triggered by the login column:
|*if HAVE_ONE_CLICK_WEBMAIL_LOGIN="yes"|
<script type="text/javascript">
<!-- // start preload code
function webmail_login(email)
{
document.getElementById("webmail_email").value = email;
document.getElementById("webmail_form").submit();
}
// done with preload code -->
</script>
<form id='webmail_form' action='CMD_WEBMAIL_LOGIN' method='POST'>
<input id='webmail_email' type='hidden' name='email' value=''>
</form>
|*endif|
==========================
JSON
You'll see this token:
"HAVE_ONE_CLICK_WEBMAIL_LOGIN": "yes",
which can be used to POST the above form to CMD_WEBMAIL_LOGIN as needed.
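A hypothetical scripted flow: check for the JSON token, then build the POST. The curl command is only echoed here (dry run), since a real call needs a valid session cookie, which is omitted; the host/port are the example values from above:

```shell
# Detect the feature flag in the JSON, then construct the webmail POST
json='"HAVE_ONE_CLICK_WEBMAIL_LOGIN": "yes",'
cmd=""
if printf '%s' "$json" | grep -q '"HAVE_ONE_CLICK_WEBMAIL_LOGIN": "yes"'; then
  # dry run: echo the curl call instead of executing it
  cmd='curl -X POST -d "email=fred@domain.com" "http://1.2.3.4:2222/CMD_WEBMAIL_LOGIN"'
fi
echo "$cmd"
```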
If you need to reset every User-created zone back to the default (same as the "Reset Defaults" button when viewing a zone), you can use the following task.queue command to do it:
cd /usr/local/directadmin
echo "action=reset&value=all_zones" >> data/task.queue; ./dataskq d2000
Note, it only resets domains which are listed in /etc/virtual/domainowners
It does not reset other zones.
It's highly recommended you backup all of your zones prior to the reset, in case it does not give you the desired result.
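A sketch of that recommended backup step. This version works on a temp copy so it is self-contained; in practice you would point it at your real zone directory (typically /var/named on Linux, an assumption that varies by OS):

```shell
# Archive zone files before running the reset
zones=$(mktemp -d)                         # stand-in for /var/named
echo "fake zone data" > "$zones/domain.com.db"
backup="$zones.tar.gz"
tar czf "$backup" -C "$zones" .
count=$(tar tzf "$backup" | grep -c 'domain.com.db')
echo "archived $count zone file(s) to $backup"
```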
The old default was to restore with the NS records from the backups.
This change will have new installs default to the local NS records, overriding the zones.
Existing installs should not be affected as long as you've already done a backup/restore.
If unsure, please check your backup settings to ensure they're correct:
Admin Level -> Admin Backup/Transfers -> Backup/Restore Settings
and/or:
Reseller Level -> Manage User Backup -> Backup/Restore Settings
Evolution: "Restore with local NameServers."
Enhanced: "Restore with Local NameServers. (unchecked: Use NS values from backup)"
The letsencrypt_multidomain_cert=2 was the previous default.
It would list all domains and pointers under the User.
Listing other domains doesn't make much sense unless you're on an owned IP, and this is the main domain.
New internal default:
letsencrypt_multidomain_cert=3
But the need for owned IPs these days is low, with the use of SNI everywhere.
When creating a request to generate a CSR, you can now include:
include_key=yes
in the POST request for request=yes, and in addition to the API output generating the "request=", it will also include the "key=".
Relates to this feature:
dnssec_add_subdomain_ds_to_parent to check Multi-Server Setup zones
dnssec_add_subdomain_ds_to_remote_parent=1
where you should have:
domain.com on box A
sub.domain.com on box B
when signing sub.domain.com on B, the DS records get added to domain.com's zone on A
This is all fine, except when A and B are in two-way clustering.
A -> B
B -> A
The issue was that before deciding to push the DS records to the parent zone, DA figures out if that zone is local or not.
Because A had pushed its copy of domain.com over to B, the lookup on B of "is the parent local" returns true, thus the remote push is not attempted.
The local write then fails as domain.com on B is a raw dnssec file not meant to be read by DA, stored on B in /var/named/domain.com.db (rather than domain.com.db.signed on A)
The fix is to simply add another check when doing the "is this domain local" lookup, to also exclude any zone that is dnssec-signed (the larger domain.com.db zone on B with more DNSSEC data).
So "is parent domain local and not the dnssec from some other server", which now returns "false" for domain.com on B, thus DA goes to the cluster to push the DS records into domain.com on box A.
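A very rough sketch of the adjusted locality check (assumed logic; the real check is internal to DA). A zone file carrying DNSSEC records pushed from another server is no longer considered local:

```shell
# Simulate box B: domain.com.db is a raw signed copy pushed from box A
named=$(mktemp -d)                         # stand-in for /var/named
zone="domain.com"
printf '%s\n' 'domain.com. 3600 IN RRSIG A 8 2 3600 ...' > "$named/$zone.db"

is_local="no"
[ -f "$named/$zone.db" ] && is_local="yes" # old check: the file exists
if grep -q 'RRSIG' "$named/$zone.db"; then # new extra check: dnssec data
  is_local="no"                            # signed copy from another server
fi
echo "is_local=$is_local"
```

With is_local now "no", DA falls through to the cluster push, which is the behavior the fix describes.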
T18678
The "Reset Owner" and "Recursively" buttons in the Enhanced FileManager were using GET requests, but needed POST.
Using GET produced:
"The requested command requires POST but GET was used"
Solution is to have a form, submitted via javascript when given path is clicked.
==============
SKINS
enhanced/user/filemanager/main.html
added:
<form id='reset_owner_form' action='/CMD_FILE_MANAGER' method='POST'>
<input id='reset_path' type='hidden' name='path' value=''>
<input type='hidden' name='action' value='resetowner'>
<input id='reset_method' type='hidden' name='method' value=''>
</form>
<script type="text/javascript">
function reset_owner(f, recursive)
{
if (recursive == "1")
{
document.getElementById("reset_method").value = "recursive";
}
document.getElementById("reset_path").value = f;
document.getElementById("reset_owner_form").submit();
}
</script>
The hrefs in the table will now call:
reset_owner('/full/path/file.txt', '1');
where '1' is for recursive and '0' for just that file.
Filenames that contain a \ character generated invalid json in the related request:
CMD_FILE_MANAGER?action=json_all
json-encoding the indexes solved it.
Compile time: Aug 24 2019 at 17:05:29
T19040
During a fresh DirectAdmin install, if there is no named.conf, DirectAdmin will install one for you.
The provided named.conf from us has always included allow-transfer { none; }; This is a good thing.
However, if you have a named.conf before DirectAdmin is installed, it would previously use the default allow-transfer setting, whatever it might have been.
This change is to check to see if there is no trace of allow-transfer anywhere in your named.conf (or included options files), and if there are zero traces of that key-word, it will be added.
Both new installs and existing installs will check and update the named config options{} section for this, and lock it down if it's not present.
If you already have the setting, have the setting set to do something else (allow other IPs), etc, then no change will be done.
If you DO want axfr transfers enabled, then add any variation of the desired allow-transfer setting to the options{} area of your named.conf, and future updates won't bother you again.
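For example, to allow zone transfers only to specific secondary servers, an allow-transfer entry in the options{} block might look like this (IPs are placeholders; any variation you add will be left alone by future updates):

```
options {
    ...
    allow-transfer { 1.2.3.4; 5.6.7.8; };
};
```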
Credit:
Francisco @ https://buyvm.net/
The /etc/logrotate.d/php-fpm file was using signal USR2 to reload the logs, but that's a full reload of the entire php-fpmXX daemon.
We only need to re-open the logs, so it was switched to USR1.
Only affects new installs.
If you would like the lighter reload for existing installs (if you use php-fpm), grab a fresh logrotate file:
wget -O /etc/logrotate.d/php-fpm http://files1-new.directadmin.com/services/custombuild/php-fpm.logrotate.1.3
Related to this:
You can now set multiple values in the access_control_allow_origin setting, eg:
access_control_allow_origin=http://www.domain.com, https://www.otherdomain.com:8080
using a comma-separated list, for example.
The Access-Control-Allow-Origin header will now only be shown if:
There is an incoming Origin header from the client
That header exactly matches one of the items in the list, including port, etc. (exact string match)
The comma separated entries are trimmed, so whitespace before/after is ok.
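A sketch of the matching rule described above: split the comma-separated list, trim each entry, and require an exact string match against the incoming Origin (the real matching is done inside DA; this just illustrates the semantics):

```shell
allowed="http://www.domain.com, https://www.otherdomain.com:8080"
origin="https://www.otherdomain.com:8080"
match="no"
IFS=','
for entry in $allowed; do
  # trim surrounding whitespace, then compare exactly (port included)
  trimmed=$(printf '%s' "$entry" | sed 's/^ *//; s/ *$//')
  [ "$trimmed" = "$origin" ] && match="yes"
done
unset IFS
echo "match=$match"
```

Note that "https://www.otherdomain.com" (without the :8080) would NOT match, since no substring or wildcard matching is done.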
A root level issue was discovered that allows local Users to perform a certain action at a very specific time, skipping over a check that DirectAdmin had just performed.
This could cause root files to be overwritten, resulting in damage to the system.
This cannot be performed by an external attacker, and we have no known reports of any compromised systems. We strongly suggest updating as a means of neutralizing the threat.
To prevent promoting this exploit and attracting attacks, further details will be released at a later date, to allow enough time for upgrades to complete.
Credit:
Bartosz Kwitniewski
If the mysql.db table has a db column with length 64, the actual max db name length is 63 because the _ character for this column is escaped to \_ where \_ is actually stored in that column.
Also, if you allow underscores in your DB names (which we highly discourage), this is included in the size count for each underscore present. (please try and move away from using this option)
MySQL seems to allow over-length database names, but the column used to display them is limited, so this needed to be capped to prevent the broken case.
The result would be that the number of users shows -1 when there are in fact 2 (system account + db_name).
CMD_DB?domain=domain.com&json=yes
will now include:
"MAX_DB_LENGTH": "63",
which is directly based on the mysql.db.db column size less 1, since all DB names have the _ character, thus \_
You'd need to subtract the username length+1 from this value to know the true length allowed.
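A worked example of that subtraction, using the username "fred" (MAX_DB_LENGTH=63 as returned by the API above):

```shell
max_db_length=63
username="fred"
# subtract the username length + 1 (for the underscore separator)
usable=$(( max_db_length - ${#username} - 1 ))
echo "user $username can use database suffixes up to $usable characters"
```

So for "fred", database names of the form fred_xxx can have a suffix of up to 58 characters.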
T19260