Bash help
How to find all files of certain type
If you're trying to find all files with a specific extension, you can use the "find" command to do this quickly and efficiently.
For example, to find all .php files under the /home directory, type:
cd /home
find . -name "*.php"
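If the extensions vary in case (.php vs .PHP), find's -iname matches case-insensitively, and \( ... -o ... \) groups several patterns. A small sketch (the /tmp paths and file names are just for illustration):

```shell
# Build a tiny sample tree to demonstrate
mkdir -p /tmp/finddemo/sub
touch /tmp/finddemo/a.php /tmp/finddemo/sub/b.PHP /tmp/finddemo/c.txt
cd /tmp/finddemo
# -iname ignores case; \( ... -o ... \) matches either pattern
find . \( -iname "*.php" -o -iname "*.phtml" \)
```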
How to find files modified within X days
If you need to find files that have been modified within a certain number of days (e.g., files newer than X days), you can use the find command. In this example, we'll list all files under /home/admin which have been modified within the last 2 days:
cd /home/admin
find -mtime -2
This might be useful if you're looking for any recent changes in the account.
Similarly, you can list files which are older than a certain time by using + instead of - with the number of days. For example, any files older than a year can be listed with:
find -mtime +365
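The same +/- convention works at minute resolution with -mmin; for example, files modified within the last 30 minutes (the file below is created only to demonstrate):

```shell
mkdir -p /tmp/mtimedemo && cd /tmp/mtimedemo
touch recent.test                       # modified right now
find . -maxdepth 1 -type f -mmin -30    # modified less than 30 minutes ago
```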
Other useful find commands: http://www.mysysad.com/2007/07/using-common-unix-find-command_07.html
How to create a large file quickly
If you need to quickly create a large file on your disk, you can use the fallocate command.
Sample usage for a 10 GB file:
fallocate -l 10G ten_gig.file
which creates a file named ten_gig.file that is 10GB in size.
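fallocate requires filesystem support (ext4, xfs, and similar); where it isn't available, dd is a portable fallback. A sketch using 10 MB so it runs quickly; scale bs/count up for real use:

```shell
# Write 10 x 1MB blocks of zeros; slower than fallocate but works anywhere
dd if=/dev/zero of=/tmp/ten_meg.file bs=1M count=10 2>/dev/null
ls -lh /tmp/ten_meg.file
```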
How to set all directories to 755 and files to 644
If you need a quick way to reset your public_html data to 755 for directories and 644 for files, then you can use something like this:
cd /home/user/domains/domain.com/public_html
find . -type d -exec chmod 0755 {} \;
find . -type f -exec chmod 0644 {} \;
Additionally, if you know that PHP runs as the user and not as "apache", then you can set PHP files to 600 for an extra level of security like so:
find . -type f -name '*.php' -exec chmod 600 {} \;
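On big trees, the \; terminator forks one chmod per match; ending -exec with + instead batches many paths into each chmod invocation, which is much faster and gives identical results. A sketch on a throwaway demo tree:

```shell
# Demo tree under /tmp (path is just for illustration)
mkdir -p /tmp/permdemo/dir
touch /tmp/permdemo/file.txt /tmp/permdemo/dir/inner.txt
cd /tmp/permdemo
find . -type d -exec chmod 0755 {} +   # one chmod call for many dirs
find . -type f -exec chmod 0644 {} +   # likewise for files
```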
How to get the sum of numbers using awk
If you spend much time in the shell, you'll often need to add up a long list of numbers handed to you on the command line. The quick and easy way to do this is with awk.
For this example, we'll assume you've already got your list of numbers (one per line) in a file called numbers.txt. The command would then be:
cat numbers.txt | awk '{ sum += $1 } END { print sum }'
This method is handy when, for example, you're trying to get the sum of the number of occurrences of a string, such as an IP, over many logs.
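awk can read the file itself, so the cat isn't required; the same one-liner sums any whitespace-separated column by changing $1. A self-contained sketch with demo data:

```shell
printf '3\n5\n7\n' > /tmp/numbers.txt
awk '{ sum += $1 } END { print sum }' /tmp/numbers.txt   # prints 15
```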
How to see the N-th line in a file
If you want to extract the 58th line from file.txt, you can combine head and tail to grab it, e.g.:
head -n 58 file.txt | tail -n 1
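sed and awk can do the same in a single process, and both can quit as soon as the line is printed, which matters on large files. A sketch with demo data (line 2 stands in for line 58):

```shell
printf 'one\ntwo\nthree\n' > /tmp/lines.txt
sed -n '2{p;q}' /tmp/lines.txt                 # print line 2, then quit
awk 'NR == 2 { print; exit }' /tmp/lines.txt   # same idea in awk
```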
How to extract part of a file from M to N lines
Let's say you have a really large text file (a MySQL dump, for instance), and you need to extract a part of it. First, identify the starting and ending line numbers to extract. You probably don't want to open the file in the vi editor; instead, use less with the -N option to display line numbers:
less -N file.txt
Define the starting and ending line number you need (let's say 220 - 390). Next, export the content to another file using the sed tool and the > redirection operator:
sed -n '220,390p' file.txt > file_exported.txt
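The equivalent in awk reads a little more naturally to some; the range below mirrors the sed example (the seq input is just demo data):

```shell
seq 1 500 > /tmp/big.txt
awk 'NR >= 220 && NR <= 390' /tmp/big.txt > /tmp/big_exported.txt
wc -l < /tmp/big_exported.txt    # 171 lines: 220 through 390 inclusive
```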
How to find specific code in all files of a certain type
If you're looking to find all files of a specific type that contain a specific piece of code, you can do it by combining the find command with a grep of the text you're looking for.
For example, to find a specific string, let's call it asdfg, in all .php files under the /home directory, use a script like this:
#!/bin/sh
#STRING should avoid special characters like quotes and brackets
STRING=asdfg
for i in `find . -name "*.php"`; do
{
    C=`grep -c --max-count=1 "$STRING" "$i"`
    if [ "$C" -gt 0 ]; then
        echo "$i";
        #perl regex here, on "$i", if needed
    fi
};
done;
exit 0;
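On systems with GNU grep, the whole loop collapses into one command: -r recurses, -l prints only matching filenames, and --include filters by extension (this also copes with spaces in paths, which the backtick loop does not). A sketch with throwaway demo files:

```shell
# Demo files under /tmp; the string asdfg is from the example above
mkdir -p /tmp/grepdemo
printf '<?php echo "asdfg"; ?>\n' > /tmp/grepdemo/hit.php
printf '<?php echo "clean"; ?>\n' > /tmp/grepdemo/miss.php
cd /tmp/grepdemo
grep -rl --include='*.php' 'asdfg' .
```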
If you need to get rid of the mentioned string, a sample Perl regex would be:
perl -pi -e 's/asdfg//g' "$i"
which would find all instances of asdfg and replace them with "" (nothing), thus removing them from the files.
Note that a regex becomes more complicated if you need to replace special characters.
Using Google, search how to use regular expressions for string swapping for more information.
Always test your perl regex on a test file before doing anything en masse, to ensure you don't break anything.
How to make a write-protected file
Permissions can only go so far to prevent a file from being written to.
There are some cases where you just want a file to be locked, preventing anything else from changing it. Most file systems offer tools to do just that.
Let's use the filename /etc/exim.conf as an example.
Linux
For CentOS and Debian, the most common command to lock a file is chattr. To lock a file, type:
chattr +i /etc/exim.conf
To unlock it, use:
chattr -i /etc/exim.conf
To check if a file is locked or not, type:
lsattr /etc/exim.conf
The "i" here stands for "immutable", meaning the file cannot be changed. You'd see the 'i' among the other flags (usually just a row of ---- characters) if a file is locked.
How to find the location of largest disk usage if the disk is full
- Before starting, confirm which partitions you have and which are full:
df -h
If you have many partitions, e.g., for /home, /var, /usr, etc., then you can narrow down the search by only searching the affected partition to make the process go much quicker.
- We recommend installing a simple tool that works fast and visualizes space consumption nicely.
The ncdu tool is not included in default packages, so you'll have to install it first:
yum install ncdu
Next, launch it to display disk usage in the desired location, usually /:
ncdu /
If you already know that the largest consumers are /home/ and /var/lib/mysql and want to exclude them from the count to speed up the process, run:
ncdu / --exclude /home --exclude /var/lib/mysql
You can browse the directories in the provided output with up/down/enter keys to review in detail.
- The manual method to hunt down files is to use the "du" command and sort the output.
Note, this process can be slow, so be patient.
Start with suspect paths, but if you've already narrowed it down to a partition, start there, e.g.:
cd /var
du | sort -n
After /var, I would check /usr:
cd /usr
du | sort -n
Following that, check /home and /etc in the same manner.
- If you're not having any luck, you can try searching the '/' partition.
This will count all files on the box, so if your disk is slow/spins, then this may take a very long time.
cd /
du | sort -n
- If you want to search all of /, but you don't want other partitions (/var or /home, etc.) to be included (assuming those are actually separate partitions, as found with df -h in the first step), then you can add the -x option to stay on this partition and not traverse the other sub-partitions:
cd /
du -x | sort -n
For all of the above du commands with the sort option, the largest paths will show up at the bottom of the output.
You can reverse the order so that the largest is shown at the top, listed in descending order, with the -r flag:
cd /
du -x | sort -rn
The way this works is that all of the files have to be found and loaded first; only then does the sort begin.
No output will be displayed until the full list is loaded, so be very patient. You can Ctrl-C and try a sub-path if you want to be more specific, which should run more quickly.
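A quick manual alternative that keeps the output readable: limit du to one directory level and sort the human-readable sizes with GNU sort -h (the directories below are demo data):

```shell
mkdir -p /tmp/dudemo/big /tmp/dudemo/small
dd if=/dev/zero of=/tmp/dudemo/big/blob bs=1M count=5 2>/dev/null
cd /tmp/dudemo
# -x stay on this filesystem, -h human-readable, one level deep, largest last
du -xh --max-depth=1 . | sort -h
```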
How to run a file as a User, called by root
Say you're creating a script where you need to run something as the User, or you're trying to debug a User cron, but the User doesn't have SSH access. You can use "su" to execute commands as that User.
Let's say that we have the User fred and we want to run the command /usr/bin/id.
You'd run the following as root:
/bin/su -l -s /bin/sh -c "/usr/bin/id" fred
Note that if you use any special characters like "quotes" in the command, they must be escaped with the \ character.
How to delete an oversized directory: rm: Argument list too long
If you're trying to delete files inside a directory and the following command is not working:
# /bin/rm -rf *
/bin/rm: Argument list too long.
Try this command from within the target directory instead:
find . -type f -delete
The find command is much quicker at listing files from a directory, and newer versions of "find" have a -delete option built in, which will allow you to remove files very quickly.
Another solution to delete all files in the current directory, reportedly even faster than "find", is to use Perl:
perl -e 'for(<*>){((stat)[9]<(unlink))}'
I don't believe that Perl distinguishes between a file and a directory, so if you have sub-directories, it will probably throw some errors. However, it should, in theory, remove the files anyway.
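If your find is too old for -delete, the classic safe pattern pipes NUL-separated names to xargs; the -print0/-0 pair means filenames with spaces or newlines can't break the pipeline. A sketch on a throwaway directory:

```shell
mkdir -p /tmp/rmdemo
touch /tmp/rmdemo/plain.txt "/tmp/rmdemo/name with spaces.txt"
cd /tmp/rmdemo
# NUL separators keep odd filenames intact across the pipe
find . -type f -print0 | xargs -0 rm -f
ls -A    # nothing left
```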
How to use regex matched on a fixed string
Many times we need to swap strings or look for a string, but in web hosting, very often, these string matches contain periods/dots.
Perl and grep are powerful in that they have special characters for matching, but unfortunately dots are wildcards, so any domain you're trying to match typically has to be escaped by adding a \ character before the dot. When working with a large number of matches or when scripting, it's easier to simply construct Perl or grep commands to match a fixed string so that no characters are treated as "special".
Let's create a test.txt file with the lines:
server.domain.com
serverAdomainBcom
server1domain2com
grep
When you run the following, you get 3 results, which may not be what you want:
# grep server.domain.com test.txt
server.domain.com
serverAdomainBcom
server1domain2com
So, we can add the -F option so that it matches the exact string:
# grep -F server.domain.com test.txt
server.domain.com
Perl
A Perl regex has no equivalent command-line option. Perl does, however, support the \Q and \E escape sequences, which mean "everything between \Q and \E is treated literally", making an exact match much easier.
If you want to swap server.domain.com with something.else.com, it should be:
# perl -pi -e 's/^\Qserver.domain.com\E$/something.else.com/' test.txt; cat test.txt
something.else.com
serverAdomainBcom
server1domain2com
We use the ^ to match the start of the line, and $ to match the end of the line for this example.
But if your value was within a line (other values before and after), you wouldn't use them.
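grep also has a whole-line analogue of the ^...$ anchors: -x requires the fixed string to match the entire line, so -Fx mirrors the anchored Perl match. A sketch with demo data:

```shell
printf 'server.domain.com\nserverAdomainBcom\nserver.domain.com.au\n' > /tmp/test.txt
grep -Fx 'server.domain.com' /tmp/test.txt   # only the exact whole-line match
```

Note that plain -F would also match server.domain.com.au here, since the fixed string appears within that line.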
How to recreate the /home/ directory if it was deleted
Generally, this isn't the best thing to have happen because all of the data is stored there. You'll need to recreate all of the directory structures as well as a few files required for DA to run:
- Create the DA tmp directory so you can log into DA again:
mkdir -p /home/tmp
chmod 1777 /home/tmp
- To create the /home/username directories and subdirectories, create the /home/make_dirs.sh file and insert this code:
#!/bin/sh
for i in `ls /usr/local/directadmin/data/users`; do
{
for d in `cat /usr/local/directadmin/data/users/${i}/domains.list`; do
{
mkdir -p /home/${i}/domains/${d}/public_html/cgi-bin
cd /home/${i}/domains/${d}/
ln -s public_html private_html
mkdir -p /home/${i}/domains/${d}/public_ftp
mkdir -p /home/${i}/domains/${d}/stats
mkdir -p /home/${i}/domains/${d}/logs
};
done;
mkdir -p /home/${i}/backups
chown -R $i:$i /home/${i}
chmod -R 755 /home/${i}
};
done;
exit 0;
- Change to the /home/ directory, make the file executable, and run it:
cd /home/
chmod 755 make_dirs.sh
./make_dirs.sh
How to recreate the /var/log/ directory if it was deleted
If you've accidentally removed your /var/log directory, or it was removed by someone else, you can recreate it with all permissions like so:
mkdir -m 755 /var/log
cd /var/log
mkdir -m 700 directadmin httpd
mkdir -m 755 exim proftpd httpd/domains
chown diradmin:diradmin directadmin
chown mail:mail exim
And restart services:
service rsyslog restart
service httpd restart
service directadmin restart
service exim restart
service proftpd restart
How to patch a file
What's a patch? Say you've got a file, let's use the /etc/exim.conf as an example, and you need to make changes to that file many times over.
This may be the case if you have many servers, and you want all of them to have the same changes made.
The best way to make the same changes to the exim.conf, but to also allow the default exim.conf to have differences in other areas, is to use a patch.
Creating a patch
- To create a patch, first copy your original:
cd /etc
cp exim.conf exim.conf.orig
- Now manually make all the changes you want to your exim.conf:
nano exim.conf
- Create the patch:
diff -u exim.conf.orig exim.conf > exim.conf.patch
You've now got a patch file which can be applied to the original exim.conf on other systems, or after a re-install, etc. Save it to a location on your website so it can be downloaded to other servers.
Applying your patch
You've got a patch file, and your default exim.conf, and you want your changes to be applied.
- Save the patch beside the file:
cd /etc/
wget http://your.server.com/exim.conf.patch
- Apply the patch:
patch -p0 < exim.conf.patch
and you're done.
Your patched exim.conf should now have all of the changes that were manually done to the original.
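Before applying a patch on a production box, patch's --dry-run flag checks that it applies cleanly without touching the file. A self-contained sketch with throwaway files standing in for exim.conf:

```shell
mkdir -p /tmp/patchdemo && cd /tmp/patchdemo
printf 'line1\nline2\n' > conf.orig
printf 'line1\nline2 changed\n' > conf
diff -u conf.orig conf > conf.patch || true   # diff exits 1 when files differ
cp conf.orig conf.test
patch --dry-run conf.test < conf.patch        # verify first...
patch conf.test < conf.patch                  # ...then apply for real
tail -n 1 conf.test                           # prints "line2 changed"
```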