Programming with BASH

Rob


Though it might sound like it, BASH isn't one of those captions that pop up (along with Oof! and Biff!) when Batman and Robin are fighting the bad guys on the old 60's TV show. BASH actually stands for Bourne Again SHell. The name goes back to Steve Bourne, who wrote the original Bourne shell for Unix. When the GNU Project created a Free Software equivalent, they named it after Steve's shell and added a little pun on his last name.

If you're a system administrator, writing BASH scripts is going to be one of those mandatory things. But far from being a chore, you'll find that it makes your work and your life a whole lot easier.

Our First BASH Script

The first thing that a BASH script needs is the proverbial 'shebang'. These are two characters:

Code:
#!

Following this, you should include the path to the BASH interpreter. So the first line of your script should look like this:

Code:
#!/bin/bash

If your default shell is BASH, the line:

Code:
#!/bin/sh

does the same thing. On most Linux systems, /bin/sh is just a symbolic link to /bin/bash. But if /bin/sh doesn't point to BASH on your system, you won't be invoking /bin/bash if you write a shell script with that first line. Since BASH is normally the default shell on Linux systems, you'll see most BASH scripts start with

Code:
#!/bin/sh

From there on, you're free to do what the shell allows. Shell scripts created for administration purposes (which are the majority of scripts) are made up of lines that invoke other commands. Let's look at a simple example. Let's say you have email users on your system but you don't have a quota in place. Still, you want to monitor the sizes of the mailboxes to make sure that people aren't taking up more space than they should. This script, run from crontab, would do the trick nicely:

Apart from the shebang, lines that begin with # are comments.


Code:
#!/bin/sh
 
# show us the size of the email spools
# date in YYYY-MM-DD format
today=`date +%Y-%m-%d`;
# subject and recipient variables
subject="Mailcheck";
sendto="[email protected]";
cd /var/spool/mail
ls -lSh | awk '{print $5, $9}' | grep -E "(G|M)" | mail -s "$subject-$today" "$sendto"
# end script


First off, you'll see that we've declared some variables. These variables are not prefixed by any character when declared, but they are prefixed by the dollar sign ($) when used. You'll also notice that a variable can hold the output of another command, as in this example with the date command. When you do this, the command must be put inside backticks (` `).
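As a side note, the newer $( ) syntax does the same job as backticks and is a bit easier to read and to nest. A minimal sketch (the variable names here are just for illustration):

Code:
#!/bin/bash

# Both lines capture the output of a command in a variable.
today=`date +%Y-%m-%d`     # older backtick style, as used above
kernel=$(uname -r)         # equivalent $( ) style, easier to nest
echo "Today is $today and the kernel is $kernel"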

First, the script changes to the directory where the mail spools are located. Then the script performs an 'ls' with options that list the biggest spools first, with their sizes in human-readable format. This is piped to awk, which picks out the size and the user name. The awk output is grepped for those spools that are in the megabytes or gigabytes. This is then piped to the 'mail' command and sent to the admin account with the subject plus the date we declared in those variables. The admin will then have a nicely sorted list of who is using the most space in /var/spool/mail.

Built in Variables

Though we created our own variables in the previous example, BASH also comes with what are known as built-in variables. Here is an example of a script using some frequently used built-in variables.


Code:
#!/bin/sh
 
echo "You are user $UID on $HOSTNAME"
echo "Your home directory is: $HOME"
echo "$HOSTNAME is running $OSTYPE"


The output of this script should yield something similar to this:


You are user 500 on penguin.linux.ork
Your home directory is: /home/mike
penguin.linux.ork is running linux-gnu


As you can see, we didn't have to declare any of these beforehand. That's why they're known as built-in variables. Using them will save you a lot of time when writing your scripts. You can find a complete list of built-in variables in the GNU Bash Reference Manual.
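A few other built-in and special variables tend to come in handy as well; a quick sketch (the output will of course differ on your machine):

Code:
#!/bin/bash

echo "This script is called $0 and was given $# argument(s)"
echo "It is running as process $$ in directory $PWD"
echo "Bash version: $BASH_VERSION"
echo "A random number between 0 and 32767: $RANDOM"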

Interactive Scripts

Though we mentioned that the main use of BASH scripts is automating administrative tasks, there may be times when you need users to interact with your scripts. If you want a user to input information, you need to use the read command. Let's take a look at the following example:


Code:
#!/bin/sh
 
echo -n "Enter the name of a city: "
read CITY
echo -n "$CITY is "
case $CITY in
London | Paris | Berlin | Rome) echo -n "in Europe";;
'New York' | Chicago | Washington) echo -n "in The USA";;
Tokyo | Bejing | Bangalore) echo -n "in Asia";;
*) echo -n "some place - but I don't know where";;
esac


As you can see, we've declared a variable whose value depends on what the user types in when prompted for the name of a city. After that, the case statement checks the answer against several patterns. If the user types in the name of a city we've listed here, he/she will be told where the city is. If not, the script will display a message saying that it doesn't know where the city is. Any other answer is matched by the asterisk (*).
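As a small aside, read can also print the prompt itself with its -p option, and bash 4 and later can lowercase the answer so the match isn't case-sensitive. A sketch along the same lines:

Code:
#!/bin/bash

# -p prints the prompt; ${CITY,,} is the lowercased value (bash 4+).
read -p "Enter the name of a city: " CITY
case "${CITY,,}" in
    london | paris | berlin | rome) echo "$CITY is in Europe" ;;
    "") echo "You didn't type anything" ;;
    *) echo "$CITY is some place - but I don't know where" ;;
esac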

Making Sure You Have What You Need

If you have to manipulate the contents of a file, it's a good idea to check if this file exists first. Here is a simple BASH routine to do this using the if command:


Code:
#!/bin/sh
 
if test -f /var/log/mail.log; then
    printf "The file exists\n"
fi


This is a good idea, because a script that's set up to manipulate a file that doesn't exist won't get very far.
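The test command also has the shorthand [ ] form and a handful of other useful checks (-d for a directory, -r for read permission, and so on). A minimal sketch along the same lines, reusing the path from the example above:

Code:
#!/bin/bash

logfile=/var/log/mail.log

if [ -f "$logfile" ] && [ -r "$logfile" ]; then
    printf "%s exists and is readable\n" "$logfile"
elif [ -d /var/log ]; then
    printf "%s is missing, but /var/log is there\n" "$logfile"
else
    printf "Neither the file nor /var/log could be found\n"
fi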

if/elif: A Practical Example

I was a full-time English as a foreign language teacher for 12 years, so I can't resist giving you this example of a multiple-choice test using a BASH script.

Code:
#!/bin/sh
 
PS3="Choose the number of the correct word to fill in the blank: "
echo "The emergency brake let go and the car rolled ______ the hill"
select SENT1 in up down along beside
do
    if [ "$SENT1" = "" ]; then
        echo -e "You need to enter something\n"
        continue
    elif [ "$SENT1" != "down" ]; then
        echo -e "Sorry. Incorrect\n"
        echo "1. Incorrect" >> eoiexam.dat
    elif [ "$SENT1" = "down" ]; then
        echo -e "Great!\n"
        echo "No. 1 - Correct" >> eoiexam.dat
        break
    fi
done
done


The script makes use of 'elif' branches to sort out answers that aren't correct. You will also notice that it writes the results, whether correct or not, to a file.

If you're in the teaching profession, you could expand on this to give your students a quick quiz.
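If you did turn this into a longer quiz, something as small as the following could total up the results file the script writes; a sketch, assuming the same eoiexam.dat format as above:

Code:
#!/bin/bash

# Count the lines marked "Correct" and the total number of answers.
correct=$(grep -c "Correct" eoiexam.dat)
total=$(wc -l < eoiexam.dat)
echo "Score: $correct correct out of $total answers"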
 


Hi, I need to know how to extract text from a log or txt file with a bash script.
For example, looking for certain words in the file and grabbing the information after the colon:

text : 0
I am looking for "text"
and I want to get 0

I hope I explained that correctly.
Thank you,
 
Hi Rob,

This is what I need.

I was using tail -n 20 extract.log | grep "word", but yours is better.

Thank you very much.
 
Hello, I would like to know if it is possible to write to a log file from a script without having to put >> filename in each command every time.

I have a draft bash script to create the one I need, with lots of calls to the same log file.

Code:
#!/bin/bash

extract_log=/home/joserodriguezan/extract.log
home=/home/joserodriguezan/
test=/home/joserodriguezan/test

# if the file exists, rename it
if [ -f $extract_log ] ; then
        echo "the file $extract_log exists, renaming it"
        new_name=extract.old
        mv $extract_log  $home$new_name
  else
  echo  "file created"
  touch $extract_log
fi

# if the directory doesn't exist, create it
if [ -d $test ] ; then
        echo "the directory $test exists"
  else
  echo  "directory created"
  mkdir $test
fi

echo -e '\n' >> $extract_log
echo "Hostname: " hostname >> $extract_log

echo -e '\n' >> $extract_log
echo "Monit" >> $extract_log
echo "-------" >> $extract_log
if [ -d /etc/monit ] ; then
        echo "Monit: Si" >> $extract_log
        echo "Monit version"  >> $extract_log
  else
  echo  "Monit was not installed"
  apt-get install monit
fi

monit --version >> $extract_log
echo -e '\n' >> $extract_log
cat filename | grep "text" | awk '{print $3} >> $extract_log

echo -e '\n' >> $extract_log
echo "Crontab" >> $extract_log
echo "-------" >> $extract_log
crontab -l >> $extract_log
 
The "first" time you write a line to a new file, you can use a single redirect (>).

echo "This is the first line" > myfile.txt

However, if you want to write a second line to the same file, you have to use the double redirect (>>).

echo "This is the second line" >> myfile.txt

If you use a single > again, it will just overwrite the first line in the file.

echo "This just deleted my first line" > myfile.txt
 
Hello, I would like to know if it is possible to write to a log file from a script without having to put >> filename in each command every time.

Hi,

You can redirect and append STDOUT to a file using this example:
Code:
exec 1>> /tmp/stuff.log
Notes:
  • In most cases, this example will also create a file if it doesn't already exist.
  • You do not need to specify 1>>, because 1>> is implied by just using >>.

I would suggest a couple of changes to your script:
Code:
home="/home/joserodriguezan"
extract_log="${home}/extract.log"
test="${home}/test"
exec >> "${extract_log}"

If you'd like to redirect and append STDOUT and STDERR to your logfile, you can use these lines:
Code:
exec >> "${extract_log}"
exec 2>> "${extract_log}"

You can read more about I/O redirection at the following link:
http://www.tldp.org/LDP/abs/html/io-redirection.html
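For what it's worth, those two exec lines can also be collapsed into one that appends both STDOUT and STDERR to the same log (just a sketch, using the same variable name as above):

Code:
# Everything printed to STDOUT or STDERR from here on is appended to the log.
exec >> "${extract_log}" 2>&1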
 
Hello, I would like to know if it is possible to write to a log file from a script without having to put >> filename in each command every time.
 
The line

cat filename | grep "text" | awk '{print $3} >> $extract_log

is now correct as:

cat filename | grep "text" | awk '{print $3}' >> $extract_log
 
#!/bin/sh
does the same thing. On most Linux systems, /bin/sh is just a symbolic link to /bin/bash. But if /bin/sh doesn't point to BASH on your system, you won't be invoking /bin/bash if you write a shell script with that first line. Since BASH is normally the default shell on Linux systems, you'll see most BASH scripts start with
#!/bin/sh
Isn't that bad practice? If you're writing actual bash scripts, wouldn't it be better to use #!/bin/bash? I would have thought you should only use #!/bin/sh if you intend to write POSIX-compliant shell scripts that don't use any of the bash extensions. Not everyone's default shell is bash, and I believe that is even more true if the script will ever be run on other Unix/Unix-like OSes such as Mac OS or any of the BSDs.
 
Exactly. #!/bin/bash is better, the Internet agrees :) I did notice the recommendation to use #!/usr/bin/env bash which I agree is even better - but how far is it reasonable to go to support non-standard locations of a shell?

Edit: I've just realised my FreeBSD VM has bash at /usr/local/bin/bash, so I suppose maybe there is a case for the env approach!

It's worth noting that Rob's article is pretty old.

It is true that typically nowadays /bin/sh is a hard-link, or a soft-link to bash. But on some systems it is still possible that it could be pointing to another bourne-compatible shell, like dash, or ash, or something else.

So if you write a shellscript and specify #!/bin/sh as the interpreter in the shebang-line, you should really only use generic, POSIX commands that are universal to all bourne-like shells and not use shell-extensions that are specific to bash, or zsh, or any other shell.
So if you use #!/bin/sh and you stick to the bourne/POSIX standard, your scripts are pretty much guaranteed to run on any other POSIX compliant, bourne-compatible shell.

Personally I write my shellscripts specifically for bash and use a lot of the bash-specific extensions. So I always explicitly declare bash as the interpreter for my shellscripts.
Also, in order to allow my bash scripts to be used on several different systems - where bash may be in different places - I use #!/usr/bin/env bash in the shebang.
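As a small aside, and only as a sketch, a bash-specific script can also guard against being run by some other shell by checking BASH_VERSION early on:

Code:
#!/usr/bin/env bash

# Bail out early if this script is not actually being interpreted by bash.
if [ -z "${BASH_VERSION:-}" ]; then
    echo "This script requires bash." >&2
    exit 1
fi

echo "Running under bash ${BASH_VERSION}"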
 
Hello, I would like to know if it is possible to write to a log file from a script without having to put >> filename in each command every time.
Hi! About redirecting the output to a single file: you can group several commands and redirect them all at once using parentheses or curly braces:
Code:
#-- this launches a child bash process to run the commands:
(
   echo "These are the uncommented lines in the monit configuration:"
   echo
   grep -v "^#" /etc/monit/monit.conf
) > used_lines_in_monit_1.conf

#-- this will run the commands within the current bash instance:
{
   echo "These are the uncommented lines in the monit configuration:"
   echo
   grep -v "^#" /etc/monit/monit.conf
} > used_lines_in_monit_2.conf
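If you also want to see the output on screen while it is written to the log, the same grouping can be piped through tee (just a sketch; tee -a appends rather than overwrites):

Code:
{
   echo "Crontab"
   echo "-------"
   crontab -l
} | tee -a extract.log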
 
So.. if you have a file that looks like:

text : 0

is that 'text' string always at the beginning of the line? If so, you could cat the file, grep for it, then print the 3rd column..

cat filename | grep "text" | awk '{print $3}'
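Just as an aside, awk can also do the matching and the splitting on the colon by itself; a small sketch, assuming the lines really look like "text : 0":

Code:
# Print whatever follows the colon on lines that start with "text",
# stripping any surrounding spaces.
awk -F':' '/^text/ {gsub(/ /, "", $2); print $2}' filename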

Exactly. #!/bin/bash is better, the Internet agrees :) I did notice the recommendation to use #!/usr/bin/env bash which I agree is even better - but how far is it reasonable to go to support non-standard locations of a shell?

Indeed, because if you use #!/bin/sh on Debian, the shell will be dash instead of bash, and there are some minor differences, so it's better to use #!/bin/bash if you're using or intend to use the bash shell.
 

