A sensible template for a bash script


Some advice for bash scripts, with simple logging and error handling…

I’ve been writing bash scripts of various sizes for ages, but the time I spend on them has certainly increased now that I put more effort into automating things. When I realised I was going to be writing a lot more bash scripts, I looked around for a standard framework in use at my company, or in the wider world. I found a vast array of advice, but nothing that offered a one-size-fits-all solution. Addendum: since writing this post I’ve come across the Google bash style guide. I don’t know how I missed it. It has some very sensible rules in it.

In the end, I came up with something like this:


log() { echo "$(date): $*" ; }             # logs date and message to stdout
die() { log "$@" 1>&2 ; usage ; exit 1 ; } # logs error message to stderr and exits
usage() {                                  # logs usage info to stderr
cat <<EOF 1>&2
Usage: $0 [OPTION]…
  -a    this option adds bunnies
EOF
}

log "We're going to doSomething"
doSomething || die "doSomething failed"
doSomethingElse || die "doSomethingElse failed"


Error handling

One of the things you notice right away when you’re automating stuff is that it can be a huge pain when it falls over. One of the first (painful) things you notice about bash as a scripting language is that it doesn’t fail fast when something goes wrong. Bash provides a feature to enable failing fast, set -e, but it is not entirely reliable, as described here.
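A minimal example of one of those classic failure modes (all names here are illustrative): when a function is called as part of an if condition, set -e is suspended inside it, so an internal failure is silently swallowed.

```shell
#!/usr/bin/env bash
# When a function is called as part of an if condition, set -e is
# suspended inside it, so an internal failure goes unnoticed.
set -e

ran_after_failure=""
check() {
  false                  # fails, but set -e is suspended here...
  ran_after_failure=yes  # ...so execution carries on regardless
}

if check; then
  verdict="check reported success"
fi
echo "$verdict"   # prints: check reported success
```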

You can also use the trap '{code}' ERR pattern, but personally I’m not keen on enclosing my code inside quotes.
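For completeness, a small sketch of the trap-on-ERR pattern. The handler body sits inside quotes, which is the objection above; here it just counts failures rather than exiting, purely for illustration.

```shell
#!/usr/bin/env bash
# The ERR trap runs the quoted handler whenever a simple command
# exits non-zero. Here the handler counts failures instead of
# exiting, which is an illustrative choice.
err_count=0
trap 'err_count=$((err_count + 1)); echo "a command failed" 1>&2' ERR

true    # succeeds: the trap does not fire
false   # fails: the trap fires once
echo "failures so far: $err_count"   # prints: failures so far: 1
```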

The above gist shows my preferred solution to error handling. Essentially, everything you call in the script has a return code indicating success or failure, and this can be OR’ed (with ||) with a simple function that prints an error and exits. It has the benefit of being explicit about which commands you are happy to let fail and which should kill execution. You can also put some clean-up code in the die() function if needed.
Clearly this solution is not ideal. If every line has to be followed by an || die, your scripts are going to be harder to read. Sadly, I haven’t found a better way than this that will stop in all circumstances. Clarification: I didn’t invent this; I read it somewhere, maybe here, and think it’s the neatest solution out there.


Logging

Scripts should log from time to time to let you know what they’re doing. The snippet above shows that you can use a simple function to do your logging instead of simply echoing. The reason I prefer this is that it allows you to redirect, or switch off, all of your logging in a single place. In practice, I may even have debug(), info(), warn()… functions and set a log level at the top of the script. This isn’t too much work and follows naturally from what is shown in the snippet.
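A rough sketch of that leveled-logging idea. The function names and the LOG_LEVEL convention are illustrative choices, not a standard:

```shell
#!/usr/bin/env bash
# Leveled logging: each function checks a threshold set once at the
# top of the script before delegating to a single _log helper.
LOG_LEVEL=${LOG_LEVEL:-1}   # 0=debug, 1=info, 2=warn

_log()  { echo "$(date): $*" ; }
debug() { [ "$LOG_LEVEL" -le 0 ] && _log "DEBUG: $*" ; return 0 ; }
info()  { [ "$LOG_LEVEL" -le 1 ] && _log "INFO: $*" ; return 0 ; }
warn()  { [ "$LOG_LEVEL" -le 2 ] && _log "WARN: $*" 1>&2 ; return 0 ; }

debug "not shown at the default level"
info "this appears on stdout"
warn "this appears on stderr"
```

Because everything funnels through _log, redirecting or silencing all logging is a one-line change.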


Usage

Usage instructions are standard across bash scripts, and my snippet above shows my favourite way of displaying them (by calling the usage() function). The advantage of using cat and a here document instead of echo is that the formatting is the same when it’s printed out as it is in the file. My favourite thing about this is that, if you put it at the top of the script, it serves as a simple, readable piece of documentation for developers as well as users.


Structure

It isn’t obvious from the snippet above, but in bash you have to define a function before you can call it. That means that if you follow good coding practice and split your functionality into functions, the main flow of your script is going to start near the bottom.
Some people recommend putting defined functions into a separate file and sourcing it with something like
source functions.sh
at the top of your script. This is fine, I guess, but it means you’re switching between files as you try to understand your script. Worse, your neat script in a single file is now a (small) code base!
Another option is to define the functions in the same file and have the main flow defined in a main() function at the top. This main() function calls out to the other functions when it runs. The dependencies work fine so long as you call the main function after the other functions have been defined (i.e. at the bottom of the script).
I like this option but it’s odd having a call to main on its own at the bottom of the script. It may well not be all that clear to other devs and it’s very easy for it to be missed off if someone is copying and pasting, or editing, code.
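A sketch of that main-at-the-top layout, with placeholder function names:

```shell
#!/usr/bin/env bash
# The reader meets the high-level flow first; setup and doSomething
# are placeholder names for real work.

main() {
  setup
  doSomething
}

setup()       { echo "setting up" ; }
doSomething() { echo "doing something" ; }

# This call must come last: bash only knows about functions defined
# above this point. "$@" forwards the script's arguments to main.
main "$@"
```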

In general, lately I am leaning towards using bash scripts for orchestration and simple automation. When there’s heavy lifting to be done, I’d rather call out to a python script than try to use bash for something more complicated than it is really suited to.

Unit Testing

Cards on the table: I don’t unit test my bash scripts. I can imagine having a --run-tests option in your script’s parameters that runs a set of unit tests hidden away at the bottom of the script. To be honest, though, I feel like this is yet another argument in favour of using a better language for any heavy lifting and keeping bash for orchestration. This stackoverflow answer seems very sensible to me.
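If you did want to try it, a sketch of that hypothetical --run-tests idea might look like this (all names are illustrative):

```shell
#!/usr/bin/env bash
# Real work lives in functions; a flag runs a few assertions
# instead of the main flow.

add() { echo $(( $1 + $2 )) ; }

run_tests() {
  [ "$(add 2 2)" = "4" ] || { echo "add failed" 1>&2 ; return 1 ; }
  echo "all tests passed"
}

if [ "${1:-}" = "--run-tests" ]; then
  run_tests
else
  add 1 2   # the normal flow; prints 3
fi
```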

Parameters and options

I haven’t said anything about parameters and options. There are tons of sites and stackoverflow pages devoted to getopts and how to use it. I won’t repeat them here, but I will say that, on several occasions, I’ve taken the getopts part, put it in a function, and felt afterwards that I hadn’t made things much clearer. These days I lean towards putting it at the top of the script, under the usage function. It is, however, worth processing your parameters to get away from $1 and $2 as quickly as possible.
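A minimal getopts loop matching the -a option from the usage text in the gist; parse_args and input_file are illustrative names:

```shell
#!/usr/bin/env bash
# Parse options with getopts, then move positional parameters into
# named variables straight away.

parse_args() {
  bunnies=false
  local OPTIND=1 opt
  while getopts "a" opt; do
    case "$opt" in
      a) bunnies=true ;;
      *) return 1 ;;
    esac
  done
  shift $((OPTIND - 1))
  input_file=${1:-}   # get away from $1/$2 as quickly as possible
}

parse_args "$@"
echo "bunnies=$bunnies input_file=$input_file"
```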
