Aborting a Shell Script When Any Command Fails

Bash scripts give us access to a powerful programming language and are an efficient way to run several commands one after the other. By default, however, even if one command fails, the remaining commands still run, which can lead to unexpected behavior or data corruption.

We'll learn how to add safeguards so that these errors don't happen. The example code has been tested in Bash; set -e also works in other POSIX-compatible shells, but note that set -o pipefail and the PIPESTATUS array used later are not available in all of them.

The Problem

Let's first take a look at how Bash handles errors by default. Say we have a simple shell script called hello.sh that prints the word "hello" followed by the word "world".

#!/bin/bash
echo hello
echo world

Running it gives the expected result

$ ./hello.sh
hello
world

Next, we'll add a statement that's guaranteed to fail.

#!/bin/bash
echo hello
cat non-existing-file
echo world

When executed, the script reports the error but keeps going: "world" is still printed

$ ./hello.sh
hello
cat: non-existing-file: No such file or directory
world

The script also returns exit code 0, which signals that everything went well, despite the error.

$ echo $?
0
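Without set -e, every command's status has to be checked by hand through the $? variable, which holds the exit status of the most recently executed command. A minimal sketch of that manual approach:

```shell
#!/bin/bash
# Without set -e, each command's exit status must be checked by hand.
# $? refers to the most recently executed command, so read it right away.
cat non-existing-file 2>/dev/null
cat_status=$?
echo "cat exited with status $cat_status"    # non-zero: the file is missing

true
true_status=$?
echo "true exited with status $true_status"  # 0: the command succeeded
```

Doing this after every single command quickly becomes tedious, which is exactly the problem set -e solves.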

Exit on First Error

To make a shell script exit whenever any command within it fails, use the set -e option. This tells the shell to exit immediately as soon as a command exits with a non-zero status.

Here's an example of how you can use the set -e option in a shell script

#!/bin/bash

# Set the -e option
set -e

# Run some commands
echo "Running command 1"
command1
echo "Running command 2"
command2
echo "Running command 3"
command3

# The script will exit if any of the commands above fail

echo "All commands completed successfully"

In this example, if any of the commands command1, command2, or command3 fail (i.e., exit with a non-zero status), the script will exit immediately and the remaining commands will not be executed.

Example with set -e

Let's modify our previous example to use set -e

#!/bin/bash
set -e
echo hello
cat non-existing-file
echo world

Now when we run it, the script stops after the failed command

$ ./hello.sh
hello
cat: non-existing-file: No such file or directory
$ echo $?
1
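Sometimes a particular command is allowed to fail. A common Bash idiom under set -e is to append || true to just that command: the compound command then always succeeds, so set -e does not abort the script there. A small sketch:

```shell
#!/bin/bash
set -e
# The cat fails, but "|| true" makes the compound command succeed,
# so set -e does not abort the script at this point.
cat non-existing-file 2>/dev/null || true
reached="yes"
echo "still running"
```

Use this sparingly, and only for commands whose failure is genuinely harmless.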

Using Pipefail

Unfortunately, set -e alone won't help if your script contains pipelines. By default, only the exit status of the last command in a pipeline is considered. For example

#!/bin/bash
set -e
cat non-existing-file | echo hello
echo world

Even though we use set -e, the script continues because the last command in the pipeline (echo hello) succeeds

$ ./hello.sh
hello
cat: non-existing-file: No such file or directory
world

To handle pipeline failures properly, we need to add -o pipefail

#!/bin/bash
set -eo pipefail
cat non-existing-file | echo hello
echo world

The pipefail option tells Bash that if any command in a pipeline fails, the pipeline's exit status is that of the rightmost failed command, not just whatever the last command returned.

$ ./hello.sh
hello
cat: non-existing-file: No such file or directory
$ echo $?
1
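If you still need the status of each individual command in a pipeline, Bash's PIPESTATUS array (a Bash-specific feature) records them. Both $? and PIPESTATUS are overwritten by the very next command, so they must be read in a single expansion immediately after the pipeline. A small sketch:

```shell
#!/bin/bash
set -o pipefail
# false fails (status 1) while true succeeds (status 0). With pipefail
# the pipeline as a whole reports the rightmost non-zero status.
false | true
# $? and PIPESTATUS are expanded here before the assignment runs,
# so both still refer to the pipeline above.
result="overall=$? per-command=${PIPESTATUS[*]}"
echo "$result"
```

Without pipefail, the same pipeline would report an overall status of 0 even though its first command failed.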

Best Practices

For robust shell scripting, always include both options at the beginning of your scripts

#!/bin/bash
set -eo pipefail

# Your script commands here

You can also combine them in a single line

#!/bin/bash
set -euo pipefail  # -u catches undefined variables too
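To see what -u adds, here is a small sketch that triggers an unbound-variable error inside a command substitution, so the outer script survives to report what happened. The name undefined_var is a placeholder assumed to be unset:

```shell
#!/bin/bash
# Demonstrate set -u inside a command substitution (a subshell), so the
# outer script keeps running and can report the result.
output=$(
  exec 2>/dev/null        # silence the "unbound variable" message
  unset undefined_var     # ensure the placeholder variable is unset
  set -u
  echo "before"
  echo "$undefined_var"   # unbound variable: the subshell exits here
  echo "after"            # never reached
)
substatus=$?
echo "captured output: $output"
echo "subshell exit status: $substatus"
```

Only "before" is captured, and the subshell exits with a non-zero status, showing that -u stops execution at the first use of an unset variable.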

Conclusion

Using set -e makes shell scripts exit immediately when any command fails, preventing cascading errors. Adding -o pipefail ensures that failures within pipelines are also caught. Always include set -eo pipefail at the beginning of your shell scripts for safer and more predictable execution.

Updated on: 2026-03-17T09:01:38+05:30
