Checking Bash exit status of several commands efficiently


Is there something similar to pipefail for multiple commands, like a 'try' statement within bash? I would like to do something like this:

echo "trying stuff"
try {
    command1
    command2
    command3
}

And at any point, if any command fails, drop out and echo out the error of that command. I don't want to have to do something like:

command1
if [ $? -ne 0 ]; then
    echo "command1 borked it"
fi

command2
if [ $? -ne 0 ]; then
    echo "command2 borked it"
fi

And so on... or anything like:

set -o pipefail
command1 "arg1" "arg2" | command2 "arg1" "arg2" | command3

Because I believe (correct me if I'm wrong) the arguments of each command will interfere with each other. These two methods seem horribly long-winded and nasty to me, so I'm here appealing for a more efficient method.

You can write a function that launches and tests the command for you. Assume command1 and command2 are environment variables that have been set to a command.

function mytest {
    "$@"
    local status=$?
    if [ $status -ne 0 ]; then
        echo "error with $1" >&2
    fi
    return $status
}

mytest $command1
mytest $command2


What do you mean by "drop out and echo the error"? If you mean you want the script to terminate as soon as any command fails, then just do

set -e    # DON'T do this.  See commentary below.

at the start of the script (but note warning below). Do not bother echoing the error message: let the failing command handle that. In other words, if you do:


set -e    # Use caution; see above.
command1
command2
command3

and command2 fails, while printing an error message to stderr, then it seems that you have achieved what you want. (Unless I misinterpret what you want!)

As a corollary, any command that you write must behave well: it must report errors to stderr instead of stdout (the sample code in the question prints errors to stdout) and it must exit with a non-zero status when it fails.
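As an illustration, a well-behaved command in this sense might look like the following sketch (the function name and error message are made up for the example):

```shell
#!/usr/bin/env bash
# Hypothetical well-behaved command: diagnostics go to stderr, the exit
# status is non-zero on failure, and normal output goes to stdout.
process_file() {
    local file=$1
    if [[ ! -r $file ]]; then
        echo "process_file: $file: cannot read" >&2
        return 1
    fi
    wc -l < "$file"
}
```

With that contract in place, `process_file missing.txt || exit` stops the script and the user still sees a meaningful message on stderr.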

However, I no longer consider this to be a good practice. set -e has changed its semantics with different versions of bash, and although it works fine for a simple script, there are so many edge cases that it is essentially unusable. (Consider things like: set -e; foo() { false; echo should not print; } ; foo && echo ok The semantics here are somewhat reasonable, but if you refactor code into a function that relied on the option setting to terminate early, you can easily get bitten.) IMO it is better to write:
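The surprise is easy to reproduce. In this sketch, `false` does not abort the function, because `set -e` is suppressed inside any command that appears on the left of `&&` (or `||`, or in an `if` condition):

```shell
#!/usr/bin/env bash
# Run the demo in a child shell so its set -e doesn't affect this script.
out=$(bash -c '
set -e
foo() {
    false                  # under plain set -e this line would abort the script
    echo "should not print"
}
foo && echo ok             # set -e is disabled inside foo here, so both echos run
')
echo "$out"
```

A function that relies on `set -e` to bail out early therefore changes behavior depending on how its caller invokes it, which is exactly the kind of edge case warned about above.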

command1 || exit
command2 || exit
command3 || exit

or:

command1 && command2 && command3
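Applied to concrete commands (the paths here are illustrative), the pattern reads naturally; each failing command prints its own diagnostic, and `|| exit` stops the script while preserving that command's exit status:

```shell
#!/usr/bin/env bash
workdir=$(mktemp -d) || exit
cd "$workdir"        || exit
date > stamp.txt     || exit
echo "all steps succeeded"
```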


I have a set of scripting functions that I use extensively on my Red Hat system. They use the system functions from /etc/init.d/functions to print green [ OK ] and red [FAILED] status indicators.

You can optionally set the $LOG_STEPS variable to a log file name if you want to log which commands fail.

step "Installing XFS filesystem tools:"
try rpm -i xfsprogs-*.rpm

step "Configuring udev:"
try cp *.rules /etc/udev/rules.d
try udevtrigger

step "Adding rc.postsysinit hook:"
try cp rc.postsysinit /etc/rc.d/
try ln -s rc.d/rc.postsysinit /etc/rc.postsysinit
try echo $'\nexec /etc/rc.postsysinit' >> /etc/rc.sysinit

The output looks like this:

Installing XFS filesystem tools:        [  OK  ]
Configuring udev:                       [FAILED]
Adding rc.postsysinit hook:             [  OK  ]

Here are the functions themselves:

#!/bin/bash

. /etc/init.d/functions

# Use step(), try(), and next() to perform a series of commands and print
# [  OK  ] or [FAILED] at the end. The step as a whole fails if any individual
# command fails.
#
# Example:
#     step "Remounting / and /boot as read-write:"
#     try mount -o remount,rw /
#     try mount -o remount,rw /boot
#     next
step() {
    echo -n "$@"

    STEP_OK=0
    [[ -w /tmp ]] && echo $STEP_OK > /tmp/step.$$
}

try() {
    # Check for `-b' argument to run command in the background.
    local BG=

    [[ $1 == -b ]] && { BG=1; shift; }
    [[ $1 == -- ]] && {       shift; }

    # Run the command.
    if [[ -z $BG ]]; then
        "$@"
    else
        "$@" &
    fi

    # Check if command failed and update $STEP_OK if so.
    local EXIT_CODE=$?

    if [[ $EXIT_CODE -ne 0 ]]; then
        STEP_OK=$EXIT_CODE
        [[ -w /tmp ]] && echo $STEP_OK > /tmp/step.$$

        if [[ -n $LOG_STEPS ]]; then
            local FILE=$(readlink -m "${BASH_SOURCE[1]}")
            local LINE=${BASH_LINENO[0]}

            echo "$FILE: line $LINE: Command \`$*' failed with exit code $EXIT_CODE." >> "$LOG_STEPS"
        fi
    fi

    return $EXIT_CODE
}

next() {
    [[ -f /tmp/step.$$ ]] && { STEP_OK=$(< /tmp/step.$$); rm -f /tmp/step.$$; }
    [[ $STEP_OK -eq 0 ]]  && echo_success || echo_failure
    echo

    # Reset the step flag for the next step.
    STEP_OK=0

    return $STEP_OK
}

For what it's worth, a shorter way to write code to check each command for success is:

command1 || echo "command1 borked it"
command2 || echo "command2 borked it"

It's still tedious but at least it's readable.
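If you also want to stop at the first failure and keep the failing command's status, the same idiom extends with a group command (`run_or_die` is a made-up helper name for this sketch):

```shell
#!/usr/bin/env bash
run_or_die() {
    "$@" || { local status=$?; echo "$1 borked it" >&2; exit "$status"; }
}

# Demonstrate in a subshell so the early exit doesn't stop this script:
( run_or_die false; echo "never reached" )
echo "subshell exited with status $?"
```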


An alternative is simply to join the commands together with && so that the first one to fail prevents the remainder from executing:

command1 &&
  command2 &&
  command3

This isn't the syntax you asked for in the question, but it's a common pattern for the use case you describe. In general the commands should be responsible for printing failures so that you don't have to do so manually (maybe with a -q flag to silence errors when you don't want them). If you have the ability to modify these commands, I'd edit them to yell on failure, rather than wrap them in something else that does so.
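The "-q flag to silence errors" idea from the paragraph above might look like this sketch (the command name and flag handling are hypothetical):

```shell
#!/usr/bin/env bash
# Hypothetical command that reports its own failures by default,
# with -q to silence them when the caller doesn't want the noise.
fetch_data() {
    local quiet=
    [[ ${1-} == -q ]] && { quiet=1; shift; }
    if [[ ! -e $1 ]]; then
        [[ -z $quiet ]] && echo "fetch_data: $1: not found" >&2
        return 1
    fi
    cat "$1"
}
```

The caller can then write `fetch_data input.txt || exit` and let the command do the talking, or `fetch_data -q input.txt` when a failure is expected and should stay quiet.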

Notice also that you don't need to do:

command1
if [ $? -ne 0 ]; then
    echo "command1 failed"
fi

You can simply say:

if ! command1; then
    echo "command1 failed"
fi

And when you do need to save and check return codes later, use an arithmetic context instead of [ ... -ne ... ]:

command1
ret=$?
# ... do something else ...
if (( ret != 0 )); then
    echo "command1 failed with status $ret"
fi





  • Take a look to the unofficial bash strict mode: set -euo pipefail.
  • @PabloBianchi, set -e is a horrid idea. See the exercises in BashFAQ #105 discussing just a few of the unexpected edge cases it introduces, and/or the comparison showing incompatibilities between different shells' (and shell versions') implementations.
  • Don't use $*, it'll fail if any arguments have spaces in them; use "$@" instead. Similarly, put $1 inside the quotes in the echo command.
  • Also I'd avoid the name test as that is a built-in command.
  • This is the method I went with. To be honest, I don't think I was clear enough in my original post, but this method allows me to write my own 'test' function so I can then perform whatever error actions I like in there, relevant to the actions performed in the script. Thanks :)
  • Wouldn't the exit code returned by test() always be 0 in case of an error, since the last command executed was 'echo'? You might need to save the value of $? first.
  • This is not a good idea, and it encourages bad practice. Consider the simple case of ls. If you invoke ls foo and get an error message of the form ls: foo: No such file or directory\n you understand the problem. If instead you get ls: foo: No such file or directory\nerror with ls\n you become distracted by superfluous information. In this case it is easy to argue that the superfluity is trivial, but it quickly grows. Concise error messages are important. More importantly, this type of wrapper encourages writers to omit good error messages altogether.
  • Be advised that while this solution is the simplest, it does not let you perform any cleanup on failure.
  • Cleanup can be accomplished with traps. (eg trap some_func 0 will execute some_func at exit)
  • Also note that the semantics of errexit (set -e) have changed in different versions of bash, and will often behave unexpectedly during function invocation and other settings. I no longer recommend its use. IMO, it is better to write || exit explicitly after each command.
  • This is pure gold. While I understand how to use the script, I don't fully grasp each step; it's definitely outside my bash scripting knowledge, but I think it's a work of art nonetheless.
  • Does this tool have a formal name? I'd love to read a man page on this style of step/try/next logging
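To expand on the trap-based cleanup mentioned in the comments, a minimal sketch: an EXIT trap fires whether the script finishes normally or bails out early with exit, so temporary files are removed on every path.

```shell
#!/usr/bin/env bash
tmpdir=$(mktemp -d) || exit

( # child shell, so the simulated early exit doesn't stop this demo
  trap 'rm -rf "$tmpdir"' EXIT
  false || exit 1          # simulated failure: the trap still runs
  echo "never reached"
)
echo "child exited with status $?"
[[ -d $tmpdir ]] || echo "tmpdir was removed by the trap"
```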