How is redirecting output to `/dev/null` implemented in shells to result in detaching of background processes?

Context: Backup software executing hook scripts

I have a use case in which backup software is able to execute shell scripts as hooks before, after, etc. performing a backup. This is done to optionally set up an environment in which it is safe to take backups, e.g. by creating file-system-level snapshots, mounting those and reading the backups from the mount. For that reason the backup software by default waits until the hook script finishes.

I've implemented a slightly different use case by starting additional background processes within that hook script, which read data from within some VMs using SSH and write the read content into named pipes. Writing to a named pipe blocks until a reader is attached, and vice versa, so the started background processes really need to run concurrently with the backup software acting as the reader of the pipes. For this to happen, the main hook script needed to finish, so that the waiting backup software could continue.
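The blocking behaviour described above can be demonstrated in isolation. This is a minimal sketch, not taken from the hook script; the file names are illustrative:

```shell
#!/bin/sh
# Minimal demo of FIFO blocking semantics (illustrative names only).
tmpdir=$(mktemp -d)
mkfifo "${tmpdir}/pipe"

# The writer blocks until a reader opens the FIFO, so it has to run
# in the background while the reader attaches concurrently.
echo 'backup data' > "${tmpdir}/pipe" &

# Opening the FIFO for reading unblocks the writer; the line is
# transferred through the pipe.
read -r line < "${tmpdir}/pipe"
echo "read: ${line}"

wait                 # reap the background writer
rm -rf "${tmpdir}"
```

Running the writer in the foreground instead would deadlock the script, which is exactly why the hook script's SSH readers must be backgrounded while the backup software consumes the pipes.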

Problem: Wrong redirection results in hook-zombie

This didn't work as expected: the backup software waited forever for the hook script to finish, because I had gotten some redirection wrong. While I executed the commands in the background, and those commands themselves worked as expected, the redirection of the commands' output was done by the hook script itself. That left the hook script around as a zombie that never finished, so the backup software couldn't continue.

(
  trap '' HUP INT
  ${cmd_exec} <<< "${cmd}"
) < '/dev/null' > "${PATH_LOCAL_MNT2}/${db_name}" 2> '/dev/null' &

vs.

(
  trap '' HUP INT
  ${cmd_exec} <<< "${cmd}" > "${PATH_LOCAL_MNT2}/${db_name}"
) < '/dev/null' > '/dev/null' 2> '/dev/null' &

I think I understand what happens now: some process simply needs to actually read and write the data as part of the redirection, at least when shell-level redirection is used, as opposed to e.g. commands writing to files on their own. I'm using SSH, which doesn't seem to be able to do so and seems to rely entirely on shell-level redirection.

With the second snippet above I'm creating a subshell executed in the background; that subshell redirects the output of SSH into a file, and the parent shell detaches from all input and output of the created subshell. That detaching is what keeps the hook script from becoming a zombie: it simply doesn't need to handle any redirection anymore.
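The working pattern can be reduced to a self-contained sketch. A plain command pipeline stands in for the SSH invocation here, and `outfile` is an illustrative name, not from the original script:

```shell
#!/bin/sh
# Sketch of the working pattern: the redirection happens *inside*
# the backgrounded subshell, so the parent holds no descriptor
# pointing at the output file.
outfile=$(mktemp)

(
  trap '' HUP INT
  # Stand-in for the SSH command; writes its output itself.
  { sleep 1; echo 'vm dump'; } > "${outfile}"
) < /dev/null > /dev/null 2> /dev/null &

# The parent can finish immediately; the detached subshell keeps
# writing in the background.
echo 'parent done'
```

The parent prints `parent done` right away, while the subshell completes on its own roughly a second later.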

Question

Here's what I don't understand currently: redirecting to a file resulted in a zombie shell instance taking care of reading the SSH output and writing it to the file. From my understanding, there really is some process actively reading the SSH output and writing it to the file, isn't there? If so, why doesn't the same process need to actively read data and write it to /dev/null? That would result in a zombie staying around as well, but that's obviously not the case. Instead, redirecting to /dev/null is documented everywhere to detach from the associated channel like STDIN, STDOUT etc.

Why is that? Why and how is redirecting to /dev/null handled specially? Or am I still misunderstanding the whole redirection process, and the shell instance isn't really actively reading and writing data on its own?

Thanks!



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow