I can share files between Docker containers; how do I share the same files with the host?

I have a number of containers running on a Pi 4, defined by a docker-compose.yml file (extract below). They collect data from a piece of hardware which records data to /home/hardware/data/ in the “hardware” container. This is successfully shared with the “celery_worker” container via the “data” named volume.

My problem is that I also need the files to be shared with the host (e.g. /tmp/hardware_data), because they need to be read by another application on another machine on my home network.

  • Replacing the named volume with a bind mount (/tmp/hardware_data:/home/hardware/data/) does not work.

I have googled, read the Docker documentation, and watched a number of YouTube videos, but am no nearer a solution.

celery_worker:
    image: celery_worker:latest
    container_name: celery_worker
    volumes:
      - data:/srv/project/hardware/hardware_data/

hardware:
    image: hardware:latest
    container_name: hardware
    volumes:
      - data:/home/hardware/data/

volumes:
  #to share hardware data
  data: {}

Update

Konrad’s advice to get rid of the named volume made sense, and my docker-compose.yml file (extract) now looks like this:

celery_worker:
    image: celery_worker:latest
    container_name: celery_worker
    volumes:
      - /tmp/hardware_data:/srv/project/hardware/hardware_data/

hardware:
    image: hardware:latest
    container_name: hardware
    volumes:
      - /tmp/hardware_data:/home/hardware/data/

Whilst in development I am running Docker as root. I have touched a test file in the hardware container’s Dockerfile. If I run the containers with the volumes commented out, the test file is created in /home/hardware/data/ in the hardware container with the permissions `-rw-r--r-- root root`.

When I run docker-compose.yml with the volumes defined, I get three empty directories (two in the containers, one on the host), i.e. the hardware container’s test file is not present in any of them. If I touch a new test file in any of the directories, it is shared across all three with the same permissions as the original test file: `-rw-r--r-- root root`.

Based on this, I can’t see that it is a permissions issue.



Solution 1:[1]

Solution

I got some clues from this thread: https://forums.docker.com/t/how-to-access-docker-volume-data-from-host-machine/88063. As Meyay’s answer puts it: “Only with named volumes existing data is copied from the container target folder into the volume.”

This code works. Please note: /tmp/data must already exist on the host, with the same permissions (and, I think, the same owner) as the target directory in the container.

celery_worker:
    image: celery_worker:latest
    container_name: celery_worker
    volumes:
      - data:/srv/project/hardware/hardware_data/

hardware:
    image: hardware:latest
    container_name: hardware
    volumes:
      - data:/home/hardware/data/

volumes:
  data:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /tmp/data
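Because the `o: bind` driver options above bind-mount an existing host path rather than letting Docker create one, the directory has to be prepared before `docker compose up`. A minimal sketch, assuming the containers run as root during development as described in the question:

```shell
#!/bin/sh
# Sketch: pre-create the host directory backing the bind-mounted named volume.
# Docker's local driver with `o: bind` fails to start if /tmp/data is missing.
mkdir -p /tmp/data

# Match ownership to whatever UID writes inside the container; here the
# containers run as root, so root ownership is assumed (uncomment if needed):
# chown root:root /tmp/data

chmod 755 /tmp/data
```

After this, `docker compose up` should mount /tmp/data into both containers via the `data` volume.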

Solution 2:[2]

You need to get rid of the named volume and use a folder on the host machine, like this:

celery_worker:
    image: celery_worker:latest
    container_name: celery_worker
    volumes:
      - /tmp/hardware_data:/srv/project/hardware/hardware_data/

hardware:
    image: hardware:latest
    container_name: hardware
    volumes:
      - /tmp/hardware_data:/home/hardware/data/

Edit:

Without any details of the application running in hardware service, the only reason I can think of why it would not write data to the bound host directory is an issue of permissions.

Here's an article about this topic.
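If it is permissions, the usual fix is to make the host directory writable by the UID that the containerised process runs as. A hedged sketch (the path comes from this thread; the permissive mode is a development shortcut, not a production recommendation):

```shell
#!/bin/sh
# Sketch: make the bind-mounted host directory writable for development.
# In production you would instead chown it to the UID of the process
# inside the container, rather than opening it to everyone.
mkdir -p /tmp/hardware_data
chmod 777 /tmp/hardware_data  # development only: any UID may write
```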

Solution 3:[3]

I just want to share a complete solution that lets you share files between containers over the Docker network in a portable way, without using volumes or installing ssh in the container.

A simple bash function that exposes the file on the source machine:

# ------------------------------------------------------------------------------
# make the file available to another machine via the network
#
# run this in the background to avoid blocking the main script execution
# ------------------------------------------------------------------------------
function exposeFile() {
    local file port
    file="$1"
    port=1384

    echo "exposing the file for another machine with..."
    echo "   file: $file"
    echo "   port: $port"

    # serve the file as a minimal HTTP/1.0 response
    # note: OpenBSD netcat accepts `-l "$port"`; traditional netcat needs `-l -p "$port"`
    while :
    do
        { echo -ne "HTTP/1.0 200 OK\r\n\r\n"; cat "$file"; } | nc -l "$port"
    done
}

The endless loop is necessary if you want to download the file more than once, because Netcat exits after each download.
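What goes over the wire is just a minimal HTTP/1.0 reply: a status line, a blank line to end the headers, then the raw file bytes. You can inspect the exact payload the loop pipes into netcat without running a server, e.g.:

```shell
#!/bin/sh
# Sketch: build the same response exposeFile sends, but write it to a file.
printf 'sample payload\n' > /tmp/file.txt
{ printf 'HTTP/1.0 200 OK\r\n\r\n'; cat /tmp/file.txt; } > /tmp/response.bin

# the file body follows the blank line that terminates the headers
cat /tmp/response.bin
```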

Call the bash method from your main script to expose the file when you need it:

#!/bin/bash
...
exposeFile "path/to/the/file.zip" &

Then you can use a simple wget to download the file on the target machine:

function fileDownload {
    echo "downloading the file..."

    local fileHome file
    fileHome="/download/directory/"
    file="myfile.zip"

    local remoteHost remotePort
    remoteHost="remote-host-or-ip"
    remotePort=1384

    mkdir -p "$fileHome"
    wget -O "$fileHome/$file" "$remoteHost:$remotePort"
}

I hope that this will help you to save some time ;)

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 SgtBilko
Solution 2
Solution 3 zappee