Publishing docker swarm mode port only to localhost

I've created a Docker swarm with a website inside it, publishing port 8080 to the outside. I want to consume that port with Nginx running outside the swarm on port 80, which will perform server name resolution and host static files.
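
The service is published with something like the following (the image name and the container-internal port are just placeholders, not my real values):

docker service create --name website --publish 8080:80 my-website-image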

The problem is that swarm automatically publishes port 8080 to the internet using iptables, and I don't know whether it's possible to allow only the local nginx instance to use it. Currently users can access the site on both port 80 and port 8080, and the second one is broken (no images).

I tried playing with ufw, but it doesn't work. Manually changing iptables would also be a nightmare, as I would have to do it on every swarm node after every update. Any solutions?

EDIT: I can't use the same network for the swarm and the nginx outside the swarm, because an overlay network is incompatible with normal, single-host containers. Theoretically I could put nginx into the swarm, but I prefer to keep it separate, on the same host that holds the static files.



Solution 1:[1]

No, right now you are not able to bind a published port to an IP (not even to 127.0.0.1) or to an interface (like the loopback interface lo). But there are two issues dealing with this problem:

So you could subscribe to them and/or participate in the discussion.


Solution 2:[2]

Yes: if the containers are in the same network, you don't need to publish ports for them to access each other.

In your case you can publish port 80 from the nginx container and not publish any ports from the website container. Nginx can still reach the website container on port 8080 as long as both containers are in the same Docker network.
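
A minimal sketch of that setup on a single host (the network name and website image are placeholders, and the website is assumed to listen on 8080 inside its container):

docker network create webnet
docker run -d --name website --network webnet my-website-image
docker run -d --name nginx --network webnet -p 80:80 nginx

nginx can then proxy to http://website:8080 by container name, and only port 80 is published on the host.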

Solution 3:[3]

"Temp" solution that I am using is leaning on alpine/socat image.

Idea:

  • run an additional lightweight container outside of the swarm with some port-forwarding tool (socat is used here)
  • attach that container to the same network as the swarm service we want to expose only to localhost
  • publish this helper container's port as 127.0.0.1:HOST_PORT:INTERNAL_PORT
  • use socat inside this container to forward traffic to the swarm service

Command:

docker run --name socat-elasticsearch -p 127.0.0.1:9200:9200 --network elasticsearch --rm -it alpine/socat tcp-listen:9200,reuseaddr,fork tcp:elasticsearch:9200

The -it flags can be removed once you've confirmed that everything works for you. Add -d to run it daemonized.

Daemon command:

docker run --name socat-elasticsearch -d -p 127.0.0.1:9200:9200 --network elasticsearch --rm alpine/socat tcp-listen:9200,reuseaddr,fork tcp:elasticsearch:9200

My use case:

Sometimes I need to access ES directly, so this approach works just fine for me. I would like to see a native Docker solution, though.
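
With the helper container running, a quick sanity check from the host could look like this (assuming Elasticsearch answers on its default port):

curl http://127.0.0.1:9200/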

P.S. Docker's auto-restart feature could be used if this needs to stay up and running after a host machine restart.

See restart policy docs here:

https://docs.docker.com/engine/reference/commandline/run/#restart-policies---restart
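
For example, the daemon command above could be adapted roughly like this; note that --rm has to be dropped, because Docker does not allow combining --rm with a restart policy:

docker run --name socat-elasticsearch -d --restart unless-stopped -p 127.0.0.1:9200:9200 --network elasticsearch alpine/socat tcp-listen:9200,reuseaddr,fork tcp:elasticsearch:9200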

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Murmel
Solution 2: Elton Stoneman
Solution 3: