How to make containers run on different nodes when using docker stack deploy

I have three nodes in a Docker Swarm cluster (all of them are managers), and I want to run a ZooKeeper cluster on these three nodes.

My docker-compose file:

version: '3.8'
services:
  zookeeper1:
    image: 'bitnami/zookeeper:latest'
    hostname: "zookeeper-1"
    ports:
      - '2181'
      - '2888'
      - '3888'
    volumes:
      - "zookeeper-1:/opt/bitnami/zookeeper/conf"
    environment:
      - ZOO_SERVER_ID=1
      - ZOO_SERVERS=0.0.0.0:2888:3888,zookeeper-2:2888:3888,zookeeper-3:2888:3888
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      - network_test
  zookeeper2:
    image: 'bitnami/zookeeper:latest'
    hostname: "zookeeper-2"
    ports:
      - '2181'
      - '2888'
      - '3888'
    volumes:
      - "zookeeper-2:/opt/bitnami/zookeeper/conf"
    environment:
      - ZOO_SERVER_ID=2
      - ZOO_SERVERS=zookeeper-1:2888:3888,0.0.0.0:2888:3888,zookeeper-3:2888:3888
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      - network_test
  zookeeper3:
    image: 'bitnami/zookeeper:latest'
    hostname: "zookeeper-3"
    ports:
      - '2181'
      - '2888'
      - '3888'
    volumes:
      - "zookeeper-3:/opt/bitnami/zookeeper/conf"
    environment:
      - ZOO_SERVER_ID=3
      - ZOO_SERVERS=zookeeper-1:2888:3888,zookeeper-2:2888:3888,0.0.0.0:2888:3888
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      - network_test
volumes:
  # top-level declarations for the named volumes referenced above;
  # without them "docker stack deploy" rejects the file
  zookeeper-1:
  zookeeper-2:
  zookeeper-3:

networks:
  network_test:
    driver: overlay

I use docker stack deploy to run it. My expectation is that each ZooKeeper instance will run on a different node, but sometimes one node starts two ZooKeeper containers.

Can docker stack deploy enforce this placement?

thanks



Solution 1:[1]

To start a service on each available node in your Docker Swarm cluster, you need to run it in global mode.
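As a reference only (global mode does not fit this question, since each instance needs its own server ID and volume), a globally scheduled service would look like this; the service definition is an illustrative sketch:

```yaml
# Sketch of global mode: Swarm runs exactly one task of this
# service on every available node in the cluster.
services:
  zookeeper:
    image: bitnami/zookeeper:latest
    deploy:
      mode: global
```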

In your case, however, because each ZooKeeper service has its own volume, you can use placement constraints to control which nodes a service can be assigned to. Add the following section to each ZooKeeper service so that each instance runs on a different node:

services:
  ...
  zookeeper1:
    ...
    deploy:
      placement:
        constraints:
          - node.hostname==node1
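Applied to all three services from the question, the deploy sections could look like this; the hostnames node1/node2/node3 are placeholders, so substitute the names shown by `docker node ls` on your cluster:

```yaml
services:
  zookeeper1:
    # ...image, ports, volumes, environment as in the question...
    deploy:
      placement:
        constraints:
          - node.hostname==node1   # placeholder hostname
  zookeeper2:
    # ...
    deploy:
      placement:
        constraints:
          - node.hostname==node2   # placeholder hostname
  zookeeper3:
    # ...
    deploy:
      placement:
        constraints:
          - node.hostname==node3   # placeholder hostname
```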

Solution 2:[2]

If you roll your zookeepers into a single service, then you can use max_replicas_per_node.

Like this:

version: "3.9"

volumes:
  zookeeper:
    name: '{{index .Service.Labels "com.docker.stack.namespace"}}_zookeeper-{{.Task.Slot}}'

services:
  zookeeper:
    image: zookeeper:latest
    hostname: zoo{{.Task.Slot}}
    volumes:
      - zookeeper:/conf
    environment:
      ZOO_MY_ID: '{{.Task.Slot}}'
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
      ALLOW_ANONYMOUS_LOGIN: 'yes'
    ports:
      - 2181:2181
    deploy:
      replicas: 3
      placement:
        max_replicas_per_node: 1
        constraints:
          - node.role==manager  # all three nodes in the question are managers

I have used the official Docker image, rather than the Bitnami image, for demo purposes.

Service templates are used to assign each replica a hostname of the form "zoo1"..."zoo3", so that one service with three replicas can be used instead of three services. This also means that only port 2181 is published, and Docker's routing mesh will load-balance ZooKeeper clients across the ZooKeeper instances automatically.

As the original question included a unique volume per service, service template parameters are again used to assign each replica a volume name of the form "stack_zookeeper-1". However, this is a config volume and probably needs to be shared. Also, as ZooKeeper task replicas are migrated between swarm nodes, the volume will be created empty on each swarm node if the default volume driver (local) is used rather than a swarm-aware driver.

Finally, replicas: 3 and max_replicas_per_node: 1 ensure that three zoo tasks are started and never share a node.
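To verify the result, deploy the stack and list where each task landed; the stack name "zoo" and the compose file name are examples:

```shell
# Deploy the stack under an example name
docker stack deploy -c docker-compose.yml zoo

# One line per task; each should report a different node
docker service ps zoo_zookeeper --format '{{.Name}} {{.Node}} {{.CurrentState}}'
```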

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution sources:
Solution 1: cvitaa11
Solution 2: (author not listed)