What do you do when one MinIO node has a full disk?

I have a cluster of 8 MinIO nodes. Each node has a 1.6 to 1.9 TB drive in it. Node 6 has less than a megabyte of free space, while the rest have roughly 200 GB to 1 TB free. Is there any way to trigger a rebalancing of the used space?
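
One way to confirm the skew from the MinIO side is mc's admin info command, assuming the mc client has been configured with an alias for this cluster (here called myminio):

mc admin info myminio

It reports each server's drive status and used capacity, which should show whether node 6 really holds a disproportionate share of the data.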

MinIO is run through Docker; here is the stack definition:

version: '3.7'
services:
  minio1:
    image: minio/minio:RELEASE.2021-03-17T02-33-02Z
    network_mode: "host"
    hostname: minio1
    volumes:
      - /opt/iqdata:/data
    ports:
      - "9001:9000"
    command: server http://minio{1...8}/data
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    deploy:
      replicas: 1
      restart_policy:
        delay: 10s
      placement:
        constraints:
          - node.labels.minio1==true
            
  minio2:
    image: minio/minio:RELEASE.2021-03-17T02-33-02Z
    network_mode: "host"
    hostname: minio2
    volumes:
      - /opt/iqdata:/data
    ports:
      - "9002:9000"
    command: server http://minio{1...8}/data
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    deploy:
      replicas: 1
      restart_policy:
        delay: 10s
      placement:
        constraints:
          - node.labels.minio2==true

  minio3:
    image: minio/minio:RELEASE.2021-03-17T02-33-02Z
    network_mode: "host"
    hostname: minio3
    volumes:
      - /opt/iqdata:/data
    ports:
      - "9003:9000"
    command: server http://minio{1...8}/data
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    deploy:
      replicas: 1
      restart_policy:
        delay: 10s
      placement:
        constraints:
          - node.labels.minio3==true

  minio4:
    image: minio/minio:RELEASE.2021-03-17T02-33-02Z
    network_mode: "host"
    hostname: minio4
    volumes:
      - /opt/iqdata:/data
    ports:
      - "9004:9000"
    command: server http://minio{1...8}/data
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3      
    deploy:
      replicas: 1
      restart_policy:
        delay: 10s
      placement:
        constraints:
          - node.labels.minio4==true

  minio5:
    image: minio/minio:RELEASE.2021-03-17T02-33-02Z
    network_mode: "host"
    hostname: minio5
    volumes:
      - /opt/iqdata:/data
    ports:
      - "9005:9000"
    command: server http://minio{1...8}/data
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3      
    deploy:
      replicas: 1
      restart_policy:
        delay: 10s
      placement:
        constraints:
          - node.labels.minio5==true

  minio6:
    image: minio/minio:RELEASE.2021-03-17T02-33-02Z
    network_mode: "host"
    hostname: minio6
    volumes:
      - /opt/iqdata:/data
    ports:
      - "9006:9000"
    command: server http://minio{1...8}/data
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3      
    deploy:
      replicas: 1
      restart_policy:
        delay: 10s
      placement:
        constraints:
          - node.labels.minio6==true

  minio7:
    image: minio/minio:RELEASE.2021-03-17T02-33-02Z
    network_mode: "host"
    hostname: minio7
    volumes:
      - /opt/iqdata:/data
    ports:
      - "9007:9000"
    command: server http://minio{1...8}/data
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3      
    deploy:
      replicas: 1
      restart_policy:
        delay: 10s
      placement:
        constraints:
          - node.labels.minio7==true

  minio8:
    image: minio/minio:RELEASE.2021-03-17T02-33-02Z
    network_mode: "host"
    hostname: minio8
    volumes:
      - /opt/iqdata:/data
    ports:
      - "9008:9000"
    command: server http://minio{1...8}/data
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3      
    deploy:
      replicas: 1
      restart_policy:
        delay: 10s
      placement:
        constraints:
          - node.labels.minio8==true

secrets:
  secret_key:
    external: true
  access_key:
    external: true


Solution 1:[1]

MinIO does not perform any rebalancing. If you are running out of disk space, it supports expanding a cluster by adding a new pool of disks: https://docs.min.io/docs/distributed-minio-quickstart-guide.html.
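
For illustration, expanding this particular deployment would mean adding a second pool of servers to the startup command on every node and restarting them all with the new command; the minio{9...16} hostnames here are hypothetical new nodes:

command: server http://minio{1...8}/data http://minio{9...16}/data

New objects are then written across the pools weighted by available free space, so the full drive is relieved over time without existing data being moved.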

However, your situation looks like an anomaly; such a skew should not happen. It is likely a bug in the application (perhaps creating too many versions of the same object) or possibly a bug in MinIO. Without more information, it's hard to determine the cause.
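
If version buildup is the suspect, one way to check (assuming versioning is enabled on the bucket; mybucket is a placeholder name) is to list all object versions with the mc client and compare the count against what the application expects:

mc ls --versions myminio/mybucket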

You can find us at https://slack.min.io/ 24/7/365. For commercial questions, please reach out to us at [email protected] or via the Ask an Expert chat at https://min.io/pricing?action=talk-to-us.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1: donatello