.glusterfs folder is too big whereas my regular data is much smaller
I'm using GlusterFS 7.8 across 3 nodes. We recently removed a bunch of data, approximately 170 GB, from the GlusterFS volume. Our regular data now sits at 1.5 GiB, but a folder named .glusterfs inside the volume still holds 177.9 GiB, and we are running out of disk space. What is this folder and how can I clean it?
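From what I've read, .glusterfs is an internal directory that Gluster keeps on every brick: each regular file gets an extra hard link there, named by its GFID (the two-hex-digit folders correspond to the first byte of that GFID), so it should normally consume almost no additional space. My guess is that the deleted data left orphaned GFID links behind. A rough check I'm considering, assuming GNU find/du and the brick path shown in the status output below (a healthy GFID link shares its inode with a real file, so its link count is at least 2):

# Sum the size of .glusterfs entries whose link count is 1, i.e. GFID
# hard links whose data file is gone (approximate: this also matches a
# few tiny internal files such as health_check)
find /mnt/kubernetes/vlys_vol/.glusterfs -type f -links 1 -print0 | du -ch --files0-from=- | tail -1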
$ gluster volume status vlys_vol:
Status of volume: vlys_vol
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.1.42:/mnt/kubernetes/vlys_vol  49152     0          Y       1919
Brick 192.168.1.10:/mnt/kubernetes/vlys_vol  49152     0          Y       6702
Brick 192.168.1.37:/mnt/kubernetes/vlys_vol  49152     0          Y       1054
Self-heal Daemon on localhost                N/A       N/A        Y       1126
Self-heal Daemon on 192.168.1.10             N/A       N/A        Y       6714
Self-heal Daemon on ubuntu-vm1               N/A       N/A        Y       2021

Task Status of Volume vlys_vol
------------------------------------------------------------------------------
There are no active volume tasks
$ gluster volume status vlys_vol detail:
Status of volume: vlys_vol
------------------------------------------------------------------------------
Brick : Brick 192.168.1.42:/mnt/kubernetes/vlys_vol
TCP Port : 49152
RDMA Port : 0
Online : Y
Pid : 1919
File System : ext4
Device : /dev/vda2
Mount Options : rw,relatime,data=ordered
Inode Size : 256
Disk Space Free : 193.1GB
Total Disk Space : 617.6GB
Inode Count : 41156608
Free Inodes : 30755114
------------------------------------------------------------------------------
Brick : Brick 192.168.1.10:/mnt/kubernetes/vlys_vol
TCP Port : 49152
RDMA Port : 0
Online : Y
Pid : 6702
File System : ext4
Device : /dev/nvme0n1p2
Mount Options : rw,relatime,errors=remount-ro
Inode Size : 256
Disk Space Free : 220.7GB
Total Disk Space : 937.4GB
Inode Count : 62480384
Free Inodes : 58114459
------------------------------------------------------------------------------
Brick : Brick 192.168.1.37:/mnt/kubernetes/vlys_vol
TCP Port : 49152
RDMA Port : 0
Online : Y
Pid : 1054
File System : ext4
Device : /dev/vda2
Mount Options : rw,relatime
Inode Size : 256
Disk Space Free : 1.5TB
Total Disk Space : 2.0TB
Inode Count : 134217728
Free Inodes : 109614197
$ gluster peer status:
Number of Peers: 2
Hostname: ubuntu-vm1
Uuid: a4bc6a92-0505-4cbc-8811-e3c69714519b
State: Peer in Cluster (Connected)
Hostname: 192.168.1.37
Uuid: 01568855-feca-4e30-8ef3-8d626e0c8e6d
State: Peer in Cluster (Connected)
Here are the ncdu results:
--- /mnt/kubernetes/vlys_vol ------------------------------------------------------------
177,9 GiB [##########] /.glusterfs
1,5 GiB [ ] /vlys-test
706,8 MiB [ ] /vlys
40,0 KiB [ ] /data-redis-cluster-1
32,0 KiB [ ] /data-redis-cluster-4
32,0 KiB [ ] /data-redis-cluster-3
32,0 KiB [ ] /data-redis-cluster-2
32,0 KiB [ ] /data-redis-cluster-0
32,0 KiB [ ] /data-redis-cluster-5
24,0 KiB [ ] /vlys-test-sts-default
24,0 KiB [ ] /vlys-test-sts
e 8,0 KiB [ ] /data-redis-cluster-test-5
e 8,0 KiB [ ] /data-redis-cluster-test-4
e 8,0 KiB [ ] /data-redis-cluster-test-3
e 8,0 KiB [ ] /data-redis-cluster-test-2
e 8,0 KiB [ ] /data-redis-cluster-test-1
e 8,0 KiB [ ] /data-redis-cluster-test-0
--- /mnt/kubernetes/vlys_vol/.glusterfs -------------------------------------------------
/..
6,2 GiB [##########] /57
5,3 GiB [######## ] /22
4,6 GiB [####### ] /cd
4,5 GiB [####### ] /97
4,5 GiB [####### ] /e8
4,3 GiB [###### ] /a2
4,3 GiB [###### ] /c1
4,0 GiB [###### ] /88
3,9 GiB [###### ] /07
3,8 GiB [###### ] /66
3,7 GiB [##### ] /48
3,6 GiB [##### ] /8e
3,4 GiB [##### ] /15
3,4 GiB [##### ] /ee
3,3 GiB [##### ] /26
2,9 GiB [#### ] /aa
2,9 GiB [#### ] /52
2,9 GiB [#### ] /f3
(many more folders like this)
2,9 MiB [ ] /09
2,9 MiB [ ] /71
2,9 MiB [ ] /a4
2,8 MiB [ ] /4a
32,0 KiB [ ] /indices
e 20,0 KiB [ ] /unlink
12,0 KiB [ ] /changelogs
e 4,0 KiB [ ] /landfill
4,0 KiB [ ] health_check
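The biggest two-hex-digit folders (57, 22, cd, ...) roughly line up with the ~170 GB we deleted, so I suspect they hold orphaned GFID links. To verify, I traced the largest entry under one of them back to its real path via the shared inode (a sketch with GNU find; the 57 folder and the big variable are just my example):

# Largest GFID entry under .glusterfs/57 on this brick
big=$(find /mnt/kubernetes/vlys_vol/.glusterfs/57 -type f -printf '%s %p\n' | sort -rn | head -1 | cut -d' ' -f2-)
# Look for a real file on the brick sharing the same inode; no output
# outside .glusterfs would mean the entry is orphaned
find /mnt/kubernetes/vlys_vol -path '*/.glusterfs' -prune -o -samefile "$big" -print

If these really are orphaned links, is it safe to simply delete the link-count-1 files under .glusterfs, or is there a supported way (self-heal, rebalance, some gluster command) to make Gluster clean them up?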