The node was low on resource: ephemeral-storage
All the pods on a node are in the Evicted state with the message "The node was low on resource: ephemeral-storage."
portal-59978bff4d-2qkgf 0/1 Evicted 0 14m
release-mgmt-74995bc7dd-nzlgq 0/1 Evicted 0 8m20s
service-orchestration-79f8dc7dc-kx6g4 0/1 Evicted 0 7m31s
test-mgmt-7f977567d6-zl7cc 0/1 Evicted 0 8m17s
Does anyone know a quick fix for this?
Solution 1:[1]
Pods that use emptyDir volumes without storage quotas can fill up this storage; when that happens, the following error appears in the kubelet logs:
    eviction manager: attempting to reclaim ephemeral-storage
Set limits.ephemeral-storage and requests.ephemeral-storage quotas to bound this usage; otherwise any container can write an unlimited amount of data to its node's filesystem.
A sample resource quota definition:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    pods: "4"
    requests.cpu: "1"
    requests.memory: 1Gi
    requests.ephemeral-storage: 2Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    limits.ephemeral-storage: 4Gi
Another cause of this issue can be log files eating disk space; check this question.
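Besides a namespace-wide ResourceQuota, a per-volume cap can also help: an emptyDir volume accepts a sizeLimit field, and the kubelet evicts the pod if the volume grows past it. A minimal sketch (the pod name and image are placeholders, not from the original question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo          # hypothetical name
spec:
  containers:
  - name: worker
    image: busybox:1.36       # placeholder image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir:
      sizeLimit: 1Gi          # pod is evicted if the volume exceeds this size
```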
Solution 2:[2]
You can increase the size of the attached EBS volume and restart the EC2 instance for the change to take effect.
Solution 3:[3]
This issue is caused by a lack of temporary storage while the application is running, for example when it processes jobs and stores temporary or cache data. To diagnose it, exec into the pod while the process is running and check which mounted filesystem is consuming the available storage with df -h, observing the available capacity. To resolve it, create a PVC (backed by hostPath or another volume type) with a larger size and mount it into the pod directory that stores the temporary data.
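The PVC-and-mount approach described above can be sketched as follows; the claim name, size, image, and mount path are illustrative assumptions, and the storage class depends on your cluster:

```yaml
# PVC sized larger than the pod's temporary-data needs
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tmp-data              # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-tmp          # hypothetical name
spec:
  containers:
  - name: app
    image: my-app:latest      # placeholder image
    volumeMounts:
    - name: tmp-data
      mountPath: /var/tmp/app # directory the app writes temporary data to
  volumes:
  - name: tmp-data
    persistentVolumeClaim:
      claimName: tmp-data
```

With this in place the temporary data lands on the PVC's backing storage instead of the node's ephemeral filesystem, so it no longer counts against the eviction threshold.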
Solution 4:[4]
Please consider the following factors:
- The application you deploy via Kubernetes should have memory and CPU limits and requests set in its manifest file.
- Your cluster nodes should be configured according to your application's requirements.
- Increase the number of nodes if all of them are heavily used by your apps.
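A minimal sketch of the first point, with CPU and memory requests and limits set on the container; the Deployment name, image, and values are placeholders chosen for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portal                # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels: {app: portal}
  template:
    metadata:
      labels: {app: portal}
    spec:
      containers:
      - name: portal
        image: my-registry/portal:1.0   # placeholder image
        resources:
          requests:           # what the scheduler reserves on a node
            cpu: 250m
            memory: 256Mi
          limits:             # hard caps enforced at runtime
            cpu: "1"
            memory: 512Mi
```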
Solution 5:[5]
If you don't set limits.ephemeral-storage and requests.ephemeral-storage, by default pods are allowed to use all of the node's storage space.
So, set limits.ephemeral-storage and requests.ephemeral-storage:
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"
Or, configure the Docker logging driver to limit the amount of stored logs in the file /etc/docker/daemon.json (by default this file doesn't exist; you must create it):
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "2"
  }
}
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | |
| Solution 2 | Hemal Ekanayake |
| Solution 3 | Vinh Trieu |
| Solution 4 | Harsh Manvar |
| Solution 5 | quoc9x |
