Why is my Docker container consuming so much memory?

I am new to Docker. I have a Node.js (Sails.js) application that I have deployed using Docker. There is only one Docker container running on my Ubuntu machine.

When I monitor the memory usage of my Docker container using the "docker stats" command, I get the stats below (as shown in the image):

docker stats

My question is: why is this single Docker container using so much memory, ~207 MiB? If I increase the number of containers running per host in the future, will memory consumption grow in multiples of this? That doesn't seem feasible if I want to run 100 containers of the same app on my machine. Is there any way to optimize the memory consumption of Docker containers?

(When I run the same application without Docker (sails lift / node app.js), it consumes only 80 MB of memory.)



Solution 1:[1]

I know this question is old, but it is worth adding that if you are using Docker for Mac, you can navigate to Docker > Preferences > Resources > Advanced, where several options control resource settings, such as:

  • Number of CPUs
  • Memory
  • Swap
  • Disk Image Size

and various other settings. I've noticed that if I allocate 2 GB of memory, Docker Desktop will use the entire 2 GB for as long as it is running.

Solution 2:[2]

By default, a Docker container may consume as much of the host's hardware resources, such as CPU and RAM, as it wants. If you are running multiple containers on the same host, you should limit how much memory each one can consume, for example: -m "300M" --memory-swap "1G"

The average overhead of each Docker container is about 12 MB, and of the Docker daemon about 130 MB.
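A minimal sketch of the flags above in a full command (the image name "my-sails-app" and container name "sails-limited" are placeholders, and this assumes a running Docker daemon):

```shell
# Cap the container at 300 MB of RAM and 1 GB of RAM + swap combined.
# "my-sails-app" is a hypothetical image name for illustration.
docker run -d --name sails-limited -m 300M --memory-swap 1G my-sails-app

# Check that the limit appears in the MEM USAGE / LIMIT column.
docker stats --no-stream sails-limited
```

With a limit in place, the kernel will reclaim page cache from the container before letting it exceed 300 MB, and the OOM killer steps in only if the working set genuinely cannot fit.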

Solution 3:[3]

Why a container eats a lot of memory depends on many factors.

I don't think your case indicates that Node.js (Sails.js) uses more memory inside a container and less outside of Docker; it probably uses almost the same amount either way.

A container also includes its base image (the parent image defined with FROM), sometimes with additional layers that bring their own resources and consume some memory.

You can restrict the CPU or memory available to a Docker container when you start it:

docker run --memory=1g --cpu-shares=512 image-name

Solution 4:[4]

How do you calculate the memory usage when you don't use Docker? Is your application doing file I/O? If you want to scale the number of containers (docker-compose scale) to a hundred, you had better define resource limits per container. I did this by adding a line in my docker-compose.yml file under the service I wanted to limit:

mem_limit: 32m

where m stands for megabytes.
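In context, the line sits under a service definition. A sketch of a minimal docker-compose.yml, assuming Compose file format v2 (where mem_limit is a top-level service key) and a hypothetical service and image name:

```yaml
# Hypothetical docker-compose.yml fragment; "app" and "my-sails-app"
# are placeholder names. mem_limit is honored by Compose file format v2.
version: "2"
services:
  app:
    image: my-sails-app
    mem_limit: 32m
```

Note that in Compose file format v3 this key moved under deploy.resources.limits, so check which format version your file declares before adding it.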

Just looking at docker stats, you needn't worry, since the figure includes unused memory (such as page cache) that can be reclaimed. If you set a reasonable memory limit, you will see that the container reclaims memory and keeps running. If you suspect a leak, check the detailed memory stats for your container's cgroup, as described here: https://docs.docker.com/engine/admin/runmetrics/
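A sketch of how to read those cgroup stats directly, assuming the cgroup v1 hierarchy described in the runmetrics documentation (the mount path can differ by distribution, and the container name "sails-limited" is a placeholder):

```shell
# Resolve the full container ID; "sails-limited" is a hypothetical name.
CID=$(docker inspect --format '{{.Id}}' sails-limited)

# Dump the detailed memory counters (cache, rss, swap, etc.) for that
# container's memory cgroup. The rss line is the figure to compare
# against your non-Docker baseline, since cache is reclaimable.
cat /sys/fs/cgroup/memory/docker/"$CID"/memory.stat
```

Comparing the rss counter here with the ~80 MB you see outside Docker is a fairer test than the headline number in docker stats.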

Please see my complete answer to my own issue at https://stackoverflow.com/a/41687155/6919159.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 DV82XL
Solution 2 Alex Muravyov
Solution 3 VladoDemcak
Solution 4 Community