Out of memory: kill process

Does anyone have any idea why these logs show the errors "Out of memory: kill process ..." and "nginx invoked oom-killer"?

Lately, our CMS has been going down and we have to manually restart the server in AWS; we're not sure what is causing this behavior.

Here are the exact log lines that repeated 33 times while the server was down:

Out of memory: kill process 15654 (ruby) score 338490 or a child
Killed process 15654 (ruby) vsz:1353960kB, anon-rss:210704kB, file-rss:0kB
nginx invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0
nginx cpuset=/ mems_allowed=0
Pid: 8729, comm: nginx Tainted: G        W   2.6.35.14-97.44.amzn1.x86_64 #1
Call Trace:
[<ffffffff8108e638>] ? cpuset_print_task_mems_allowed+0x98/0xa0
[<ffffffff810bb157>] dump_header.clone.1+0x77/0x1a0
[<ffffffff81318d49>] ? _raw_spin_unlock_irqrestore+0x19/0x20
[<ffffffff811ab3af>] ? ___ratelimit+0x9f/0x120
[<ffffffff810bb2f6>] oom_kill_process.clone.0+0x76/0x140
[<ffffffff810bb4d8>] __out_of_memory+0x118/0x190
[<ffffffff810bb5d2>] out_of_memory+0x82/0x1c0
[<ffffffff810beb89>] __alloc_pages_nodemask+0x689/0x6a0
[<ffffffff810e7864>] alloc_pages_current+0x94/0xf0
[<ffffffff810b87ef>] __page_cache_alloc+0x7f/0x90
[<ffffffff810c15e0>] __do_page_cache_readahead+0xc0/0x200
[<ffffffff810c173c>] ra_submit+0x1c/0x20
[<ffffffff810b9f63>] filemap_fault+0x3e3/0x430
[<ffffffff810d023f>] __do_fault+0x4f/0x4b0
[<ffffffff810d2774>] handle_mm_fault+0x1b4/0xb40
[<ffffffff81007682>] ? check_events+0x12/0x20
[<ffffffff81006f1d>] ? xen_force_evtchn_callback+0xd/0x10
[<ffffffff81007682>] ? check_events+0x12/0x20
[<ffffffff8131c752>] do_page_fault+0x112/0x310
[<ffffffff813194b5>] page_fault+0x25/0x30
Mem-Info:
Node 0 DMA per-cpu:
CPU    0: hi:    0, btch:   1 usd:   0
CPU    1: hi:    0, btch:   1 usd:   0
CPU    2: hi:    0, btch:   1 usd:   0
CPU    3: hi:    0, btch:   1 usd:   0
Node 0 DMA32 per-cpu:
CPU    0: hi:  186, btch:  31 usd:  35
CPU    1: hi:  186, btch:  31 usd:   0
CPU    2: hi:  186, btch:  31 usd:  30
CPU    3: hi:  186, btch:  31 usd:   0
Node 0 Normal per-cpu:
CPU    0: hi:  186, btch:  31 usd: 202
CPU    1: hi:  186, btch:  31 usd:  30
CPU    2: hi:  186, btch:  31 usd:  59
CPU    3: hi:  186, btch:  31 usd: 140
active_anon:3438873 inactive_anon:284496 isolated_anon:0
active_file:0 inactive_file:62 isolated_file:64
unevictable:0 dirty:0 writeback:0 unstable:0
free:16763 slab_reclaimable:1340 slab_unreclaimable:2956
mapped:29 shmem:12 pagetables:11130 bounce:0
Node 0 DMA free:7892kB min:16kB low:20kB high:24kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15772kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
lowmem_reserve[]: 0 4024 14836 14836
Node 0 DMA32 free:47464kB min:4224kB low:5280kB high:6336kB active_anon:3848564kB inactive_anon:147080kB active_file:0kB inactive_file:8kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:4120800kB mlocked:0kB dirty:0kB writeback:0kB mapped:60kB shmem:0kB slab_reclaimable:28kB slab_unreclaimable:268kB kernel_stack:48kB pagetables:8604kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:82 all_unreclaimable? yes
lowmem_reserve[]: 0 0 10811 10811
Node 0 Normal free:11184kB min:11352kB low:14188kB high:17028kB active_anon:9906928kB inactive_anon:990904kB active_file:0kB inactive_file:1116kB unevictable:0kB isolated(anon):0kB isolated(file):256kB present:11071436kB mlocked:0kB dirty:0kB writeback:0kB mapped:56kB shmem:48kB slab_reclaimable:5332kB slab_unreclaimable:11556kB kernel_stack:2400kB pagetables:35916kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:32 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
Node 0 DMA: 3*4kB 1*8kB 2*16kB 1*32kB 0*64kB 3*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 1*4096kB = 7892kB
Node 0 DMA32: 62*4kB 104*8kB 53*16kB 29*32kB 27*64kB 7*128kB 2*256kB 3*512kB 1*1024kB 3*2048kB 8*4096kB = 47464kB
Node 0 Normal: 963*4kB 0*8kB 5*16kB 4*32kB 7*64kB 5*128kB 6*256kB 1*512kB 2*1024kB 1*2048kB 0*4096kB = 11292kB
318 total pagecache pages
0 pages in swap cache
Swap cache stats: add 0, delete 0, find 0/0
Free swap  = 0kB
Total swap = 0kB
3854801 pages RAM
86406 pages reserved
14574 pages shared
3738264 pages non-shared
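Before restarting blindly, it can help to confirm what the OOM killer targeted and how memory looks on the box. A minimal diagnostic sketch using standard Linux tools (log locations and `ps` flags assume a typical GNU/Linux setup such as Amazon Linux):

```shell
# Show recent OOM-killer events from the kernel ring buffer
dmesg | grep -iE "out of memory|oom-killer" | tail -n 20

# Current RAM and swap usage in MiB; an all-zero Swap row means no swap is configured
free -m

# Top 10 processes by resident memory (RSS), to spot the real consumer
ps aux --sort=-rss | head -n 11
```

Note that in the trace above nginx merely *invoked* the OOM killer while faulting in a page; the process actually killed was ruby (PID 15654), so the CMS's Ruby workers are the likely memory consumer.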


Solution 1:[1]

It's happening because your server is running out of memory: the kernel's OOM killer chose the Ruby process (PID 15654) to terminate after nginx triggered an allocation that could not be satisfied. To solve this problem you have two options.

  1. Upgrade your server's RAM or add swap space (upgrading physical RAM is recommended over relying on swap). Note that your log shows "Total swap = 0kB", so no swap is configured at all.

  2. Limit nginx's memory use.

To limit the memory nginx can consume for request bodies, open the /etc/nginx/nginx.conf file and add client_max_body_size <your_value_here> under the http configuration block. (Strictly speaking, client_max_body_size caps the maximum allowed size of a client request body, which bounds the memory and disk used for large uploads rather than nginx's RAM overall.) After editing, validate with nginx -t before reloading. For example:

worker_processes 1;
http {
    client_max_body_size 10M;
    ...
}

Note: use k for kilobytes and M for megabytes (e.g. 10M).
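For option 1, since the log shows "Total swap = 0kB", a swap file can be added as a stopgap until the instance is resized. A sketch using standard Linux tools (the 4G size and the /swapfile path are assumptions; size it for your workload):

```shell
sudo fallocate -l 4G /swapfile    # allocate a 4 GiB file (use dd on filesystems without fallocate support)
sudo chmod 600 /swapfile          # restrict permissions, required by swapon
sudo mkswap /swapfile             # write swap metadata to the file
sudo swapon /swapfile             # enable it immediately
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # persist across reboots
```

Swap will be slow under pressure, but it keeps the OOM killer from firing; treat it as a bridge to a larger instance, not a fix.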

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1: udoyhasan