netstat displays a huge number of connections in the "SYN_RCVD" state

netstat displays a huge number of connections in the "SYN_RCVD" state on Solaris 11; the machine is an nginx reverse proxy.

# netstat -na | awk '/^20/ {++S[$NF]} END {for(a in S) print a, S[a]}'
TIME_WAIT 515
ESTABLISHED 427
SYN_SENT 14
LAST_ACK 223
Connected 9488
FIN_WAIT_1 37
FIN_WAIT_2 167
CLOSING 48
CLOSE_WAIT 11
Idle 5
SYN_RCVD 4437
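For reference, the one-liner above keys on the last field of each netstat line (the TCP state) for sockets whose local address begins with `20` (presumably this host's address prefix). A self-contained sketch with canned sample lines (the addresses below are invented, not from the real server) reproduces the tally:

```shell
# Count occurrences of the last field (the TCP state) per line, as the
# netstat one-liner does; /^20/ keeps only lines for local addresses
# starting with "20" (the sample addresses here are made up).
tally_states() {
  awk '/^20/ {++S[$NF]} END {for (a in S) print a, S[a]}'
}

printf '%s\n' \
  '20.1.2.3.80   198.51.100.7.40001  0 0 400000 0 SYN_RCVD' \
  '20.1.2.3.80   198.51.100.8.40002  0 0 400000 0 SYN_RCVD' \
  '20.1.2.3.80   198.51.100.9.40003  0 0 400000 0 ESTABLISHED' \
  | tally_states | sort
# -> ESTABLISHED 1
#    SYN_RCVD 2
```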

The server tcp/ip parameters:

# ndd -get /dev/tcp tcp_time_wait_interval
60000
# ndd -get /dev/tcp tcp_keepalive_interval
15000
# ndd -get /dev/tcp tcp_fin_wait_2_flush_interval
67500
# ndd -get /dev/tcp tcp_conn_req_max_q
16384
# ndd -get /dev/tcp tcp_conn_req_max_q0
16384
# ndd -get /dev/tcp tcp_xmit_hiwat
400000
# ndd -get /dev/tcp tcp_recv_hiwat
400000
# ndd -get /dev/tcp tcp_cwnd_max
2097152
# ndd -get /dev/tcp tcp_ip_abort_interval
20000
# ndd -get /dev/tcp tcp_rexmit_interval_initial
4000
# ndd -get /dev/tcp tcp_rexmit_interval_max
10000
# ndd -get /dev/tcp tcp_rexmit_interval_min
3000
# ndd -get /dev/tcp tcp_max_buf
4194304
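One quick sanity check, sketched with the numbers already shown above and assuming a single nginx listen socket: the 4437 half-open connections are well under the tcp_conn_req_max_q0 limit of 16384, so the SYN queue itself is not overflowing, which points toward connections waiting to be accepted rather than a queue-size problem.

```shell
# Values copied from the netstat and ndd output above.
syn_rcvd=4437
q0_max=16384     # tcp_conn_req_max_q0
pct=$(( syn_rcvd * 100 / q0_max ))
echo "half-open queue usage: ${pct}%"   # -> half-open queue usage: 27%
```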

How should I tune this nginx reverse proxy? Thanks.



Solution 1:[1]

 The count of connection requests in SYN_RCVD can include
 "fully-established" connections - those which have finished
 the 3-way handshake and are waiting to be picked up by an
 accept() call.

You should consider tuning the nginx accept_mutex directive, and perhaps try a different nginx event-processing method, just to see whether that is the real cause.
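A hedged sketch of the nginx side of that advice (directive values are illustrative starting points, not tuned recommendations, and the upstream name is hypothetical): toggle accept_mutex, pin the event method, and make the listen backlog match the kernel queue limits shown above.

```nginx
events {
    use eventport;      # Solaris event-port method; compare with /dev/poll
    accept_mutex off;   # let every worker call accept(); try both on and off
    worker_connections 8192;
}

http {
    server {
        # backlog should not exceed tcp_conn_req_max_q (16384 here)
        listen 80 backlog=16384;

        location / {
            proxy_pass http://backend;   # hypothetical upstream
        }
    }
}
```

With accept_mutex off, all workers are woken to accept new connections, which can drain a backlog of handshake-complete connections faster at the cost of some thundering-herd overhead.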

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
