Unix sockets slower than TCP when connected to Redis

I'm developing a high-performance web server that should handle ~2k simultaneous connections and 40k QPS, with a response time under 7 ms.

It queries a Redis server (running on the same host) and returns the response to the client. During testing, I observed that the implementation using TCP sockets (SOCK_STREAM) performs far better than the one using Unix domain sockets. With ~1500 connections, TCP stays at about 8 ms while Unix sockets climb to ~50 ms.

The server is written in C and based on a fixed pool of POSIX threads; I use a blocking connection to Redis. My OS is CentOS 6, and tests were performed using JMeter, wrk, and ab. For the connection to Redis I use the hiredis library, which provides both ways of connecting.
As far as I know, a Unix socket should be at least as fast as TCP.

Does anybody have an idea what could cause such behaviour?
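For reference, the two connection styles hiredis offers look like this (a minimal sketch; the address, port, and `/tmp/redis.sock` path are assumptions taken from common Redis defaults, and most error handling is omitted):

```c
/* Minimal sketch of the two hiredis connection styles.
 * Assumed defaults: Redis on 127.0.0.1:6379 and a /tmp/redis.sock
 * unixsocket path; adjust both to match your redis.conf. */
#include <stdio.h>
#include <stdlib.h>
#include <hiredis/hiredis.h>

static redisContext *connect_redis(int use_unix_socket) {
    redisContext *c = use_unix_socket
        ? redisConnectUnix("/tmp/redis.sock")   /* AF_UNIX stream socket */
        : redisConnect("127.0.0.1", 6379);      /* loopback TCP socket   */
    if (c == NULL || c->err) {
        fprintf(stderr, "connect failed: %s\n", c ? c->errstr : "alloc");
        exit(EXIT_FAILURE);
    }
    return c;
}

int main(void) {
    redisContext *c = connect_redis(1);
    redisReply *r = redisCommand(c, "PING");    /* blocking round trip */
    if (r) {
        printf("reply: %s\n", r->str);
        freeReplyObject(r);
    }
    redisFree(c);
    return 0;
}
```

Everything else (pipelining, the blocking API, reply handling) is identical between the two, so switching transports is a one-line change.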



Solution 1:[1]

Although this is an old question, I would like to make an addition. Other answers talk about 500k or even 1.7M responses/s. That may be achievable with Redis itself, but the question was about the whole chain:

Client --#Network#--> Webserver --#Something#--> Redis

The web server functions as a sort of HTTP proxy in front of Redis, I assume.

This means that your number of requests is also limited by how many requests the web server can serve. There is a limitation that is often forgotten: if you have a 100 Mbit connection, you have 100,000,000 bits per second at your disposal, but by default in frames of 1518 bits (including the required gap after the frame). This means roughly 65k network frames per second, assuming all your responses are smaller than the data part of such a frame and none have to be resent due to CRC errors or lost packets.

Also, if persistent connections are not used, you need a TCP handshake for each connection. This adds three packets per request (two received, one sent). So in that unoptimised situation, you are left with about 21k obtainable requests per second to your web server (or 210k for a 'perfect' gigabit connection), if your response fits in one packet of 175 bytes.

So:

  • Persistent connections only require a bit of memory, so enable them. They can quadruple your performance. (best case)
  • Reduce your response size by using gzip/deflate if needed, so responses fit in as few packets as possible. (Each packet lost is a possible response lost.)
  • Reduce your response size by stripping unneeded 'garbage' like debug data or long XML tags.
  • HTTPS connections will add a huge (in comparison) overhead.
  • Add network cards and trunk them.
  • If responses are always smaller than 175 bytes, use a dedicated network card for this service and reduce the network frame size to increase the packets sent each second.
  • Don't let the server do other things (like serving normal web pages).
  • ...

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 phulstaert