Golang API giving higher response time with increasing number of concurrent users

I am having some problems with concurrent HTTP connections in Go. Kindly read the whole question; since the actual code is quite long, I am using pseudocode.

In short, I have to build a single API that internally calls 5 other APIs, unifies their responses, and returns them as one response. I use goroutines (with a timeout) to call those 5 internal APIs, and channels to know when every goroutine has finished; then I unify the responses and return the result.

Things go fine in local testing: my response time is around 300 ms, which is pretty good.

The problem arises when I run a Locust load test with 200 users: the response time goes as high as 7-8 seconds. I suspect it has to do with the HTTP client waiting for resources, since we are running a high number of goroutines.

For example, one request to my API spins up 5 goroutines, so if each of the 200 users sends roughly 5 requests per second, that is on the order of 200 × 5 × 5 = 5,000 new goroutines per second. Again, this is only my assumption.

P.S. Normally the API I am building has a good response time; I am using caching and similar optimizations, and any response over 400 ms should not happen.

So can anyone please tell me how I can tackle this problem of increasing response time as the number of concurrent users increases?


Locust test report (screenshot in original post)


pseudocode

simple route

group.POST("/test", controller.testHandler)

controller

type Worker struct {
    NumWorker int
    Data      chan structures.Placement
}

e := Worker{
    NumWorker: 5, // number of worker goroutines
    Data:      make(chan structures.Placement, 5), // buffer size
}

// spawn the worker goroutines along with the WaitGroup
var wg sync.WaitGroup
for i := 0; i < e.NumWorker; i++ {
    wg.Add(1)
    go ad.GetResponses(params, e.Data, &wg) // makes the HTTP call and sends the response on the channel
}

for v := range e.Data { // the channel must be closed once all workers are done
    // unify all the responses into the final response
    switch v.Type {
    case A:
        finalResponse.A = v
    case B:
        finalResponse.B = v
    }
}

return finalResponse
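
To make the pattern clearer, here is a minimal, self-contained sketch of what I am doing (the Placement type and fetch function are simplified placeholders I wrote for this question, not my real code). Note that something has to close the results channel after all workers finish, otherwise the range loop above would block forever; in my real handler each worker also applies a per-request timeout, but the shape of the fan-out/fan-in is the same.

package main

import (
    "fmt"
    "sync"
    "time"
)

// Placement is a simplified stand-in for structures.Placement.
type Placement struct {
    Type string
    Body string
}

// fetch is a placeholder for the real HTTP call made by each worker.
func fetch(kind string) Placement {
    time.Sleep(100 * time.Millisecond) // simulate network latency
    return Placement{Type: kind, Body: "response for " + kind}
}

func main() {
    kinds := []string{"A", "B", "C", "D", "E"}
    results := make(chan Placement, len(kinds)) // buffered so workers never block on send

    var wg sync.WaitGroup
    for _, k := range kinds {
        wg.Add(1)
        go func(kind string) {
            defer wg.Done()
            results <- fetch(kind)
        }(k)
    }

    // Close the channel once every worker is done so the range loop
    // below terminates instead of blocking forever.
    go func() {
        wg.Wait()
        close(results)
    }()

    final := make(map[string]string)
    for v := range results {
        final[v.Type] = v.Body // unify responses by type
    }
    fmt.Println(final)
}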



Request HTTP client

// I am using a global HTTP client with a custom transport so that
// connections are reused effectively.
var client *http.Client

func init() {
    tr := &http.Transport{
        MaxIdleConnsPerHost: 1024,            // overwritten to 100 below
        TLSHandshakeTimeout: 0 * time.Second, // 0 disables the TLS handshake timeout
    }

    tr.MaxIdleConns = 100
    tr.MaxConnsPerHost = 100
    tr.MaxIdleConnsPerHost = 100

    client = &http.Client{Transport: tr, Timeout: 10 * time.Second}
}
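
For reference, this is roughly how I understand those transport knobs and why I suspect them; it is only a sketch of what I might try next, and the values are illustrative, not what I run today.

package httpclient

import (
    "net/http"
    "time"
)

// newClient shows how I read the transport settings. With MaxConnsPerHost
// set to 100, at most 100 connections to a single backend host can exist
// at once, so roughly 100 requests per host can be in flight; anything
// beyond that queues inside the transport. At 200 Locust users x 5
// internal calls each, that queue can grow quickly, which would show up
// as added latency.
func newClient() *http.Client {
    tr := &http.Transport{
        MaxIdleConns:        100,              // idle (keep-alive) conns kept across all hosts
        MaxIdleConnsPerHost: 100,              // idle conns kept per backend host
        MaxConnsPerHost:     0,                // 0 = no per-host cap on total connections
        TLSHandshakeTimeout: 10 * time.Second, // 0 would disable the handshake timeout entirely
    }
    return &http.Client{Transport: tr, Timeout: 10 * time.Second}
}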



// Makes one HTTP call and sends the result on the channel.
func GetResponses(params Params, out chan structures.Placement, wg *sync.WaitGroup) {
    defer wg.Done()            // signal completion so the caller knows this worker finished
    res, err := client.Do(req) // req is built from params (error handling omitted here)
    if err != nil {
        return
    }
    out <- decode(res) // pseudocode: decode the response into a structures.Placement
}
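
One thing the pseudocode hides: each worker has to read and close the response body. If bodies are not drained and closed, the transport cannot return connections to its idle pool and keeps dialing new ones, which is exactly the kind of resource contention I am suspecting. A minimal sketch of how a worker would handle the body (doFetch and the URL are placeholders I made up for this example):

package main

import (
    "fmt"
    "io"
    "net/http"
    "time"
)

var client = &http.Client{Timeout: 10 * time.Second}

// doFetch performs one call and returns the raw body, making sure the
// response body is fully read and closed so the underlying connection
// goes back into the transport's idle pool.
func doFetch(url string) ([]byte, error) {
    resp, err := client.Get(url)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close() // always close, even on non-200 responses

    body, err := io.ReadAll(resp.Body) // reading to EOF lets the connection be reused
    if err != nil {
        return nil, err
    }
    return body, nil
}

func main() {
    body, err := doFetch("https://example.com")
    if err != nil {
        fmt.Println("fetch failed:", err)
        return
    }
    fmt.Println("got", len(body), "bytes")
}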



Solution 1:[1]

I did some debugging and span monitoring, and it turned out Redis was the culprit. See https://stackoverflow.com/a/70902382/9928176 to get an idea of how I solved it.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1: iron_man83