Measuring performance degradation between runs

I wondered if there is a standard way in Locust to detect and warn about performance degradation between runs. For example:

Suppose I have an endpoint GET /helloworld that returns JSON like:

{"message": "hello world"}

On run 1 of the performance test suite the response time was 1 second, but running the Locust command now, it returns in 3 seconds. In my contrived example, let's say 3 seconds is an 'acceptable' response time, so we don't want to fail the test. However, the performance of the endpoint has degraded and we want to warn about this. What is the best way to achieve this in Locust?

I thought about saving the results to CSV after each run and comparing them with the CSV produced by the current/last run, but is there an easier way to achieve this? Thanks!
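For reference, the CSV-comparison idea can be scripted outside Locust. The following is only a sketch, not a built-in feature: it assumes two stats files produced with locust --csv <prefix> (e.g. previous_stats.csv and current_stats.csv), that the column names ("Type", "Name", "Average Response Time") match a recent Locust release, and the 20% threshold is arbitrary.

import csv
import sys

DEGRADATION_THRESHOLD = 0.20  # warn if an endpoint got more than 20% slower


def load_avg_times(path):
    # Map "METHOD name" -> average response time (ms) from a Locust stats CSV.
    times = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            times[f'{row["Type"]} {row["Name"]}'] = float(row["Average Response Time"])
    return times


def compare(baseline_csv, current_csv):
    baseline = load_avg_times(baseline_csv)
    current = load_avg_times(current_csv)
    for name, current_avg in current.items():
        old_avg = baseline.get(name)
        if not old_avg:
            continue  # new endpoint, or nothing to compare against
        change = (current_avg - old_avg) / old_avg
        if change > DEGRADATION_THRESHOLD:
            print(f"WARNING: {name} degraded by {change:.0%} "
                  f"({old_avg:.0f} ms -> {current_avg:.0f} ms)")


if __name__ == "__main__":
    # e.g. python compare_runs.py previous_stats.csv current_stats.csv
    compare(sys.argv[1], sys.argv[2])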



Solution 1:[1]

Maybe locust-plugins' --check-avg-response-time fits your need? It logs a message and changes Locust's process exit code.

Used like this:

locust -f locustfile_that_imports_locust_plugins.py --check-avg-response-time 50

https://github.com/SvenskaSpel/locust-plugins/blob/9a4eb77ac4581871db951b454631dc6c49fa1c7a/examples/cmd_line_examples.sh#L6
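If you would rather not add a plugin, a similar check can also be done in the locustfile itself with Locust's quitting event hook, which is the documented way to set a custom exit code. A minimal sketch, where the /helloworld task and both thresholds are made up for this example:

import logging

from locust import HttpUser, task, events

WARN_THRESHOLD_MS = 1500   # degraded but still acceptable: warn only
FAIL_THRESHOLD_MS = 3000   # unacceptable: fail the run via the exit code


class HelloWorldUser(HttpUser):
    @task
    def hello(self):
        self.client.get("/helloworld")


@events.quitting.add_listener
def check_degradation(environment, **kwargs):
    # Look at the aggregated stats once the test is shutting down.
    avg = environment.stats.total.avg_response_time
    if avg > FAIL_THRESHOLD_MS:
        logging.error("Average response time %.0f ms exceeds %d ms", avg, FAIL_THRESHOLD_MS)
        environment.process_exit_code = 1
    elif avg > WARN_THRESHOLD_MS:
        logging.warning("Average response time %.0f ms exceeds %d ms (degradation only, run still passes)",
                        avg, WARN_THRESHOLD_MS)

Run it as usual (e.g. locust -f locustfile.py); the warning is only logged, while crossing the hard limit also changes the exit code, much like the plugin flag does.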

Solution 2:[2]

It really depends on how big your project is and how much data you need to evaluate.

We are using Keptn for this purpose (https://keptn.sh/).

If you deploy it and prepare SLI/SLO files, you can integrate Locust with Keptn. Once the load test is finished, an evaluation is triggered for the given data.

In our case, the Jenkins build is set to a Pass/Warn/Fail state depending on the evaluation outcome.

There are rules that can compare the data against the last X samples. In the end, you need to gather 100 points to get a green build; on degradation you score fewer points, which causes a failure.

(Screenshots in the original answer: score graph and latency graph.)

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Cyberwiz
Solution 2: Dominik Jeziorski