What is the proper practice for performance-rule testing?
I know that what we're doing is incorrect/strange practice.
We have an object that is constructed in many places in the app, and delays in its construction can severely impact our performance.
We want a gate to stop check-ins that degrade this construction's performance too badly...
So what we did was create a unit test which is basically the following:
```csharp
var stopwatch = Stopwatch.StartNew(); // StartNew() is a static factory method
var newMyObject = new MyObject();
stopwatch.Stop();
Assert.IsTrue(stopwatch.ElapsedMilliseconds < 100);
```
Or: Fail if construction takes longer than 100ms
This "works" in the sense that check-ins will not commit if they impact this performance too negatively... However it's inherently a bad unit test because it can fail intermittently... If, for example, our build-server happens to be slow for whatever reason.
In response to some of the answers: we explicitly want our gates to reject check-ins that impact this performance; we don't want to check logs or watch for trends in the data.
What is the correct way to meter performance in our check-in gate?
Solution 1:[1]
First I would say: can't you allow some of that logic to run lazily rather than executing all of it in the constructor/initialization? Or can you partition the object? A useful metric for this is LCOM4.
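As a rough sketch of the lazy-initialization idea (`MyObject` and `ExpensiveData` are hypothetical names standing in for the real class and its heavy state):

```csharp
using System;

public class ExpensiveData
{
    public ExpensiveData()
    {
        // Heavy setup that used to live in MyObject's constructor.
    }
}

public class MyObject
{
    // Lazy<T> defers the expensive work until the first access,
    // so constructing MyObject itself stays cheap.
    private readonly Lazy<ExpensiveData> _data =
        new Lazy<ExpensiveData>(() => new ExpensiveData());

    public ExpensiveData Data => _data.Value;
}
```

Note that `Lazy<T>` is thread-safe by default, which matters if the object is constructed from multiple threads.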
Secondly, can you cache those instances? In a previous project we had a similar situation, and we decided to cache the object for a few minutes. This introduced some other, smaller issues, but the performance of the app skyrocketed.
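A minimal sketch of that kind of time-boxed cache, assuming .NET's `System.Runtime.Caching` and an illustrative five-minute window and key name:

```csharp
using System;
using System.Runtime.Caching;

public static class MyObjectCache
{
    // Returns a cached instance if one is still fresh; otherwise builds
    // a new one and keeps it for five minutes (an assumed window).
    public static MyObject Get()
    {
        var cached = MemoryCache.Default.Get("MyObject") as MyObject;
        if (cached != null)
            return cached;

        var fresh = new MyObject();
        MemoryCache.Default.Set("MyObject", fresh,
            DateTimeOffset.UtcNow.AddMinutes(5));
        return fresh;
    }
}
```

Be aware that two threads racing past the `Get` check can both build an instance; whether that matters depends on how expensive the construction is and whether the object must be a singleton.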
And last, I do think it's a good approach, but I would take an average rather than just one sample (the OS might decide to run something else at that moment, pushing it past 100ms). Also, one issue with this approach is that if you upgrade your hardware and forget to update the threshold, you might add even more logic without realizing it.
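A sketch of the averaging version of the original test, assuming MSTest-style asserts and the same hypothetical `MyObject` (the 100 iterations and 100ms threshold are illustrative):

```csharp
using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ConstructionPerformanceTests
{
    [TestMethod]
    public void Construction_AverageUnder100Ms()
    {
        const int Iterations = 100;

        var stopwatch = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            var obj = new MyObject(); // the construction under test
        }
        stopwatch.Stop();

        double averageMs = stopwatch.ElapsedMilliseconds / (double)Iterations;
        Assert.IsTrue(averageMs < 100,
            $"Average construction time was {averageMs:F2}ms");
    }
}
```

Discarding the first iteration (JIT warm-up) would make the numbers steadier still.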
I think a better approach, though a bit more tricky to implement, is to store how long it takes to run N iterations, and fail the build if that value increases by more than X%. The benefit of this is that since you store the timings, you can generate a graph from them and see the trend.
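A hedged sketch of that baseline comparison; the file name, 10% threshold, and iteration count are all assumptions, and a real setup would append to a history file for graphing rather than keep a single value:

```csharp
using System.Diagnostics;
using System.Globalization;
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ConstructionRegressionTests
{
    private const int Iterations = 100;
    private const double MaxRegressionPercent = 10.0;
    private const string BaselineFile = "construction-baseline.txt";

    [TestMethod]
    public void Construction_NoRegressionOverBaseline()
    {
        var stopwatch = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            var obj = new MyObject();
        }
        stopwatch.Stop();
        long elapsedMs = stopwatch.ElapsedMilliseconds;

        // Compare against the previously stored baseline, if one exists.
        if (File.Exists(BaselineFile))
        {
            long baseline = long.Parse(
                File.ReadAllText(BaselineFile), CultureInfo.InvariantCulture);
            double change = (elapsedMs - baseline) * 100.0 / baseline;
            Assert.IsTrue(change <= MaxRegressionPercent,
                $"Construction time regressed {change:F1}% against the baseline");
        }

        // Record the latest measurement; in practice you would append to a
        // history file so the trend can be graphed over time.
        File.WriteAllText(BaselineFile,
            elapsedMs.ToString(CultureInfo.InvariantCulture));
    }
}
```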
Solution 2:[2]
I don't think that you should really do this in a way that blocks check-ins, because it is too much work to be done during the check-in process. Check-ins need to be fast because your developers can do nothing else whilst they run.
This unit test would have to compile and run whilst the developer sits and waits for it. As you pointed out, one iteration of the test is not good enough to produce consistent results. How many times would it need to be run to be reliable? 10? A run of 10 iterations would increase the check-in time by up to a second and still isn't reliable enough, in my opinion. If you increased that to 100 iterations you'd get a better result, but that adds 10 seconds to the check-in time.
Also, what happens if two developers check in code at the same time? Does the second one have to wait for the first test to complete before theirs starts, or would the tests be run simultaneously? The first scenario is bad because the second developer has to wait twice as long. The second scenario is bad because you'd be likely to fail both tests.
I think that a better option would be to run the unit test after the check-in has completed and, if it fails, have it notify somebody. You could have the test run after each check-in, but that still has the potential for two people to check in at the same time. I think it would be better to run the test every N minutes. That way you'd still be able to track a regression down fairly quickly.
You could make it block check-ins, but you'd have to make sure that it only runs when that object (or a dependency) changes, so that you don't slow down every commit. You'd also have to make sure that the test isn't run more than once at a time.
As to the specific test, I don't think that you can get away with anything other than running the test through a number of iterations to get a more accurate result. I wouldn't like to rely on anything less than a 5 to 10 second test (so 50 to 100 iterations).
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Augusto |
| Solution 2 | Steve Kaye |
