"Noise" in performance measurement
I have a large and complex Shiny app that I am currently analyzing for performance. I used profvis to profile the app and identify possible bottlenecks. Identifying the bottlenecks itself was successful, since the relative amount of time spent shows a clear pattern. What got me wondering is that, even in exactly identical scenarios, performance can vary greatly in absolute time: the same calculation can take 60 seconds on one run and 100 seconds on another run some time later. This makes it quite difficult for me to properly evaluate the code changes I try out to improve performance. On top of that, this noise itself turned out to be a performance problem of my app, which I want to solve.
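To make the run-to-run variance visible in numbers rather than single profvis runs, one option is to repeat the expensive step several times and look at the spread of the timings. A minimal sketch using the microbenchmark package, where `run_calculation()` is a hypothetical stand-in for the actual bottleneck:

```r
library(microbenchmark)

# Hypothetical placeholder for the expensive computation
# identified by profvis; replace with your own code.
run_calculation <- function() {
  Sys.sleep(0.01)
}

# Repeat the calculation and inspect the distribution of timings;
# a large gap between min and max indicates noisy performance.
res <- microbenchmark(run_calculation(), times = 20)
summary(res)
```

Comparing the median against the minimum and maximum gives a rough measure of how much noise there is, independent of any single profiling session.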
I have already eliminated, as far as possible, factors that could cause 'randomness' in performance inside and outside the app/code: random seeds, memory usage (calling gc() and rm(), no other programs running), different laptops, the internet connection, always using the same data and settings, etc.
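One common residual source of variance even after such cleanup is garbage collection being triggered mid-measurement by allocations from earlier runs. A sketch of a more controlled timing harness that forces a full collection before each run (again using a hypothetical `run_calculation()`):

```r
# Hypothetical stand-in for the real computation under test.
run_calculation <- function() Sys.sleep(0.01)

timings <- vapply(1:10, function(i) {
  gc(full = TRUE)  # clear pending garbage so GC pauses from earlier
                   # allocations are not attributed to this run
  unname(system.time(run_calculation())["elapsed"])
}, numeric(1))

cat("mean:", mean(timings), "sd:", sd(timings), "\n")
```

If the standard deviation stays large even with forced collections, the noise likely comes from somewhere other than GC (e.g. the OS scheduler, I/O, or reactive invalidation in Shiny).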
It's worth mentioning that I am at most an advanced beginner and may easily have overlooked something. Could highly modularized code with iterative function calls and nested functions be the source of the noise? My main question is: are there common sources of 'noisy' performance in R/Shiny that I should check for?
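Whether call nesting itself contributes measurable overhead can at least be tested directly. A hedged sketch comparing the same work done flat versus routed through several wrapper layers (all names illustrative):

```r
library(microbenchmark)

flat   <- function(x) x + 1
layer1 <- function(x) flat(x)
layer2 <- function(x) layer1(x)
layer3 <- function(x) layer2(x)

# Compare direct calls against the same work behind three
# extra layers of function calls.
microbenchmark(
  flat   = for (i in 1:1e4) flat(i),
  nested = for (i in 1:1e4) layer3(i),
  times = 50
)
```

Function-call overhead in R is real but typically small and, more importantly, fairly constant, so it would explain slowness rather than run-to-run noise.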
Apologies for the open-ended, rather unspecific question. I have already gone through several performance-related articles and guides on R/Shiny performance covering caching, writing faster and more stable functions, etc., but couldn't really find anything on the problem of noisy performance.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow