Monitoring system integrity by analyzing proof-of-work performance
I have code that spawns a number of processes. Each process generates pseudo-random sequences, hashes them, and checks whether the hash meets a criterion; when one passes, it saves the passing hash, the random seed used, and the time it took to find the passing sequence from that seed. My criterion is that the first 8 hex characters of the resulting SHA-256 hash are all the same. I recently saw some strange output: the durations were roughly the same for a number of results, so I checked them by re-running the same seeds. On re-running, the times were much shorter (under 1,000 seconds, versus roughly 5,000 seconds originally). This looks like a red flag for system integrity, but what to do about that is a separate question.
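For reference, here is a minimal sketch along the lines of my search loop (Python; the function names, the 32-byte candidate size, and the seed value are illustrative, not my actual code):

```python
import hashlib
import random
import time

MATCH_LEN = 8  # the criterion above; drop to e.g. 4 for a quick test run

def passes(digest_hex: str) -> bool:
    # Criterion: the first MATCH_LEN hex characters of the SHA-256
    # digest are all the same character.
    return len(set(digest_hex[:MATCH_LEN])) == 1

def search(seed: int):
    """Generate pseudo-random candidates from `seed` until one passes,
    returning (digest, seed, elapsed_seconds)."""
    rng = random.Random(seed)
    start = time.monotonic()
    while True:
        candidate = rng.getrandbits(256).to_bytes(32, "big")
        digest = hashlib.sha256(candidate).hexdigest()
        if passes(digest):
            return digest, seed, time.monotonic() - start

if __name__ == "__main__":
    print(search(seed=12345))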
I want to perform a Student's t-test on the distribution of the n most recent durations so that I can trigger a validation process that re-runs those seeds and checks whether their time to completion changes. What distribution should I test against, and what is a good n for how many samples to examine?
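To make the trigger concrete, here is a hedged sketch of the kind of check I have in mind, using SciPy's one-sample t-test (`scipy.stats.ttest_1samp`) against a historical baseline mean; `baseline_mean`, n = 30, and alpha = 0.01 are placeholders, not recommendations. One wrinkle I'm aware of: each hash attempt is an independent Bernoulli trial, so the number of attempts to success is geometric and the wall-clock duration is roughly exponential, meaning a t-test on the sample mean would be leaning on the CLT across the n samples.

```python
from scipy import stats

def should_revalidate(recent_durations, baseline_mean, alpha=0.01):
    # H0: the mean of the recent durations equals the historical mean.
    # Individual durations are roughly exponential (geometric number of
    # hash attempts), so this relies on the CLT over the n samples.
    result = stats.ttest_1samp(recent_durations, popmean=baseline_mean)
    return result.pvalue < alpha

# Example usage (hypothetical names): test the 30 most recent durations.
# if should_revalidate(durations[-30:], baseline_mean=historical_mean):
#     rerun_seeds()
```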