Initialise PyTorch layer with local random number generator

When writing larger programs that require determinism for random processes, it is generally considered good practice to create function-specific random number generators (RNGs) and pass them to the randomness-dependent functions, rather than setting a global seed and having those functions depend on it.

For example, when I have a function that generates a sample using NumPy, I pass it an RNG that I create at the beginning of the script:

# at the beginning of the script
import numpy as np

seed = 5465
rng = np.random.default_rng(seed)

# much later in the script
def generate_sample(rng, size):
    return rng.random(size)

generate_sample(rng, size=5)

I am trying to achieve the same when initialising a torch.nn.Linear layer, i.e. use a pre-defined RNG to reproducibly initialise the layer.

Is this (reasonably) possible, or am I forced to set the PyTorch global seed via torch.manual_seed() instead?
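
To make the goal concrete, this is roughly what I would like to be able to write. The reinit_linear helper below is just my own sketch (not an existing PyTorch function), and it assumes the in-place Tensor.uniform_ method accepts a generator keyword so that the default nn.Linear initialisation can be redone from a local torch.Generator:

import math
import torch

# local generator, analogous to np.random.default_rng(seed)
gen = torch.Generator()
gen.manual_seed(5465)

def reinit_linear(layer, generator):
    # hypothetical helper: redo the default nn.Linear initialisation
    # (weights and bias drawn from U(-1/sqrt(fan_in), 1/sqrt(fan_in)))
    # using a local generator instead of the global RNG state
    bound = 1.0 / math.sqrt(layer.in_features)
    with torch.no_grad():
        layer.weight.uniform_(-bound, bound, generator=generator)
        if layer.bias is not None:
            layer.bias.uniform_(-bound, bound, generator=generator)
    return layer

layer = reinit_linear(torch.nn.Linear(4, 3), gen)

This would give reproducible weights without touching the global RNG state, but it means re-implementing the default initialisation by hand, which is what I would like to avoid if there is a built-in way.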


