How can PyTorch use an implicit for loop instead of an explicit for loop?
The code is as follows:
import torch
import numpy as np
x = torch.zeros((128, 3, 32, 32))
y = np.arange(128)
for i in range(len(y)):
    x[i].uniform_(-y[i], y[i])
Can the for loop be replaced with something like array slicing? Thanks
Solution 1:[1]
You could take advantage of the following fact:
Given a random variable Z ~ Uniform(0,1) and real numbers a and b where a < b, if X = Z * (b - a) + a then X ~ Uniform(a,b). Particularly relevant is the case where a = -b which allows us to simplify to X = (2 * Z - 1) * b.
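As a quick sanity check on that fact, here is a minimal sketch (the value b = 5 is an arbitrary choice for illustration) that draws from Uniform(0, 1) and applies the transform:

```python
import torch

# Sketch: empirically verify that X = (2 * Z - 1) * b lies in [-b, b]
# and is centered at 0 when Z ~ Uniform(0, 1).
b = 5.0
z = torch.rand(100_000)      # Z ~ Uniform(0, 1)
x = (2 * z - 1) * b          # X should behave like Uniform(-b, b)

print(x.min().item(), x.max().item())  # close to -5 and 5
print(x.mean().item())                 # close to 0
```

With 100k samples the empirical min/max hug the endpoints and the mean sits near zero, as expected for Uniform(-b, b).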
Putting this into practice in PyTorch looks something like this:
z = torch.rand(128, 3, 32, 32)
y = torch.arange(128).reshape(-1, 1, 1, 1)
x = (2 * z - 1) * y
where torch.rand returns a new random tensor sampled from Uniform(0, 1). We also insert extra singleton dimensions into y using Tensor.reshape, giving it shape [128, 1, 1, 1], so that broadcasting works the way we want (see here if you're unfamiliar with broadcast semantics; NumPy and PyTorch use the same rules). Also note that we use torch.arange instead of np.arange: when performing mathematical operations on PyTorch tensors it's generally best to stay within the PyTorch framework, because PyTorch tensors don't always play nicely with NumPy arrays.
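To see that the vectorized version reproduces the original loop's behavior, a small check like the following can help (the exact values differ between the two approaches since the draws are random, but every x[i] must lie within [-i, i]):

```python
import torch

# Sketch: verify the broadcasted solution respects each sample's bounds.
torch.manual_seed(0)  # arbitrary seed, for reproducibility only
z = torch.rand(128, 3, 32, 32)
y = torch.arange(128).reshape(-1, 1, 1, 1)  # shape [128, 1, 1, 1] for broadcasting
x = (2 * z - 1) * y

# Each x[i] should lie in [-i, i], matching x[i].uniform_(-y[i], y[i]).
for i in range(128):
    assert x[i].min().item() >= -i and x[i].max().item() <= i
```

In particular x[0] is all zeros, since Uniform(0, 0) degenerates to the constant 0, just as uniform_(0, 0) would produce in the loop version.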
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Stack Overflow |
