How do I use multiprocessing to speed up training of a neural network?
I created a simple MLP neural network to build a model for the MNIST dataset, but I found it takes a while (~30 min) to loop through all 60k images using only one process. I want to use multiprocessing to speed up training by spreading the work across multiple processes.
```python
import numpy as np
import multiprocessing as mp

num_images = 60000
weights = np.random.rand(3)
x = np.array([0.01, 0.01, 0.01])
images = [i for i in range(num_images)]

def train(images):
    global weights
    for i in images:
        weights = weights + i * x
```
This is a deliberately simplified example so I could isolate the problem. What I want is to split the images (represented here by integers) into equal-sized smaller lists, have each process apply the train function to its list simultaneously, and still let every process modify the global variable weights.
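For the toy example above the update is just a sum, so one approach I can imagine is having each worker return its partial update and combining them in the parent. Here is a minimal sketch with multiprocessing.Pool (the chunking scheme, the process count, and the train_chunk function are my own illustration, not code from my real project):

```python
import numpy as np
import multiprocessing as mp

num_images = 60000
x = np.array([0.01, 0.01, 0.01])

def train_chunk(chunk):
    # Globals are not shared between processes, so instead of mutating
    # `weights` directly, each worker computes and returns its own
    # partial update.
    update = np.zeros(3)
    for i in chunk:
        update = update + i * x
    return update

if __name__ == "__main__":
    images = list(range(num_images))
    n_procs = 4
    # Split the images into n_procs roughly equal-sized chunks.
    chunks = [images[p::n_procs] for p in range(n_procs)]
    weights = np.random.rand(3)
    with mp.Pool(n_procs) as pool:
        # Sum the partial updates into the weights in the parent.
        for update in pool.map(train_chunk, chunks):
            weights = weights + update
    print(weights)
```

But this only works here because the update is additive and doesn't depend on the current weights; in a real MLP each step reads the weights, which is why I think I need the processes to actually share them.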
I can't use multiprocessing.Value() because it only accepts simple ctypes types like an int or a double, and I can't store a numpy array in it.
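The closest workaround I've found is multiprocessing.Array, which holds a ctypes buffer that numpy can view in place with np.frombuffer. A minimal sketch of that idea, assuming the chunking and process count from before (again, purely illustrative):

```python
import numpy as np
import multiprocessing as mp

x = np.array([0.01, 0.01, 0.01])

def train(shared, chunk):
    # Re-wrap the shared ctypes buffer as a numpy array; it points at
    # the same memory in every process, so updates are visible everywhere.
    weights = np.frombuffer(shared.get_obj())
    for i in chunk:
        with shared.get_lock():  # guard the read-modify-write step
            weights += i * x

if __name__ == "__main__":
    shared = mp.Array('d', 3)  # three C doubles, zero-initialised
    images = list(range(60000))
    n_procs = 4
    chunks = [images[p::n_procs] for p in range(n_procs)]
    procs = [mp.Process(target=train, args=(shared, c)) for c in chunks]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(np.frombuffer(shared.get_obj()))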