Is there any way to increase the size of a tensor shared between processes in PyTorch?

My current code looks like this:

import torch
import torch.multiprocessing as mp

def some_function(rank):
    # worker body omitted; each process works on the shared tensor t
    pass

t = torch.zeros([10, 10])
t.share_memory_()  # move the tensor's storage into shared memory

processes = []

for i in range(3):
    p = mp.Process(target=some_function, args=(i,))
    p.start()
    processes.append(p)

for p in processes:
    p.join()

I want to increase the size of t in process 0 and have process 1 and process 2 receive that update as well. But I found that if I simply assign a new tensor:

# in process 0
temp = torch.zeros([20,10])
t = temp 
t.share_memory_()

the size of t in process 1 and process 2 is still the old 10x10.
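My understanding (just my assumption about how the sharing works) is that in-place writes to the shared tensor do propagate, because all three processes point at the same shared storage; it is only rebinding the name t to a brand-new tensor inside process 0 that the other processes never see. Something like this is what I mean by an in-place write:

# in process 0: this write goes into the existing shared storage,
# so (as I understand it) process 1 and process 2 would see the change
t[0, 0] = 1.0

# whereas "t = torch.zeros([20, 10])" only rebinds the local name t in
# process 0; the other processes still hold the original 10x10 tensor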

I think this is some kind of memory/pointer issue. Is there any possible way to solve it?
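In case it clarifies what I'm after, here is a sketch of the only workaround I can think of: pre-allocate the tensor at some maximum size up front and share a separate counter for how many rows are actually in use (MAX_ROWS, worker and n_rows are just placeholder names I made up, and I have left out any synchronization around the counter):

import torch
import torch.multiprocessing as mp

MAX_ROWS = 100  # assumed upper bound on how large t could ever grow

t = torch.zeros([MAX_ROWS, 10])
t.share_memory_()

# shared scalar holding the number of rows currently in use
n_rows = torch.tensor([10], dtype=torch.long)
n_rows.share_memory_()

def worker(rank, t, n_rows):
    if rank == 0:
        # "grow" the tensor by bumping the shared row counter;
        # the underlying storage never actually changes size
        n_rows[0] = 20
    else:
        # other processes only look at the rows currently in use
        # (in reality this read would need to be synchronized)
        current = t[: int(n_rows[0])]

if __name__ == "__main__":
    processes = []
    for i in range(3):
        p = mp.Process(target=worker, args=(i, t, n_rows))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()

But this wastes memory and needs a hard upper bound, so I'm hoping there is a cleaner way to actually resize the shared storage.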


