Category "multiprocessing"

Python multiprocessing with macOS

I have a Mac (macOS 10.15.4, Python 3.8.2) and need to work with multiprocessing, but on my machine the procedures don't work. For example, I have copied a
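
A likely factor here: on macOS, Python 3.8 defaults to the "spawn" start method, so the entry point needs the `__main__` guard and the worker target must be an importable, module-level function. A minimal sketch of that structure (the `work` function is a hypothetical stand-in, not the asker's code):

```python
# Minimal sketch (not the asker's code): on macOS, Python 3.8 defaults to the
# "spawn" start method, so the entry point needs the __main__ guard and the
# worker target must be a module-level function.
import multiprocessing as mp

def work(x):
    # Hypothetical task: square the input.
    return x * x

if __name__ == "__main__":
    # Optionally revert to the pre-3.8 behaviour; "fork" is still available on
    # macOS but can be unsafe with some system libraries.
    # mp.set_start_method("fork")
    with mp.Pool(processes=4) as pool:
        print(pool.map(work, range(10)))
```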

Can 2 processes run on the same enclave in Intel SGX?

I know Intel SGX supports running multiple threads in one enclave, but I'm curious whether I can use fork to run 2 processes on one enclave.

Failing to load model using multiprocessing on Windows

This program works on Unix and I'm trying to transition it to Windows. It uses multiprocessing and I understand it's an issue with being forced to use spawning
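
Under "spawn", nothing loaded at import time in the parent is inherited by the workers, so a common pattern is to load the model once per worker via a Pool initializer. A hedged sketch, with `load_model` and `predict` as hypothetical stand-ins for the program's own functions:

```python
# Sketch of the per-worker initializer pattern under the "spawn" start method.
# load_model() is a hypothetical stand-in for the program's real model loading.
import multiprocessing as mp

MODEL = None  # populated separately inside each worker process

def load_model():
    # Placeholder for an expensive load (e.g. reading weights from disk).
    return {"weights": [0.1, 0.2, 0.3]}

def init_worker():
    # Runs once in every worker process, after it has been spawned.
    global MODEL
    MODEL = load_model()

def predict(x):
    # Uses the copy of the model that lives in this worker.
    return sum(MODEL["weights"]) * x

if __name__ == "__main__":
    with mp.Pool(processes=4, initializer=init_worker) as pool:
        print(pool.map(predict, range(5)))
```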

Python multiprocessing with TensorRT

I am trying to use a TensorRT engine for inference in a Python class that inherits from multiprocessing. The engine works in a standalone Python script on my sy
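
CUDA and TensorRT contexts generally do not survive being forked, so one common workaround is to use the "spawn" start method and create the engine inside each worker process. The sketch below only shows the multiprocessing structure; `load_engine` is a placeholder for the actual TensorRT deserialization, not a real API call:

```python
# Structure-only sketch: load_engine() and the "batch-*" strings stand in for
# the real TensorRT engine deserialization and input data.
import multiprocessing as mp

def load_engine():
    # In real code this would deserialize the engine and create an execution
    # context *inside the child process*, never in the parent.
    return object()

def worker(batch):
    engine = load_engine()            # one engine per process
    return f"processed {batch} with {engine!r}"

if __name__ == "__main__":
    # "spawn" avoids inheriting a half-initialized CUDA context from the parent.
    ctx = mp.get_context("spawn")
    with ctx.Pool(processes=2) as pool:
        for result in pool.map(worker, ["batch-0", "batch-1", "batch-2"]):
            print(result)
```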

scipy.fftpack.fft with multiprocessing, how to avoid performance losses?

I would like to use scipy.fftpack.fft (and rfft) inside a multiprocessing structure, but I have observed significant performance losses due to an apparent incompat
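
For comparison, here is a minimal sketch of mapping scipy.fftpack.fft over independent arrays with a Pool; note that each input array is pickled to its worker, which can dominate the runtime for large signals and is one plausible source of the observed losses:

```python
# Minimal sketch: one FFT per task, mapped over a Pool. Each signal is pickled
# to its worker, which can outweigh the FFT itself for large arrays.
import multiprocessing as mp
import numpy as np
from scipy import fftpack

def transform(signal):
    # rfft would be used the same way for purely real input.
    return fftpack.fft(signal)

if __name__ == "__main__":
    signals = [np.random.rand(2 ** 16) for _ in range(8)]
    with mp.Pool(processes=4) as pool:
        spectra = pool.map(transform, signals)
    print(len(spectra), spectra[0].shape)
```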

Split list automatically for multiprocessing

I am learning multiprocessing in Python and thinking about a problem. For a shared list (nums = mp.Manager().list), is there any way that it automatica
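
If the goal is simply to hand slices of a list to different workers, Pool.map already does the splitting; the chunksize argument only tunes how large the automatically created chunks are. A small sketch (not using a Manager list, which is usually only needed when workers must mutate shared state):

```python
# Sketch: Pool.map splits the input across workers on its own; chunksize only
# controls how big those automatically created chunks are.
import multiprocessing as mp

def square(n):
    return n * n

if __name__ == "__main__":
    nums = list(range(100))
    with mp.Pool(processes=4) as pool:
        results = pool.map(square, nums, chunksize=25)
    print(results[:10])
```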

Python 3.X Multiprocessing Boost Python Failed

I'm trying to use multiprocessing to map a Boost-wrapped function over multiple cores. This works fine in Python 2.7, but is failing in Python 3.8. I know the o
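
One frequent cause of a 2.7-to-3.8 breakage like this is that the newer default start method pickles the Pool target, and a Boost-wrapped callable often is not picklable. A hedged sketch of the usual workaround, wrapping the extension call in a module-level Python function (`my_boost_module.boosted_function` is hypothetical):

```python
# Sketch of the usual workaround: wrap the extension call in a plain,
# module-level Python function so the Pool can pickle the target.
# my_boost_module.boosted_function is hypothetical.
import multiprocessing as mp
# import my_boost_module            # hypothetical Boost.Python extension

def call_boosted(x):
    # return my_boost_module.boosted_function(x)
    return x + 1                     # stand-in so the sketch runs on its own

if __name__ == "__main__":
    # Alternatively, mp.set_start_method("fork") restores the 2.7-era behaviour
    # on POSIX systems, at the cost of the usual fork-safety caveats.
    with mp.Pool(processes=4) as pool:
        print(pool.map(call_boosted, range(8)))
```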

Multiprocessing, missing 1 required positional argument: 'response'

I haven't really understood what happened. I was executing this code; a moment ago it worked, and then it returned an error. EDITED: The code takes from Euronext

Multi-Threaded Python scraper does not execute functions

I am writing a multi-threaded Python scraper. I am facing an issue where my script quits after running for 0.39 seconds without any error. It seems that the pars
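
A script that exits after a fraction of a second usually never started or joined its worker threads, or accidentally called the target instead of passing it. A sketch of the expected thread lifecycle, with `parse` as a hypothetical stand-in for the scraper's function:

```python
# Sketch of the expected thread lifecycle. parse() is a hypothetical stand-in
# for the scraper's own function.
import threading

def parse(url):
    print(f"parsing {url}")

urls = ["https://example.com/a", "https://example.com/b"]

# Pass the callable plus its args; writing target=parse(u) would call parse
# immediately and hand the Thread a None target.
threads = [threading.Thread(target=parse, args=(u,)) for u in urls]

for t in threads:
    t.start()
for t in threads:
    t.join()    # wait for every worker before the script moves on
```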

joblib: Worker stopped caused by timeout or memory leak

I am only using the basic joblib functionality: Parallel(n_jobs=-1)(delayed(function)(arg) for arg in arglist). I am frequently getting the warning: UserWarn
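
When diagnosing this warning, a common first step is to rerun the same call with fewer workers and an explicit backend, to see whether the failure tracks memory pressure in the worker processes. A sketch of the same basic pattern with those knobs exposed (`function` and `arglist` are stand-ins for the asker's names):

```python
# Sketch: the same basic Parallel call with the worker count and backend made
# explicit, which is a common first step when chasing the "worker stopped"
# warning. function and arglist are stand-ins for the asker's names.
from joblib import Parallel, delayed

def function(arg):
    return arg * 2

if __name__ == "__main__":
    arglist = range(100)
    results = Parallel(n_jobs=4, backend="loky")(
        delayed(function)(arg) for arg in arglist
    )
    print(results[:5])
```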

How to store a function's output when using concurrent.futures.ProcessPoolExecutor

I am currently trying to store the output obtained in a function during multiprocessing by using concurrent.futures.ProcessPoolExecutor from concurrent.futures
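
Values returned by the worker function travel back to the parent through the futures (or through executor.map), so "storing" them usually means collecting the results into an ordinary container in the parent. A small sketch with a hypothetical `compute` function:

```python
# Sketch: results computed in the workers are returned to the parent through
# the futures, where they can be stored in any ordinary container.
from concurrent.futures import ProcessPoolExecutor, as_completed

def compute(x):
    # Hypothetical work whose output we want to keep.
    return x, x ** 2

if __name__ == "__main__":
    outputs = {}
    with ProcessPoolExecutor(max_workers=4) as executor:
        futures = [executor.submit(compute, x) for x in range(10)]
        for fut in as_completed(futures):
            key, value = fut.result()
            outputs[key] = value
    print(outputs)
```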

multiprocessing.Queue fails intermittently. Bug in Python?

Python's multiprocessing.Queue fails intermittently, and I don't know why. Is this a bug in Python or my script? Minimal failing script: import multiprocessing
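
Without seeing the full script it is only a guess, but one classic source of intermittent failures is joining a child process while it still has undelivered items buffered in the Queue. A sketch of the drain-before-join pattern:

```python
# Sketch of the drain-before-join pattern: a child that still has buffered
# items in a Queue will not exit, so the parent reads everything first.
import multiprocessing as mp

def producer(queue, n):
    for i in range(n):
        queue.put(i)
    queue.put(None)          # sentinel marking the end of the stream

if __name__ == "__main__":
    q = mp.Queue()
    p = mp.Process(target=producer, args=(q, 1000))
    p.start()

    received = []
    while True:
        item = q.get()
        if item is None:
            break
        received.append(item)

    p.join()                 # safe now: the queue's feeder thread has flushed
    print(len(received))
```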

How to wait for subprocesses in a bash script and stop all of them if one fails

How do I wait for subprocesses in a bash script and, if one of them returns exit code 1, stop all of the subprocesses? This is what I tried to do, but there are a som

How to read from h5py in multiprocessing without errors

I have code like: def get_df(path, key): with h5py.File(path) as hdf: df = pd.DataFrame(np.array(hdf[key])) return df def f(key): df = get_
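
HDF5 file handles generally do not survive fork, so the usual advice is to open the file inside the worker, as the excerpted get_df already does, and to make sure no handle is opened in the parent before the pool starts. A runnable sketch, assuming a file named data.h5 exists and contains the (hypothetical) dataset names listed below:

```python
# Sketch: open the HDF5 file inside the worker process rather than inheriting
# an open handle across fork. Assumes an existing "data.h5" whose datasets can
# be wrapped in DataFrames.
import multiprocessing as mp
import h5py
import numpy as np
import pandas as pd

PATH = "data.h5"

def get_df(path, key):
    # A fresh file handle per call, closed by the context manager.
    with h5py.File(path, "r") as hdf:
        return pd.DataFrame(np.array(hdf[key]))

def f(key):
    df = get_df(PATH, key)
    return key, len(df)

if __name__ == "__main__":
    keys = ["table_a", "table_b", "table_c"]   # hypothetical dataset names
    with mp.Pool(processes=3) as pool:
        print(pool.map(f, keys))
```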

Run simultaneous processes inside a Python class

I'm developing a game using pygame and I want to create a loading screen while the assets are loaded. The loading screen has animations, so the loading screen and
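
Since the loading screen and the asset loading have to run at the same time in the same interpreter, a background thread is usually a better fit than a separate process (assets loaded in another process would live in that process's memory). A sketch of the pattern, with the drawing reduced to a placeholder so it stays pygame-agnostic:

```python
# Sketch: load assets in a background thread while the main loop keeps
# animating the loading screen. draw_loading_frame() is a placeholder for the
# actual pygame drawing code.
import threading
import time

assets = {}
done = threading.Event()

def load_assets():
    # Stand-in for slow I/O: loading images, sounds, levels, ...
    time.sleep(2)
    assets["player"] = "player.png"
    done.set()

def draw_loading_frame(tick):
    print(f"loading frame {tick}")

loader = threading.Thread(target=load_assets, daemon=True)
loader.start()

tick = 0
while not done.is_set():
    draw_loading_frame(tick)        # in the real game: pygame draw + flip
    time.sleep(0.1)                 # in the real game: clock.tick(60)
    tick += 1

print("assets ready:", assets)
```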

A case for multiprocessing?

Say I have a function that gives me a lot of data coming from a device when called. I want to accumulate this data in a memory buffer. When the buffer reaches a
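
One way to frame this as a multiprocessing problem: the acquisition loop keeps filling a buffer and ships full chunks to a worker process over a Queue, so acquisition never blocks on processing. A sketch with the device read simulated by a hypothetical read_device function:

```python
# Sketch of the acquire/process split: the parent fills a buffer from a
# (simulated) device and ships full chunks to a worker process via a Queue.
import multiprocessing as mp
import random

CHUNK = 100

def read_device():
    # Hypothetical stand-in for the real acquisition call.
    return random.random()

def consumer(queue):
    while True:
        chunk = queue.get()
        if chunk is None:           # sentinel: acquisition is finished
            break
        print(f"processing {len(chunk)} samples, mean={sum(chunk)/len(chunk):.3f}")

if __name__ == "__main__":
    q = mp.Queue()
    worker = mp.Process(target=consumer, args=(q,))
    worker.start()

    buffer = []
    for _ in range(1000):           # acquisition loop
        buffer.append(read_device())
        if len(buffer) >= CHUNK:
            q.put(buffer)
            buffer = []

    q.put(None)
    worker.join()
```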

Is there a way to take advantage of multiple CPU cores with asyncio?

I've created a simple HTTP server with Python and asyncio. But I have read that asyncio-based servers can only take advantage of one CPU core. I am trying to f
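
The event loop itself stays on one core, but CPU-bound work can be pushed to a ProcessPoolExecutor via run_in_executor (another common option is running one server process per core). A sketch of the executor route, with cpu_bound standing in for heavy per-request work:

```python
# Sketch: keep the asyncio side single-threaded, but hand CPU-bound work to a
# process pool so it can use the other cores.
import asyncio
from concurrent.futures import ProcessPoolExecutor

def cpu_bound(n):
    # Stand-in for heavy per-request work.
    return sum(i * i for i in range(n))

async def handle_request(executor, n):
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(executor, cpu_bound, n)

async def main():
    with ProcessPoolExecutor() as executor:
        results = await asyncio.gather(
            *(handle_request(executor, 200_000) for _ in range(8))
        )
        print(results[:2])

if __name__ == "__main__":
    asyncio.run(main())
```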

Is there any way to increase the size during memory sharing between processes in PyTorch

My current code is like this: import torch import torch.multiprocessing as mp t = torch.zeros([10,10]) t.share_memory_() processes = [] for i in range(3):
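
A runnable sketch that completes the truncated snippet under assumptions: the tensor's storage is moved to shared memory once, each child writes into its own rows, and the parent sees the updates after join:

```python
# Sketch completing the truncated snippet under assumptions: each worker writes
# into its own rows of the shared tensor, and the parent sees the updates.
import torch
import torch.multiprocessing as mp

def fill(t, i):
    # Writes happen in place on the shared storage.
    t[i] = float(i + 1)

if __name__ == "__main__":
    t = torch.zeros([10, 10])
    t.share_memory_()                       # move the storage to shared memory

    processes = []
    for i in range(3):
        p = mp.Process(target=fill, args=(t, i))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()

    print(t[:3])                            # rows filled by the children
```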

Occasional deadlock in multiprocessing.Pool

I have N independent tasks that are executed in a multiprocessing.Pool of size os.cpu_count() (8 in my case), with maxtasksperchild=1 (i.e. a fresh worker proce
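
For reference, a minimal sketch of the setup described (a Pool sized with os.cpu_count() and maxtasksperchild=1); when hunting a hang like this, consuming results with imap_unordered makes it easier to see which task never comes back:

```python
# Sketch of the setup described: a Pool sized to the machine with
# maxtasksperchild=1, iterating results as they complete so a stuck task is
# easier to spot.
import multiprocessing as mp
import os

def task(i):
    # Hypothetical independent unit of work.
    return i, i * i

if __name__ == "__main__":
    with mp.Pool(processes=os.cpu_count(), maxtasksperchild=1) as pool:
        for i, value in pool.imap_unordered(task, range(32)):
            print(f"task {i} -> {value}")
```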