Popen in multiprocessing.Pool is extremely slow

I'm trying to run multiple ngspice simulations in parallel, so I have a function `_runNgspice` that executes ngspice:
```python
import os
import pandas as pd
from subprocess import PIPE, Popen
from multiprocessing import Pool

def _runNgspice(param):
    runDir = param['runDir']
    spiceFile = param['spiceFile']
    logfile = os.path.join(runDir, 'run.log')
    # Run ngspice in batch mode and capture its stdout into run.log
    with open(logfile, 'wb') as nglog:
        ngcommand = ['ngspice', '-b', f'{spiceFile}']
        with Popen(ngcommand, shell=False, cwd=runDir, stdout=PIPE) as proc:
            nglog.write(proc.stdout.read())
    # ngspice writes its output data to result.csv in the run directory
    result = pd.read_csv(os.path.join(runDir, 'result.csv'))
    return result
```
To run multiple instances of `_runNgspice` in parallel, I do the following:

```python
p = Pool(processes=processes)
with p as pool:
    result = pool.map(_runNgspice, param)
```
This code works very well on my laptop (CPU: i7-1165G7, Linux kernel 5.15, Python 3.9.7). However, on my desktop system (CPU: i7-12700, Linux kernel 5.17, Python 3.9.7) it is extremely slow. What I find interesting: if I run `_runNgspice` without `pool.map`, all is fine. As soon as it is executed via `pool.map`, it gets extremely slow. Also interesting: if I execute `time python3 investigate.py`, I get a result like this:

```
real 0m21,267s
user 0m1,271s
sys 0m2,931s
```
There is almost no user time. Any ideas what could be the cause for the issue I'm observing?
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow