Elementwise multiplication of several arrays in Python NumPy
While coding some quantum mechanics routines, I have discovered a curious behavior of Python's NumPy: when I use NumPy's multiply with more than two arrays, I get faulty results. In the code below, I have to write:
f = np.multiply(rowH, colH)
A[row][col] = np.sum(np.multiply(f, w))
which produces the correct result. However, my initial formulation was this:
A[row][col] = np.sum(np.multiply(rowH, colH, w))
which does not produce an error message but gives the wrong result. Where is the fault in my thinking that I could pass three arrays to NumPy's multiply routine?
Here is the full code:
from numpy.polynomial.hermite import Hermite, hermgauss
import numpy as np
import matplotlib.pyplot as plt

dim = 3
x, w = hermgauss(dim)
A = np.zeros((dim, dim))

# build matrix
for row in range(0, dim):
    rowH = Hermite.basis(row)(x)
    for col in range(0, dim):
        colH = Hermite.basis(col)(x)
        # Gaussian quadrature in vectorized form
        f = np.multiply(rowH, colH)
        A[row][col] = np.sum(np.multiply(f, w))

print(A)
Note: this code only runs with NumPy 1.7.0 and higher!
Solution 1:[1]
Your fault is in not reading the documentation:
numpy.multiply(x1, x2[, out])
multiply takes exactly two input arrays. The optional third argument is an output array which can be used to store the result. (If it isn't provided, a new array is created and returned.) When you passed three arrays, the third array was overwritten with the product of the first two.
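To make this concrete, here is a minimal sketch (my own illustration, not part of the original answer) showing that a third positional array is interpreted as out and silently overwritten:
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
w = np.array([0.1, 0.2, 0.3])   # intended as a third factor, but...

result = np.multiply(a, b, w)   # w is treated as the out argument
print(result)        # [ 4. 10. 18.]  -> just a * b, the weights are ignored
print(w)             # [ 4. 10. 18.]  -> w has been overwritten in place
print(result is w)   # True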
Solution 2:[2]
For anyone stumbling upon this, the best way to apply an element-wise multiplication to n np.ndarrays of shape (d,) is to first np.vstack them and apply np.prod along the first axis:
>>> import numpy as np
>>>
>>> arrays = [
... np.array([1, 2, 3]),
... np.array([5, 8, 2]),
... np.array([9, 2, 0]),
... ]
>>>
>>> print(np.prod(np.vstack(arrays), axis=0))
[45 32 0]
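As a side note (my own addition, not from the original answer): np.prod accepts any array_like, so the explicit np.vstack can be skipped, and the same idiom fits the question's quadrature step:
import numpy as np

arrays = [
    np.array([1, 2, 3]),
    np.array([5, 8, 2]),
    np.array([9, 2, 0]),
]

# np.prod converts the list into a (3, 3) array itself and reduces along
# axis 0, giving the same result as the vstack version above.
print(np.prod(arrays, axis=0))

# Applied to the question's inner loop (rowH, colH, w as defined there),
# the accumulation line would read something like:
# A[row][col] = np.sum(np.prod([rowH, colH, w], axis=0))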
Solution 3:[3]
Yes! Simply use the * operator on NumPy arrays:
import numpy as np

a = np.array([2, 9, 4])
b = np.array([3, 4, 5])
c = np.array([10, 5, 8])

d = a * b * c
print(d)
This produces:
[ 60 180 160]
Solution 4:[4]
I came across this question because I was interested in knowing the fastest way to multiply several arrays together. I ended up writing a benchmark, and I'm surprised by what I found.
I tested 3 methods:
- Using pure Python syntax: a * b * c * d
- Using np.multiply.reduce
- Using np.stack followed by np.prod(..., axis=0)
I tested these methods with multiple numbers of arrays and array sizes. I was very surprised to find that method 1 tends to be the best.
In the resulting plot (not reproduced here), the blue, orange, and green lines correspond to methods 1, 2, and 3, respectively, and the line style indicates how many arrays were multiplied together.
The results are surprisingly consistent. Even when you have 8 arrays, it seems faster to just use a * b * c * d * e * f * g * h. I'm not entirely sure why this is. Perhaps Python sees this expression and does a divide-and-conquer combination style, whereas reduce is completely linear?
Code for the benchmark is here:
https://github.com/Erotemic/misc/blob/main/tests/python/bench_np_reduce_vs_repeat.py
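For a quick, self-contained sanity check, here is a rough timing sketch of my own (not the linked benchmark; it assumes NumPy >= 1.17 for default_rng, and the numbers will vary with machine, array size, and NumPy version):
import timeit
import numpy as np

rng = np.random.default_rng(0)
arrays = [rng.random(1_000_000) for _ in range(4)]
a, b, c, d = arrays

candidates = {
    "operator (a*b*c*d)": lambda: a * b * c * d,
    "np.multiply.reduce": lambda: np.multiply.reduce(arrays),
    "np.stack + np.prod": lambda: np.prod(np.stack(arrays), axis=0),
}

# Both reduce-style methods build a (4, 1_000_000) array from the list
# and collapse it along axis 0; the operator chain multiplies pairwise.
for name, fn in candidates.items():
    seconds = timeit.timeit(fn, number=20)
    print(f"{name:20s} {seconds:.3f} s / 20 runs")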
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | BrenBarn |
| Solution 2 | valentin |
| Solution 3 | Sergio Morales |
| Solution 4 | Erotemic |

