How to multiply two lists of arrays according to their index position?

I defined a function named `A(Q)`, whose output is 5 arrays of shape 4 by 1.

OUTPUT of function A(Q):

A =[[[ 0.        ]
     [ 0.        ]
     [ 0.19515612]
     [ 0.36477665]]

 [[ 0.19515612]
  [ 0.36477665]
  [ 0.244737  ]
  [ 0.42873321]]

 [[ 0.244737  ]
  [ 0.42873321]
  [ 0.16864666]
  [ 0.08636661]]

 [[ 0.16864666]
  [ 0.08636661]
  [ 0.05376605]
  [-0.57201897]]

 [[ 0.05376605]
  [-0.57201897]
  [-0.00935055]
  [-1.24923862]]]

Now I call this function `A(Q)` inside a loop to multiply it with another value `B`.

`B`'s output is 5 arrays of shape (2 by 4).

Please ignore the rest of the code written below; it is only there to build the arrays of `B`.

The lines written below are the only ones of concern, I think, mainly their indentation.

B = matrix(element_vector, x_axis, y_axis)

C = B.dot(A(Q))

I want the values of `C` (which should be 5 arrays of shape (2 by 1)).

I just want the five arrays of `A` to multiply with the five arrays of `B`, each pair giving a 2 by 1 array.
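To show the shapes I mean, here is a small sketch (the data is just placeholder ones, not my real values):

```python
import numpy as np

# Placeholder stand-ins for the real A(Q) and B: five stacked matrices each.
A_stack = np.ones((5, 4, 1))   # five 4-by-1 arrays
B_stack = np.ones((5, 2, 4))   # five 2-by-4 arrays

# Desired: multiply B_stack[i] with A_stack[i] for each i,
# giving five 2-by-1 arrays.
C_stack = B_stack @ A_stack
print(C_stack.shape)  # (5, 2, 1)
```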

The two arrays are not multiplying the way I want. Moreover, since they are multiplied inside a loop, the indentation is causing issues.

My code is too long, but here is the relevant part of it:

def get_values(properties, X):
    x_axis   = properties['x_axis']
    y_axis   = properties['y_axis']
    elements = properties['elements']
    E        = properties['stiffnesses']

    # find the stresses in each member
    stresses = []

    for element in elements:
        fromPoint, toPoint, dofs = points(element, properties)
        element_vector = toPoint - fromPoint
        B = matrix(element_vector, x_axis, y_axis)
        C = B.dot(A(Q))
        strain = (C[1] - C[0]) / norm(element_vector)
        stress = E[element] * strain
        stresses.append(stress)

    return stresses


OUTPUT OF B matrix:


B = ([[ 0.90906253, -0.41665972,  0,           0         ]
      [ 0,           0,           0.90906253, -0.41665972]],

     [[ 0.93631071, -0.35117269,  0,           0         ]
      [ 0,           0,           0.93631071, -0.35117269]],

     [[ 0.9600172,  -0.27994102,  0,           0         ]
      [ 0,           0,           0.9600172,  -0.27994102]],

     [[ 0.97894783, -0.20411062,  0,           0         ]
      [ 0,           0,           0.97894783, -0.20411062]],

     [[ 0.99228398, -0.12398588,  0,           0         ]
      [ 0,           0,           0.99228398, -0.12398588]])

Thank you for the support!

[Desired operation][1]


[issues facing][2]


  [1]: https://i.stack.imgur.com/7mBtv.png
  [2]: https://i.stack.imgur.com/rU7j7.jpg


Solution 1:

As suggested by SimonN, using numpy makes matrix operations quite simple. I cannot deduce what your matrix function does, i.e. the function that returns B, but it should be simple enough to just incorporate the numpy notation into that function.

When defining a matrix in numpy, you use the np.array() command. You can use square brackets inside the round brackets to create any sort of matrix. Your dot product is simply calculated by using np.dot().

Your question is somewhat vague with regard to what you expect from your dot product. You can use indexing, starting from zero, to select the part of the matrix you want to calculate the dot product for, as shown in the code below. If the example is not entirely correct, and you cannot fix it by changing the indices, elaborate on what you want in the comment section below.

import numpy as np
# Define A matrix
A = np.array(
[[[ 0.        ],
  [ 0.        ],
  [ 0.19515612],
  [ 0.36477665]],
 [[ 0.19515612],
  [ 0.36477665],
  [ 0.244737  ],
  [ 0.42873321]],
 [[ 0.244737  ],
  [ 0.42873321],
  [ 0.16864666],
  [ 0.08636661]],
 [[ 0.16864666],
  [ 0.08636661],
  [ 0.05376605],
  [-0.57201897]],
 [[ 0.05376605],
  [-0.57201897],
  [-0.00935055],
  [-1.24923862]]])

#Define B matrix
B = np.array([
         [[ 0.90906253, -0.41665972,          0,           0],
          [          0,           0, 0.90906253, -0.41665972]],
         [[ 0.93631071, -0.35117269,          0,          0],
          [          0,           0, 0.93631071, -0.35117269]],
         [[ 0.9600172,  -0.27994102,         0,            0],
          [         0,            0, 0.9600172,  -0.27994102]],
         [[ 0.97894783, -0.20411062,          0,           0],
          [          0,           0, 0.97894783, -0.20411062]],
         [[ 0.99228398, -0.12398588,          0,           0],
          [          0,           0, 0.99228398, -0.12398588]]
        ])

# Dot product of the individual stacked matrices
C = []
for i in range(B.shape[0]):  # number of stacked matrices in B (here 5)
    C.append(np.dot(B[i], A[i]))
C
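As an aside, the loop can be avoided entirely: for stacks of matrices, `np.matmul` (the `@` operator) multiplies the stacks pairwise by their first index. A self-contained sketch (random data here stands in for the concrete A and B above, which have the same shapes):

```python
import numpy as np

# Stand-ins with the same shapes as the A and B stacks above.
rng = np.random.default_rng(0)
A = rng.random((5, 4, 1))
B = rng.random((5, 2, 4))

# One call replaces the whole loop: C[i] = B[i] @ A[i]
C = np.matmul(B, A)
print(C.shape)  # (5, 2, 1)

# Equivalent with einsum, which makes the index pairing explicit:
C2 = np.einsum('ijk,ikl->ijl', B, A)
assert np.allclose(C, C2)
```

`np.dot` on two 3-D arrays does something different (it sums over the last axis of the first and the second-to-last of the second, producing a 4-D result), which is why the pairwise multiplication needs either the loop or `matmul`/`einsum`.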

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
