Matrix inversion using the Neumann series gives a strange loss curve

According to (Stewart, 1998), the inverse of an invertible matrix A can be approximated by the Neumann series A^{-1} = \sum_{n=0}^{\infty} (I - A)^{n}.
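
For reference, here is a minimal sketch of that partial sum in NumPy. The matrix B and the number of terms are made-up illustrations; the series is only guaranteed to converge when the norm of I - B is below 1.

import numpy as np

# Hypothetical example matrix, chosen so that the norm of (I - B) is below 1,
# which is when the Neumann series is guaranteed to converge.
B = np.array([[1.0, 0.1],
              [0.2, 0.9]])

I = np.eye(len(B))
approx = np.zeros_like(B)
for n in range(50):                            # 50 terms is an arbitrary cutoff
    approx += np.linalg.matrix_power(I - B, n)

print(np.allclose(approx, np.linalg.inv(B)))   # expected: True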

I tried implementing an algorithm to approximate a simple matrix's inverse, but the loss function shows strange results. Please look at the code below. More information about the Neumann series can be found here and here.

Here is my code:

import numpy as np
import matplotlib.pyplot as plt

A = np.array([[1, 0, 2],
              [3, 1, -2],
              [-5, -1, 9]])

class Neumann_inversion():
    def __init__(self, A, rank):
        self.A = A
        self.rank = rank                      # number of series terms to sum
        self.eye = np.eye(len(A))
        self.loss = []                        # ||A^-1 - A_hat|| after each term
        self.A_hat = np.zeros_like(A, dtype=float)  # running partial sum

    def approximate(self):
        L = self.eye - self.A                 # series terms are (I - A)^n
        A_inv = np.linalg.inv(self.A)         # exact inverse, used for the loss
        for n in range(self.rank):
            self.A_hat += np.linalg.matrix_power(L, n)
            self.loss.append(np.linalg.norm(A_inv - self.A_hat))

        plt.plot(self.loss)
        plt.ylabel('Loss')
        plt.xlabel('rank')
        plt.show()


Matrix = Neumann_inversion(A, 200)
Matrix.approximate()

