Closed-form solution to a single-layer perceptron
A single-layer perceptron is easy to convert to the form:
A @ x = b
Where:
A is a matrix of shape (m, n),
x is a vector of shape (n,),
and b is a vector of shape (m,).
(Apologies if the shapes are transposed... in ML the first dimension is usually the batch/row axis because of row-major storage, while in standard matrix notation the first index is the row.)
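To make the shape convention concrete, here is a minimal NumPy sanity check (the dimensions m and n are arbitrary illustrative values, not from the question):

```python
import numpy as np

# Shape check for A @ x = b:
# A has shape (m, n), so x must have shape (n,) and b comes out as (m,).
m, n = 3, 4
A = np.ones((m, n))
x = np.ones(n)
b = A @ x
assert b.shape == (m,)
```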
Can I use the Moore-Penrose pseudoinverse to compute the OLS best-fit approximation of A?
I suspect this is trivial high school linear algebra.
Solution 1:[1]
Yea, it was high school linear algebra.
A @ x = b
A @ x @ x^+ = b @ x^+
A = b @ x^+
Note that x is not square, so it has no true inverse; x^+ is the Moore-Penrose pseudoinverse. Since x @ x^+ equals the identity only when x has full row rank, A = b @ x^+ is in general the minimum-norm least-squares fit rather than an exact solution.
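As a sketch of the idea in NumPy (the batch matrices X and B, with samples stacked as columns, are my own construction for illustration and are not from the answer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: stack k input samples as columns of X (n, k)
# and the matching outputs as columns of B (m, k), so A @ X = B.
m, n, k = 3, 4, 10
A_true = rng.normal(size=(m, n))
X = rng.normal(size=(n, k))
B = A_true @ X

# Least-squares recovery of A via the Moore-Penrose pseudoinverse.
A_hat = B @ np.linalg.pinv(X)

# Exact here because a random Gaussian X with k > n has full row rank,
# so X @ pinv(X) is the n x n identity.
assert np.allclose(A_hat, A_true)
```

With fewer samples than inputs (k < n), X loses full row rank and A_hat becomes the minimum-norm least-squares fit instead of an exact recovery.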
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Yaoshiang |
