Softmax activation function using the math library
I am trying to develop a softmax function in Python for my backpropagation and gradient descent program. I call the softmax function on the outputs of the output layer (2 outputs), which come as a vector like [0.844521, 0.147048]. My current softmax implementation looks like this:
import math

vector = [0.844521, 0.147048]

def soft_max(x):
    e = math.exp(x)
    return e / e.sum()

print(soft_max(vector))
However, when I run it I get the following error:
TypeError: must be real number, not list
Note: I only want to use the math library and no others
Solution 1:[1]
The function math.exp only works on scalars, so you cannot apply it to a whole list. If you only want to use math, then you need to apply it element-wise:
import math

def soft_max(x):
    exponents = []
    for element in x:
        exponents.append(math.exp(element))
    summ = sum(exponents)
    for i in range(len(exponents)):
        exponents[i] = exponents[i] / summ
    return exponents

if __name__ == "__main__":
    arr = [0.844521, 0.147048]
    output = soft_max(arr)
    print(output)
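As an aside, the same math-only approach can be written more compactly with list comprehensions. It is also common practice to subtract the maximum input before exponentiating so that math.exp does not overflow for large inputs; this is a minimal sketch of that variant, not part of the original answer:

import math

def soft_max(x):
    # Subtracting the maximum shifts the inputs without changing the result
    # (softmax is shift-invariant) and keeps math.exp from overflowing.
    m = max(x)
    exponents = [math.exp(element - m) for element in x]
    summ = sum(exponents)
    return [e / summ for e in exponents]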
However, I still want to emphasise that using NumPy would make this much simpler:
import numpy as np

def soft_max(x):
    e = np.exp(x)
    return e / np.sum(e)

if __name__ == "__main__":
    arr = [0.844521, 0.147048]
    output = soft_max(arr)
    print(output)
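For the vector from the question, both versions should print approximately [0.6676, 0.3324]; the two values sum to 1, as expected for a softmax output.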
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | desertnaut |
