What should the input be for sigma in scipy.optimize.curve_fit with asymmetric uncertainties?
I read the documentation of scipy.optimize.curve_fit, but sigma is still unclear to me.
I understand what sigma is used for here, but I do not understand how I should actually use it with asymmetric uncertainties.
Documentation: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html#scipy.optimize.curve_fit
Detailed description:
The documentation says:
> If we define residuals as `r = ydata - f(xdata, *popt)`, then the interpretation of sigma depends on its number of dimensions:
>
> A 1-D sigma should contain values of standard deviations of errors in ydata. In this case, the optimized function is `chisq = sum((r / sigma) ** 2)`.
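For reference, here is how I currently use a 1-D sigma with symmetric errors (the linear model and the data are made up just for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical model for illustration
def f(x, a, b):
    return a * x + b

rng = np.random.default_rng(0)
xdata = np.linspace(0, 10, 20)
yerr = np.full_like(xdata, 0.3)            # symmetric 1-sigma error per point
ydata = f(xdata, 2.0, 1.0) + rng.normal(0, yerr)

# sigma[i] is the standard deviation of the error in ydata[i];
# absolute_sigma=True treats them as absolute, not relative, errors
popt, pcov = curve_fit(f, xdata, ydata, sigma=yerr, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))              # 1-sigma parameter uncertainties
```

This works fine as long as every `yerr[i]` is a single symmetric standard deviation.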
The definition of the residuals is fine; I will use it too. But I am confused by the 1-D sigma.
What I understood is (not sure if it is correct):
A 1-D sigma is an M-length sequence. Since ydata is a length-M array, the value in the first position of sigma (sigma[0]) is the standard deviation of the error of the value in the first position of ydata (ydata[0]), and so on for each index.
Now the question is: what should I use for sigma[0] if I have asymmetric errors?
I mean, if ydata[0] has asymmetric errors (e.g. +0.2 and -0.1), should I calculate a single standard deviation from 0.2 and 0.1?
Did I understand correctly that the asymmetric errors I measured directly should not be passed as sigma here?
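For concreteness, this is what I mean by asymmetric errors, and one naive option I considered: symmetrising them by averaging the upper and lower sides (the arrays below are made-up example values, and I am not sure this averaging is statistically justified):

```python
import numpy as np

# Hypothetical asymmetric errors for three data points
yerr_plus = np.array([0.20, 0.15, 0.30])   # upper error (+)
yerr_minus = np.array([0.10, 0.10, 0.25])  # lower error (-)

# Naive symmetrisation: average the two sides to get one sigma per point
sigma = (yerr_plus + yerr_minus) / 2
```

Is something like this what curve_fit expects, or is there a proper way to handle the asymmetry?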
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
