NumPy std (Standard Deviation) Function Weird Behavior
Solution 1:
The NumPy documentation for std states:

The standard deviation is the square root of the average of the squared deviations from the mean, i.e.,

std = sqrt(mean(abs(x - x.mean())**2))

The average squared deviation is normally calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of the infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables. The standard deviation computed in this function is the square root of the estimated variance, so even with ddof=1, it will not be an unbiased estimate of the standard deviation per se.

Note that, for complex numbers, std takes the absolute value before squaring, so that the result is always real and nonnegative.

For floating-point input, the std is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for float32 (see example below). Specifying a higher-accuracy accumulator using the dtype keyword can alleviate this issue.
a = np.zeros((2, 512*512), dtype=np.float32)
a[0, :] = 1.0
a[1, :] = 0.1
np.std(a)
>>> 0.45000005
but for float64:
a = np.zeros((2, 512*512), dtype=np.float64)
a[0, :] = 1.0
a[1, :] = 0.1
np.std(a)
>>> 0.45
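As the quoted documentation suggests, you can also keep the data in float32 and ask np.std for a higher-precision accumulator via the dtype keyword, or cast the array up front. A minimal sketch using the same test array as above; the exact printed values may vary slightly with NumPy version and platform:

import numpy as np

# Same data as above: half the entries are 1.0, half are 0.1, stored as float32.
a = np.zeros((2, 512*512), dtype=np.float32)
a[0, :] = 1.0
a[1, :] = 0.1

print(np.std(a))                     # float32 accumulation: roughly 0.45000005
print(np.std(a, dtype=np.float64))   # float64 accumulator: much closer to 0.45
print(np.std(a.astype(np.float64)))  # casting the data first also works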
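To make the N versus N - ddof divisor mentioned in the documentation concrete, here is a small hand check against np.std; the array values are purely illustrative and are not from the original question:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
n = x.size

# ddof=0 (the default): divide the summed squared deviations by N.
print(np.std(x))
print(np.sqrt(((x - x.mean()) ** 2).sum() / n))        # same value, by hand

# ddof=1: divide by N - 1 instead (the unbiased-variance convention).
print(np.std(x, ddof=1))
print(np.sqrt(((x - x.mean()) ** 2).sum() / (n - 1)))  # same value, by hand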
Solution 2:
I tried it and got the same results. According to this NumPy issue, it is a bug, and it seems to happen when you use small numbers: https://github.com/numpy/numpy/issues/8207