Friday, December 5, 2014
Least Squares Estimates of the Mean and Standard Deviation and a Sample Calculation
Determining the mean and standard deviation of a set of measurements exactly is very difficult. One has to make an additional assumption in order to get an estimate of the mean, and that adds a little more error on top of the measurement errors. The assumption used in least squares is that the variance V, the sum of the squares of the errors, is a minimum. This gives the average as the best estimate of μ. Once we have μ we can estimate the errors, which in turn give the variance and the standard deviation, σ.
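The step from the least-squares assumption to the average can be made explicit. A short sketch of the derivation, writing V as the sum of the squared deviations of the n measurements x_i from the estimate μ:

```latex
V = \sum_{i=1}^{n} (x_i - \mu)^2,
\qquad
\frac{dV}{d\mu} = -2\sum_{i=1}^{n} (x_i - \mu) = 0
\quad\Rightarrow\quad
\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} x_i .
```

With μ̂ in hand, the estimated variance is V/n evaluated at μ̂, and σ̂ is its square root.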
To test the formulas used in statistics I generated 2000 sets of 20 random numbers; the procedure gives the mean, standard deviation, and z-values for each set. We also find that the root mean square of the z-values for each set of numbers is exactly 1. Using n − 1 in the denominator of the standard deviation formula causes this to deviate slightly. Setting the rms z-value equal to 1 is thus another possible starting point for the determination of μ and σ.
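The simulation can be sketched in a few lines of NumPy. The seed and the chosen "true" values μ = 10, σ = 2 are my own illustrative assumptions, not the post's:

```python
import numpy as np

rng = np.random.default_rng(0)               # arbitrary seed for reproducibility
mu, sigma = 10.0, 2.0                        # assumed "true" values for illustration
x = rng.normal(mu, sigma, size=(2000, 20))   # 2000 sets of 20 random numbers

means = x.mean(axis=1)                       # mean of each set
stds = x.std(axis=1, ddof=0)                 # std with n in the denominator

# z-values computed with each set's own mean and standard deviation
z = (x - means[:, None]) / stds[:, None]
rms_z = np.sqrt((z ** 2).mean(axis=1))       # exactly 1 for every set when ddof=0
```

With `ddof=1` (the n − 1 denominator) the rms z-value comes out as √((n − 1)/n) ≈ 0.975 instead of 1, which is the slight deviation mentioned above.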
Each set of numbers has its own mean and standard deviation, and there is a little variation among the results for the 2000 data sets, but the overall averages are close to the chosen values of μ and σ. The rms variation in the mean is approximately equal to the standard deviation of x divided by the square root of n, the number of values in each data set.
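This σ/√n behaviour is easy to check on the same kind of simulation (again with my assumed values μ = 10, σ = 2, and sets of n = 20):

```python
import numpy as np

rng = np.random.default_rng(1)               # arbitrary seed
mu, sigma, n = 10.0, 2.0, 20                 # assumed values for illustration
x = rng.normal(mu, sigma, size=(2000, n))

means = x.mean(axis=1)
print(means.mean())    # close to mu
print(means.std())     # close to sigma / sqrt(n) = 2 / sqrt(20) ≈ 0.447
```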
Using the theory of errors from the last couple of blog posts we can calculate the first two moments and the standard deviation for the data sets. The standard deviation of the σ estimates is very close to the standard deviation of the μ estimates divided by √2, as predicted.
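The √2 relation can be checked the same way; for normal data the spread of the sample σ is approximately σ/√(2n), while the spread of the sample mean is σ/√n, so their ratio is 1/√2. A sketch, again with assumed values μ = 10, σ = 2, n = 20:

```python
import numpy as np

rng = np.random.default_rng(2)               # arbitrary seed
mu, sigma, n = 10.0, 2.0, 20                 # assumed values for illustration
x = rng.normal(mu, sigma, size=(2000, n))

sd_of_means = x.mean(axis=1).std()
sd_of_stds = x.std(axis=1, ddof=0).std()
print(sd_of_stds, sd_of_means / np.sqrt(2))  # the two come out close
```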
Here is a comparison of the 2000 σ standard deviations with the two estimates of σ.