Saturday, May 18, 2019

A Comparison of Linear Least Squares With Transverse Least Squares


  The most common method for fitting data is linear least squares, which minimizes the sum of the squares of the vertical errors, or deviations, from a straight line. Instead of the sum of squares, one can use the expected value (an average) of the squared errors as the objective function, V, to determine the vertical intercept, y0, and the slope, s, of the line.
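Minimizing V = E[(y − y0 − s·x)²] by setting its partial derivatives with respect to y0 and s to zero gives the familiar formulas s = Cov(x, y)/Var(x) and y0 = E[y] − s·E[x]. A minimal sketch of this expected-value form of the ordinary fit (the function name is mine, not from the post):

```python
import numpy as np

def ols_fit(x, y):
    """Ordinary (vertical) least squares via expected values:
    minimize V = E[(y - y0 - s*x)^2] over y0 and s."""
    # slope from the covariance/variance ratio
    s = np.mean((x - x.mean()) * (y - y.mean())) / np.var(x)
    # intercept from the condition that the line passes through the means
    y0 = y.mean() - s * x.mean()
    return y0, s
```

On data lying exactly on a line, the fit recovers that line's intercept and slope.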


An alternative is to use transverse errors, those normal to the line, instead of the vertical errors. The following figure enables one to deduce the formula for the square of the transverse deviation from the line.
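The geometry of the figure gives the standard result: projecting the vertical deviation onto the normal of a line with slope s shrinks it by the factor 1/√(1 + s²), so the squared transverse deviation is (y − y0 − s·x)²/(1 + s²). A small sketch of that formula:

```python
import numpy as np

def transverse_sq_dev(x, y, y0, s):
    """Squared perpendicular distance from the point (x, y) to the
    line y = y0 + s*x: the squared vertical deviation divided by
    (1 + s^2), from projecting onto the line's normal direction."""
    return (y - y0 - s * x) ** 2 / (1.0 + s ** 2)
```

For example, the point (0, 1) lies a perpendicular distance 1/√2 from the line y = x, so its squared transverse deviation is 1/2.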



Expected values can also be used for the transverse objective function to obtain formulas for the fit. For the minimum value of the objective function, both partial derivatives are set equal to zero. The partial derivative with respect to the intercept gives a formula for the y-intercept of the line in terms of the slope s. Substituting this expression into the second partial derivative gives a quadratic equation for s.
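Carrying out that substitution, with Vxx = Var(x), Vyy = Var(y), and Vxy = Cov(x, y), the intercept condition is again y0 = E[y] − s·E[x], and the slope condition reduces to the quadratic Vxy·s² + (Vxx − Vyy)·s − Vxy = 0. A sketch of the transverse fit along these lines (the root-selection detail is my own implementation choice, made by evaluating the objective at both roots):

```python
import numpy as np

def tls_fit(x, y):
    """Transverse (perpendicular) least squares.  With y0 eliminated,
    the objective is W(s) = (Vyy - 2*s*Vxy + s^2*Vxx) / (1 + s^2),
    and dW/ds = 0 gives  Vxy*s^2 + (Vxx - Vyy)*s - Vxy = 0."""
    vxx, vyy = np.var(x), np.var(y)
    vxy = np.mean((x - x.mean()) * (y - y.mean()))
    # the two roots of the quadratic in s (assumes vxy != 0)
    d = np.sqrt((vxx - vyy) ** 2 + 4.0 * vxy ** 2)
    roots = [((vyy - vxx) + d) / (2.0 * vxy),
             ((vyy - vxx) - d) / (2.0 * vxy)]
    # keep the root that minimizes (not maximizes) the objective
    W = lambda s: (vyy - 2.0 * s * vxy + s * s * vxx) / (1.0 + s * s)
    s = min(roots, key=W)
    y0 = y.mean() - s * x.mean()
    return y0, s
```

On noise-free data the two roots correspond to the best-fit line and the perpendicular worst-fit direction; the objective test picks out the former.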


To compare the two least squares methods, we can generate random errors for points on a given line as well as the expected values for the data.
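The post does not record the particular line or error magnitude used, so the line y = 1 + 2x, the Gaussian noise level, and the seed below are all assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed, for repeatability

# hypothetical "true" line y = 1 + 2*x with small vertical errors
x = np.linspace(0.0, 1.0, 21)
y_true = 1.0 + 2.0 * x
y = y_true + rng.normal(0.0, 0.1, size=x.size)
```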


Using the formulas for the two fits, one can calculate the intercepts and slopes of the lines of best fit.
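Putting the two sets of formulas side by side, a self-contained comparison might look like the following; again the test line, noise level, and seed are my own stand-ins for the post's unrecorded data:

```python
import numpy as np

def fit_lines(x, y):
    """Return (y0, s) for the ordinary fit and for the transverse fit."""
    vxx, vyy = np.var(x), np.var(y)
    vxy = np.mean((x - x.mean()) * (y - y.mean()))
    # ordinary least squares: s = Vxy / Vxx
    s1 = vxy / vxx
    # transverse least squares: root of Vxy*s^2 + (Vxx - Vyy)*s - Vxy = 0
    d = np.sqrt((vxx - vyy) ** 2 + 4.0 * vxy ** 2)
    roots = [((vyy - vxx) + d) / (2.0 * vxy),
             ((vyy - vxx) - d) / (2.0 * vxy)]
    s2 = min(roots, key=lambda s: (vyy - 2*s*vxy + s*s*vxx) / (1 + s*s))
    return (y.mean() - s1 * x.mean(), s1), (y.mean() - s2 * x.mean(), s2)

# assumed line y = 1 + 2*x with assumed noise level
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 21)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, size=x.size)
fit1, fit2 = fit_lines(x, y)
```

With small errors both fits land near the original line, in line with the observation below that the two fitted lines tend to coincide as the errors shrink.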



The two fitted lines tend to coincide as the magnitude of the errors decreases. For the data above, the relative error of the line coefficients, y0 and s, from the original line is smallest for the transverse fit.


Supplemental (May 19): What happens if one rescales the axes? For ordinary least squares, rescaling the y-axis rescales the coefficients by the same factor. The two axes can be rescaled independently, so they can have different units of measure. With transverse least squares there is a tacit assumption that the axes have the same units, so one cannot rescale them independently. In this case the fit would not be independent of the coordinate system.
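This scale dependence is easy to check numerically: multiplying y by a constant multiplies the ordinary slope by exactly that constant, but not the transverse slope. A sketch (the data are again my assumed test line, and rescaling by 10 is an arbitrary choice):

```python
import numpy as np

def tls_slope(x, y):
    """Transverse least-squares slope from the quadratic
    Vxy*s^2 + (Vxx - Vyy)*s - Vxy = 0, taking the minimizing root."""
    vxx, vyy = np.var(x), np.var(y)
    vxy = np.mean((x - x.mean()) * (y - y.mean()))
    d = np.sqrt((vxx - vyy) ** 2 + 4.0 * vxy ** 2)
    roots = [((vyy - vxx) + d) / (2.0 * vxy),
             ((vyy - vxx) - d) / (2.0 * vxy)]
    return min(roots, key=lambda s: (vyy - 2*s*vxy + s*s*vxx) / (1 + s*s))

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 21)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, size=x.size)

s = tls_slope(x, y)
s_scaled = tls_slope(x, 10.0 * y)  # rescale the y-axis by 10
# for the ordinary fit the slope would scale by exactly 10;
# for the transverse fit it does not
```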

Correction (May 21): The values given for the relative errors of the fits were actually the sums of the squares of the errors. The correct values are 0.0926 (fit1) and 0.0932 (fit2).
