Wednesday, June 30, 2021

More on the Principle of Least Action

   It appears that the way the variation is done affects the minimum of the action integral. If one arranges the variation so that it is normal to the tangent of the curve, the number of terms contributing to the action integral is reduced to two.




Here's an exaggerated variation to make it more visible.



The changes in the segments contributing to the variation of the action integral are fairly uniform and have positive signs.



The resulting action integral now has a minimum when the magnitude of the variation is zero, and we can rightfully refer to it as Least Action.
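The comparison described in this post can be sketched numerically. The following Python/NumPy snippet (all launch values and sample counts are assumptions of mine, not the blog's numbers) computes the change in the abbreviated action ∫v ds for a projectile in a uniform gravitational field, with a sine variation applied once vertically and once along the normal to the curve; both changes vanish at zero amplitude.

```python
import numpy as np

# assumed projectile parameters (not the blog's values)
g, b = 9.8, 200.0
v0, th = 60.0, np.radians(45)
x = np.linspace(0.0, b, 801)
vx = v0 * np.cos(th)
y = np.tan(th) * x - g * x**2 / (2.0 * vx**2)   # parabolic trajectory

# unit normal to the curve at each sample point
p = np.gradient(y, x)
norm = np.sqrt(1.0 + p**2)
nx, ny = -p / norm, 1.0 / norm

def action(xp, yp):
    """Abbreviated action ∫ v ds, with the speed v from energy conservation."""
    v = np.sqrt(v0**2 - 2.0 * g * yp)
    ds = np.sqrt(np.diff(xp)**2 + np.diff(yp)**2)
    return np.sum(0.5 * (v[:-1] + v[1:]) * ds)

S0 = action(x, y)

def dS_vertical(amp, k=3):
    """Action change for a vertical sine variation δy with k cycles on (0, b)."""
    f = amp * np.sin(2.0 * np.pi * k * x / b)
    return action(x, y + f) - S0

def dS_normal(amp, k=3):
    """Action change for the same variation applied along the normal."""
    f = amp * np.sin(2.0 * np.pi * k * x / b)
    return action(x + f * nx, y + f * ny) - S0

print(dS_vertical(0.0), dS_normal(0.0))   # both 0.0 at zero amplitude
```

This is only a sketch of the idea; the original computation may have parameterized the variation differently.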



Sunday, June 27, 2021

Formulas for v·dr

The formulas for v·dr, modified to contain the variations, are,


With t being the unit tangent vector to the trajectory, this gives,


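The displayed formulas here were images that did not survive. As a sketch of what the expansion presumably looked like (the standard first-order variation; the exact form in the original may differ):

```latex
% assumed form of the displays above, not the original equations
\mathbf{v}\cdot d\mathbf{r} = v\,\hat{\mathbf{t}}\cdot d\mathbf{r} = v\,ds,
\qquad
\delta(\mathbf{v}\cdot d\mathbf{r})
  = \delta\mathbf{v}\cdot d\mathbf{r} + \mathbf{v}\cdot d(\delta\mathbf{r}).
```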


Saturday, June 26, 2021

The Principle of Least Action

 Maupertuis loosely introduced the Principle of Least Action in 1741. A few years later, in 1744 in Methodus inveniendi, Euler associated Action with an integral that is a minimum for the path a mass actually follows: ∫ds√v = min, with √v corresponding to the speed of the mass. He wrote,



I tried to check whether the integral was actually a minimum for the mass moving in a uniform gravitational field as in fig. 26 (see the translation of Methodus inveniendi in Wikisource),



but the minimum didn't correspond to the motion of the particle even when the integral was replaced by δ∫v·dr.

The numerical integration was fairly straightforward. We start by defining some constants.



The variation δy was assumed to be a simple sine function with k cycles on the interval x ∈ (0, b), with b = 200.



p = dy/dx was determined by fitting the values of x, x², and y, which enabled the components of the velocity to be calculated.
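This slope-determination step can be sketched as follows (Python/NumPy; the trajectory values are hypothetical, and a local three-point quadratic fit stands in for whatever fitting window was actually used):

```python
import numpy as np

# hypothetical sampled trajectory: y = x - x**2/400, a parabola
x = np.linspace(0.0, 200.0, 21)
y = x - x**2 / 400.0

# slope p = dy/dx at each sample point from a local quadratic fit
p = np.empty_like(x)
for i in range(len(x)):
    j = int(np.clip(i, 1, len(x) - 2))          # keep the 3-point window inside the data
    c = np.polyfit(x[j-1:j+2], y[j-1:j+2], 2)   # fit y ≈ c0·x² + c1·x + c2
    p[i] = 2.0 * c[0] * x[i] + c[1]             # derivative of the local quadratic

print(p[:3])
```

For an exactly parabolic trajectory the three-point quadratic fit recovers the slope exactly, which makes it easy to verify.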



We were then able to calculate δv·dr for each point of the trajectory of the mass. To numerically integrate this over each Δx, a three-point fit of δv·dr was evaluated and used.
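A three-point fit over each pair of intervals, integrated exactly, amounts to a Simpson-style rule. A minimal sketch (assuming uniformly spaced samples and an even number of intervals; the original integration may have differed in detail):

```python
import numpy as np

def simpson(f, x):
    """Composite Simpson's rule: quadratic (three-point) fits integrated exactly.
    Assumes uniform spacing and an even number of intervals."""
    h = x[1] - x[0]
    return h / 3.0 * (f[0] + f[-1]
                      + 4.0 * np.sum(f[1:-1:2])    # odd-index (midpoint) samples
                      + 2.0 * np.sum(f[2:-2:2]))   # even-index interior samples

x = np.linspace(0.0, np.pi, 201)
print(simpson(np.sin(x), x))   # ≈ 2, the exact value of ∫₀^π sin x dx
```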



An exaggerated variation of δy is shown in the following figure.



When δ∫v·dr is computed for small values of the amplitude of the variation, the minimum is slightly offset from zero.







Saturday, May 22, 2021

LS Fit Using Orthogonal Polynomials over the Data Interval

   One can make the vectors or functions used to compute the fit coefficients more symmetrical if one defines a set of orthogonal polynomials on the interval covered by the data. The data for the 9 sets of measurements covered the interval [0,2]. We can start by setting p₀ = 1 and letting p₁ = ax + b. When we integrate the product of the two polynomials over the interval [0,2] and set it equal to zero, we are left with one undetermined constant, so to be more definite we can choose the coefficients to be small integers. We then get p₁ = x − 1. Setting p₂ = ax² + bx + c and setting the integrals of the products of p₂ with the other two polynomials equal to zero, we are again left with one undetermined coefficient, which we can again set equal to small integers. We get

p₂ = 3x² − 6x + 2.
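The orthogonality of p₀, p₁, and p₂ over [0,2] is easy to check numerically (a simple trapezoid rule on a fine grid stands in for exact integration here):

```python
import numpy as np

x = np.linspace(0.0, 2.0, 20001)
p0 = np.ones_like(x)
p1 = x - 1.0
p2 = 3.0 * x**2 - 6.0 * x + 2.0

def inner(f, g):
    """Trapezoid-rule approximation of the inner product ∫₀² f(x) g(x) dx."""
    h = x[1] - x[0]
    fg = f * g
    return h * (np.sum(fg) - 0.5 * (fg[0] + fg[-1]))

# all three cross products vanish, confirming orthogonality on [0,2]
print(inner(p0, p1), inner(p0, p2), inner(p1, p2))
```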

  Using the same formula for λ we get the following results.



The λ functions look more symmetrical but the accuracy of the fit doesn't seem to be improved.


Tuesday, May 18, 2021

Validity of the Formula for the Uncertainty in the Fit Coefficients

   My derivation of the uncertainty in the fit coefficients assumed that the expected value ⟨δyⱼδyₖ⟩ = 0 for k ≠ j.



  It is not obvious that this is so, but we can demonstrate it in the following manner. One starts by creating an array of random numbers with mean μ = 0 and standard deviation σ = 0.3. In this case the array contained 30 random numbers.



  Next we multiply pairs of numbers in the array with each other. Since there are 30 choices for each factor, we end up with a 30×30 grid of products.



  We can now compare the sum of all the products with the sum of the squared terms, the diagonal entries of the 30×30 grid. This process was repeated 100 times to get the mean values and the variations.



  The two averages for the 100 trials are approximately equal, which suggests that the expected value of the product of two uncorrelated random numbers is negligible. Only the sum of the squared terms appears to contribute to the variations of the aᵢ, and we conclude that the two formulas are approximately equivalent. The lower values for the variation of the simpler sum suggest that it is the better estimate. One would expect these results to hold for a large number of trials.
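The demonstration above is easy to reproduce (Python/NumPy; the seed is an arbitrary choice of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 0.3, 30

# mean off-diagonal product and mean diagonal (squared) term per trial
off, diag = [], []
for _ in range(100):
    dy = rng.normal(0.0, sigma, n)
    prod = np.outer(dy, dy)              # the 30×30 grid of products
    mask = ~np.eye(n, dtype=bool)        # select the off-diagonal entries
    off.append(prod[mask].mean())
    diag.append(np.diag(prod).mean())

print(np.mean(off))    # ≈ 0: cross terms average away
print(np.mean(diag))   # ≈ sigma**2: only the squares contribute
```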


Supplemental (May 18): Perhaps a simpler way of demonstrating that the expected value of the product of two normally distributed random numbers is negligible is to compute a large number of products and take their average.



  The bottom row shows the averages of 200 products repeated 50 times, along with the average and standard deviation of the 50 averages.
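That simpler demonstration might look like this (Python/NumPy sketch: 200 products averaged, repeated 50 times as in the table, with an arbitrary seed):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 0.3

# average of 200 products of independent normal errors, repeated 50 times
avgs = [(rng.normal(0.0, sigma, 200) * rng.normal(0.0, sigma, 200)).mean()
        for _ in range(50)]

print(np.mean(avgs), np.std(avgs))   # mean ≈ 0
```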


Supplemental (May 19): One can show that the expected value for the product of two independent normally distributed random errors is,
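The display that followed here did not survive; presumably it showed the factorization of the expectation for independent errors. A sketch in LaTeX (assumed form):

```latex
% assumed reconstruction: independence factorizes the double integral,
% and each factor vanishes by symmetry of the Gaussian
\langle \delta y_j\,\delta y_k \rangle
  = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty}
      u\,w\;\frac{e^{-u^2/2\sigma^2}}{\sqrt{2\pi}\,\sigma}\,
      \frac{e^{-w^2/2\sigma^2}}{\sqrt{2\pi}\,\sigma}\;du\,dw
  = \langle \delta y_j \rangle \langle \delta y_k \rangle = 0,
  \qquad j \ne k .
```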




Thursday, May 13, 2021

Determining the Uncertainty in the Coefficients

   One might wonder what the uncertainty in the values of the coefficients in the last blog post might have been. Repeated "experiments" show there is some variation present in the results. Can we get a better measure of it?

  The ith coefficient is equal to the scalar product of a vector, λᵢ, and the vector of y values. For the polynomial fit,







  Since only the squared terms of (λᵢᵀδy)² contribute to σₐᵢ², the average of each δyₖ being approximately zero and each δyₖ² approximately σ², one can show,


  So for the 9 experiments we then get,




  Note that the errors of the aᵢ are bounded by 3σ. With this information we can properly design an experiment.
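The relation σₐᵢ = |λᵢ|σ can be checked against a Monte Carlo repetition of the fit. A sketch (Python/NumPy; the quadratic model, noise level, grid, and trial count are my assumptions, not the blog's data):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.3
x = np.linspace(0.0, 2.0, 21)

X = np.vander(x, 3, increasing=True)    # design matrix: columns 1, x, x²
Lam = np.linalg.inv(X.T @ X) @ X.T      # rows are the λᵢ vectors, aᵢ = λᵢ·y
y_true = 1.0 + 2.0 * x - 0.5 * x**2     # hypothetical true model

# predicted coefficient uncertainties: σ_ai = |λᵢ| σ
predicted = sigma * np.linalg.norm(Lam, axis=1)

# observed scatter of the fit coefficients over many noisy "experiments"
fits = np.array([Lam @ (y_true + rng.normal(0.0, sigma, x.size))
                 for _ in range(5000)])
observed = fits.std(axis=0)

print(predicted)
print(observed)   # close to the predicted values
```

Since the noise terms are independent, Cov(a) = σ²ΛΛᵀ exactly, so the observed scatter matches |λᵢ|σ up to Monte Carlo error.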


Supplemental (May 14): I failed to mention that the σy used for the y averages was σ/√9, with σ being the value for a single measurement of y, so the values in the table above are σₐ = |λ|σ/3.

Tuesday, May 11, 2021

Combining Multiple Experiments

   If one repeats the same experiment a number of times the errors become more uniformly distributed.



A plot of the data shows a more uniform spread.



We would expect the error of the average value of y for each x value to be lower. So, instead of fitting the y values of the individual experiments, we can fit the average value of y.



  The normal equation table is computed as before from which one can obtain the fit coefficients.



It is seen that the result is an improved fit. If σ is the standard deviation for an individual experiment one would expect the standard deviation for the average of n experiments to be about σ/√n.
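The σ/√n behavior is easy to verify numerically (Python/NumPy; the seed, noise level, and repetition counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, n = 0.3, 9   # single-measurement σ and number of repeated experiments

# scatter of the n-experiment average, estimated over many repetitions
means = rng.normal(0.0, sigma, (20000, n)).mean(axis=1)
print(means.std())   # ≈ sigma / sqrt(n) = 0.1
```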