which may be written

$$(M_1 - x) + (M_2 - x) + \cdots + (M_m - x) = 0. \qquad (8)$$

Equation (6) may be written

$$(M_1 - x)\,\frac{d \log \varphi(M_1 - x)}{(M_1 - x)\,d(M_1 - x)} + (M_2 - x)\,\frac{d \log \varphi(M_2 - x)}{(M_2 - x)\,d(M_2 - x)} + \cdots + (M_m - x)\,\frac{d \log \varphi(M_m - x)}{(M_m - x)\,d(M_m - x)} = 0. \qquad (9)$$

Comparing equations (8) and (9), we see that since the quantities $(M_1 - x)$, $(M_2 - x)$, etc., are independent of each other, these equations may be satisfied by placing the coefficients of $(M_1 - x)$, $(M_2 - x)$, etc., in (9) respectively equal to the same constant, $k$. We have therefore

$$\frac{d \log \varphi(M - x)}{(M - x)\,d(M - x)} = k. \qquad (10)$$

Writing $\Delta$ for $(M - x)$ in general, we have

$$d \log \varphi(\Delta) = k\Delta\,d\Delta,$$

and, by integration,

$$\log \varphi(\Delta) = \tfrac{1}{2}k\Delta^2 + \log c,$$

$c$ being the constant of integration, whence

$$\varphi(\Delta) = c\,e^{\frac{1}{2}k\Delta^2}. \qquad (11)$$

From axiom III it appears that as $\Delta$ increases this quantity must diminish, and this requires the exponent of $e$ to be negative. As $\Delta^2$ cannot be negative, it follows that $k$ must be so. Writing therefore $\tfrac{1}{2}k = -h^2$, our equation becomes

$$\varphi(\Delta) = c\,e^{-h^2\Delta^2}. \qquad (12)$$

7. Let us now consider the constant of integration $c$. This may be determined by substituting the value of $\varphi(\Delta)$ in (4), giving us

$$c\int_{-\infty}^{\infty} e^{-h^2\Delta^2}\,d\Delta = 1,$$

a special form of the integral known as the gamma function. For the purpose of integrating the expression, place $h\Delta = t$. As $t$ in this expression is involved only in the quadratic form, we evidently have

$$\int_{-\infty}^{\infty} e^{-t^2}\,dt = \int_{-\infty}^{0} e^{-t^2}\,dt + \int_{0}^{\infty} e^{-t^2}\,dt = 2\int_{0}^{\infty} e^{-t^2}\,dt = 2A$$

(in which we write the integral equal to $A$ for convenience).

In the definite integral $\int_{0}^{\infty} e^{-t^2}\,dt$ the value will be the same if we write another symbol instead of $t$. Therefore

$$A^2 = \int_{0}^{\infty} e^{-t^2}\,dt \int_{0}^{\infty} e^{-v^2}\,dv.$$

In the second member of this equation write $v = tu$, $dv = t\,du$. Then

$$A^2 = \int_{0}^{\infty}\!\int_{0}^{\infty} t\,e^{-t^2(1+u^2)}\,du\,dt = \int_{0}^{\infty} \frac{du}{2(1+u^2)} = \frac{\pi}{4},$$

whence $A = \tfrac{1}{2}\sqrt{\pi}$. Substituting this value in the condition $\frac{2Ac}{h} = 1$, we find $c = \frac{h}{\sqrt{\pi}}$, and our equation becomes

$$\varphi(\Delta) = \frac{h}{\sqrt{\pi}}\,e^{-h^2\Delta^2}. \qquad (13)$$

In this equation the constant $h$ will require further consideration; but if we assign any arbitrary value, as unity, to $h$, we can readily construct the locus of the equation. It will at once appear that the general form will be that shown on page 5.

Condition of Maximum Probability.

8.
Substituting in equation (5) the values of $\varphi(\Delta_1)$, $\varphi(\Delta_2)$, etc., from (13), it becomes

$$P = \left(\frac{h}{\sqrt{\pi}}\right)^m e^{-h^2(\Delta_1^2 + \Delta_2^2 + \cdots + \Delta_m^2)}. \qquad (14)$$

From this equation we see that $P$ will increase in value as the exponent of $e$ diminishes, or $P$ will be a maximum when $\Delta_1^2 + \Delta_2^2 + \cdots + \Delta_m^2$ is a minimum, thus giving us the important principle:

The most probable value of the unknown quantity is that which makes the sum of the squares of the residual errors a minimum.

From this principle comes the name Method of Least Squares.

The Measure of Precision.

9. Let us now consider the constant $h$. Substituting in equation (3) the value of $\varphi(\Delta)$, we have for the probability of an error between the values $\pm a$

$$\frac{h}{\sqrt{\pi}}\int_{-a}^{a} e^{-h^2\Delta^2}\,d\Delta. \qquad (15)$$

If we take another series of observations, we have for the probability of an error between $\pm a'$

$$\frac{h'}{\sqrt{\pi}}\int_{-a'}^{a'} e^{-h'^2\Delta^2}\,d\Delta, \qquad (16)$$

and the condition that these two probabilities be equal will be satisfied by making $ha = h'a'$, or

$$h : h' = a' : a.$$

We see from this equation that $h$ in two different series of observations will have different values, these values being to each other inversely as the errors to be ascribed with equal probability to each series. If, for instance, the errors of the first series are twice as great as those of the second, $h$ will equal $\tfrac{1}{2}h'$. The constant $h$ is therefore the measure of precision of the series of observations; and if its value could be determined from the observations themselves, we should by this means be able to know to what degree of confidence the data were entitled. This determination is possible, at least approximately, but for practical purposes it is more convenient to compare the relative accuracy of different series of observations by means of their respective probable errors, which will now be considered.

The Probable Error.

10. The probable error of any observation of a given series is a quantity of such value that, if the errors committed be arranged according to their magnitude without reference to the algebraic sign, this quantity will occupy the middle place in the series.
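The results of the foregoing articles admit a quick numerical illustration. The sketch below is not part of the original treatise; the observation values, the choice $h = 2$, the quadrature routine, and the sample size are all arbitrary. It checks that the error law $\varphi(\Delta) = (h/\sqrt{\pi})\,e^{-h^2\Delta^2}$ integrates to unity, that the arithmetic mean makes the sum of the squares of the residuals a minimum, and that the middle one of the absolute errors of a long simulated series falls near $0.4769/h$, the known value of the probable error for this law (the law $\varphi$ is the normal law with $\sigma = 1/(h\sqrt{2})$).

```python
import math
import random

# -- 1. The error law phi(D) = (h/sqrt(pi)) * exp(-h^2 D^2) integrates to 1. --
def integrate(f, a, b, n=100000):
    # Composite midpoint rule over [a, b].
    step = (b - a) / n
    return sum(f(a + (i + 0.5) * step) for i in range(n)) * step

h = 2.0                                  # arbitrary measure of precision
phi = lambda d: (h / math.sqrt(math.pi)) * math.exp(-(h * d) ** 2)
total = integrate(phi, -10.0, 10.0)      # tails beyond +/-10 are negligible
print(total)                             # ~ 1.0

# -- 2. The least-squares principle: sum (M_i - x)^2 is smallest at the mean. --
M = [5.12, 5.09, 5.15, 5.11, 5.10]       # hypothetical observations
mean = sum(M) / len(M)
sum_sq = lambda x: sum((m - x) ** 2 for m in M)
best = min((5.0 + k * 0.0001 for k in range(3000)), key=sum_sq)
print(best, mean)                        # both ~ 5.114

# -- 3. The probable error occupies the middle place among the absolute
#       errors of a series following the law phi. --
random.seed(42)
sigma = 1.0 / (h * math.sqrt(2.0))       # phi is the normal law with this sigma
errors = sorted(abs(random.gauss(0.0, sigma)) for _ in range(200001))
median_abs = errors[100000]
print(median_abs, 0.4769 / h)            # the two agree closely
```

The third check anticipates the discussion of the probable error: half of the simulated errors are smaller in magnitude than the middle one, and half are greater, exactly the defining property of the probable error.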
It may therefore be defined as a quantity of such value that the probability of an error greater than it is the same as the probability of one less. When we consider both plus and minus errors, we have from equation (15) the following expression for the probability of an error between $\pm a$, remembering that the probability of an error between $0$ and $+a$ is the same as that between $0$ and $-a$:

$$\frac{2h}{\sqrt{\pi}}\int_{0}^{a} e^{-h^2\Delta^2}\,d\Delta. \qquad (17)$$

The whole number of errors being represented by unity,