Thus, if MP is the ordinate corresponding to the nth point of partition of OA, MP is proportional to the nth term of the developed binomial, and expresses the probability that the event will be compounded of A repeated m − n + 1 times, and of B repeated n − 1 times; that is, MP corresponds to the probability that the number of repetitions of the event A is to the number of repetitions of the event B in the ratio of AM to MO. This curve is called the curve of possibility; and an examination of the several terms of the binomial will at once shew the general course of it. Let the origin be at the foot of the ordinate which corresponds to the first term; then, as the values of the terms of the binomial continually increase up to a maximum, so will the ordinates of this curve increase, and ultimately attain to a maximum; and afterwards they will decrease as the terms of the binomial series decrease, until the curve comes to the point whose abscissa is OA, and which is nearest to the x-axis, because it corresponds to the last term of the series. If p = q, the curve will be symmetrical on the two sides of its greatest ordinate; but such will not be the case if p and q are not equal.

268.] Now a most important application of the preceding theory is that to the combination of errors of observation. All observations of physical facts, whether made by instruments or otherwise, are subject to certain elementary errors, of which the number is indeterminate, and the causes are unknown; all are supposed to be independent of each other, and to be of the same absolute magnitude, but to be either positive or negative. The true error in each particular case is the algebraical sum of these elementary errors, and the probable true error is the object of our inquiry.
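The general course of the curve of possibility described in the preceding article may be exhibited numerically. The following Python sketch is illustrative only: the function name and the particular values of m and p are my own choices. It computes the ordinates C(m, n) p^(m−n) q^n of the developed binomial, and shews that when p = q the greatest ordinate is the middle one and the curve is symmetrical about it, while for p ≠ q the symmetry is lost.

```python
from math import comb

# Ordinates of the curve of possibility: the (n+1)th ordinate is the
# (n+1)th term of the developed binomial (p + q)^m, viz. C(m,n) p^(m-n) q^n.
# The function name and the sample values of m, p are assumptions for
# illustration, not part of the original text.
def possibility_ordinates(m, p):
    q = 1 - p
    return [comb(m, n) * p ** (m - n) * q ** n for n in range(m + 1)]

sym = possibility_ordinates(10, 0.5)       # p = q: the symmetrical case
print(sym.index(max(sym)))                 # greatest ordinate is the middle one
print(sym == sym[::-1])                    # symmetrical about it

skew = possibility_ordinates(10, 0.3)      # p != q
print(skew == skew[::-1])                  # no longer symmetrical
```

The ordinates sum to unity, as they must, being the complete set of probabilities of the compound event.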
In applying to this case the theory of the preceding article, I shall take the existence of a positive elementary error to be the event A, and that of a negative elementary error to be the event B; so that the number of cases which produce A and B respectively is the same; consequently a = b, and p = q = 1/2. Let each elementary error be denoted by Δx; and let 2m = the number of them, each of which may be either positive or negative. Then the several terms of the expansion of (1/2 + 1/2)^{2m} give the probabilities of the combinations of the repetitions of the positive elementary errors with those of the negative elementary errors: as the two terms of the binomial are equal, the middle term of the development, which is the (m+1)th term, is the largest, and terms equidistant from it are equal. Thus the combination of an equal number of positive and negative elementary errors is the most probable; and the probability of the combination of 2m − n positive with n negative errors is equal to that of the combination of n positive errors with 2m − n negative errors. Hence the most probable event is that in which there is a compensation of errors, and consequently no resultant error; and the cases that are most likely to occur after this are those in which the errors, whether positive or negative, are small.

Now in this case the curve of possibility is symmetrical relatively to its greatest ordinate; and this corresponds to the middle term of the series. Let the point where this greatest ordinate intersects the axis of x be the origin; and along the x-axis in both directions take a series of lengths, each of which = Δx, and thus corresponds to an elementary error; and at each of the points on the x-axis thus determined, draw ordinates proportional to the corresponding terms of the expanded binomial. Let y_0, y_1, y_2, ... be the ordinates thus drawn; then

    y_0 = 2^{-2m} \frac{2m(2m-1)\cdots(m+2)(m+1)}{1\cdot 2\cdot 3\cdots(m-1)\,m},        (9)

and generally

    y_k = 2^{-2m} \frac{2m(2m-1)\cdots(m+k+1)}{1\cdot 2\cdot 3\cdots(m-k)};

whence, dividing the value of y_{k+1} by that of y_k,

    \frac{y_{k+1}}{y_k} = \frac{1\cdot 2\cdot 3\cdots(m-k)}{1\cdot 2\cdot 3\cdots(m-k-1)\,(m+k+1)} = \frac{m-k}{m+k+1}.        (13)

Let us suppose the number of causes which produce these elementary errors, and consequently the number of errors, to be infinite, so that m = ∞; and let us suppose the error due to each one to be infinitesimal, so that Δx becomes dx; then the curve of possibility becomes a continuous curve; and if we take y_k to be a general value of the ordinate, y say, then y_{k+1} = y + dy; also k dx = x; so that, multiplying the numerator and denominator of the second member of (13) by dx, (13) becomes

    \frac{y+dy}{y} = \frac{m\,dx - x}{m\,dx + x + dx};  whence  \frac{dy}{y} = -\frac{(2x+dx)\,dx}{m\,dx^2 + x\,dx + dx^2}.        (14)

PRICE, VOL. II.

Now with reference to the terms in the numerator and denominator of this last fraction which must be omitted, it will be observed that m dx represents the whole length of the x-axis, inasmuch as an infinite number of causes allows it to be extended to any distance; it is indeed that distance beyond which no error can be supposed to reach; and this distance is evidently infinite, inasmuch as we know of no limit to the number of errors; consequently m dx will be in the general case infinitely greater than x; and thus, if

    m\,dx^2 = \frac{1}{h^2},        (15)

(14) becomes

    \frac{dy}{y} = -2h^2 x\,dx;        (16)

    ∴  \log\frac{y}{y_0} = -h^2 x^2,

since y = y_0, as given in (9), when x = 0; hence the equation of the curve of possibility in this case is

    y = y_0\,e^{-h^2 x^2};        (17)

and this equation gives the relation between the possibility of an error and the magnitude of that error; that is, if the abscissa represents an error, the ratio of the corresponding ordinate to the greatest ordinate represents the possibility of that error. The curve whose equation is (17) cuts the axis of y at a distance y_0 from the origin; the tangent at the point of intersection is parallel to the axis of x, and y_0 is the maximum ordinate. The axis of x is an asymptote, the curve always lying on the positive side of it; and there are points of inflexion when x = ±1/(h\sqrt{2}). The curve is also evidently symmetrical relatively to the y-axis.
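The passage to the limit may be verified numerically. In the Python sketch below, the values of m and h and the sampled values of k are arbitrary choices of mine: it checks the ratio (13) of consecutive ordinates, and then compares the exact ordinates 2^(−2m) C(2m, m+k) with y_0 e^(−h²x²), taking dx = 1/(h√m) so that m dx² = 1/h².

```python
from math import comb, exp, sqrt

# Check of (13): y_{k+1}/y_k = (m-k)/(m+k+1) for the ordinates
# y_k = 2^(-2m) C(2m, m+k).  m, h and the sampled k are arbitrary choices.
m = 400
y = lambda k: comb(2 * m, m + k) / 4 ** m
assert abs(y(6) / y(5) - (m - 5) / (m + 6)) < 1e-12

# Check of the limit (17): with m dx^2 = 1/h^2, i.e. dx = 1/(h*sqrt(m)),
# the ordinates approach y0 * exp(-h^2 x^2), where x = k dx.
h = 1.0
dx = 1 / (h * sqrt(m))
y0 = y(0)
for k in (0, 5, 10, 20):
    x = k * dx
    ratio = y(k) / (y0 * exp(-h * h * x * x))
    assert abs(ratio - 1) < 0.01        # close to 1, and closer as m grows
print("checks passed")
```

The agreement improves as m is increased, in accordance with the supposition m = ∞ of the text.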
Since the ordinates to this curve are proportional to the possibilities of the errors which correspond to the several abscissæ, a variation of y_0 will change all the ordinates in the same ratio, and consequently will produce no alteration in the relative values of the ordinates. Any variation of h however will produce an important change in the curve; for the greater h is, the more rapidly does the curve approach its asymptote, and the less do the ordinates become for a given abscissa, and consequently the less is the possibility, or the probability, of a given error. Thus h is a measure of the precision of the observations.*

269.] If the curve of possibility is also the curve of probability, then the sum of all the probabilities = 1; and as the probabilities are measured by the several ordinates, it is necessary that the sum of all these ordinates, that is, the area contained between the curve and the x-axis, should be equal to unity. Hence

    1 = \int_{-\infty}^{\infty} y_0\,e^{-h^2x^2}\,dx = \frac{y_0\,\pi^{\frac{1}{2}}}{h};

whence y_0 = h/\pi^{\frac{1}{2}}, and

    y = \frac{h}{\pi^{\frac{1}{2}}}\,e^{-h^2x^2};        (18)

so that this last equation is that of the curve of probability.

270.] We can also hereby determine the probability of a given error of observation, say x; for if p = the probability required, it is equal to the ratio of its particular possibility to the sum of all the possibilities. Hence

    p = y\,dx,

if y is the ordinate of the curve of probability given in (18); so that the probability of an error x is equal to the area-element y dx of the curve of probability. Hence, if p is the probability of an error which is not greater than r,

    p = \frac{h}{\pi^{\frac{1}{2}}} \int_{-r}^{r} e^{-h^2x^2}\,dx.        (20)

If we compare two observations of different precisions h and h′, and take in each the error whose possibility is the same in both cases, that is, for which the ratio y/y_0 is the same in both cases, then h^2 x^2 = h'^2 x'^2; and consequently

    \frac{x}{x'} = \frac{h'}{h};

that is, the errors vary inversely as the precisions.

* On the subject of the measure of precision (mensura præcisionis), as also on other properties of the preceding function, see Gauss, Theoria Motus Corporum Cœlestium, art. 178.
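These results admit of an easy numerical verification. In the Python sketch below, the values given to h and r are arbitrary choices for illustration: the whole area under the curve of probability (18) is computed by a crude sum and found to be unity, and the probability (20) of an error not greater than r is evaluated through the standard error function, since on putting t = hx the integral (20) reduces to (2/√π) ∫₀^{hr} e^(−t²) dt = erf(hr).

```python
from math import erf, exp, sqrt, pi

h, r = 2.0, 0.5     # precision and error bound: arbitrary sample values

# Area under the curve of probability (18), by a crude Riemann sum:
dx, area, x = 1e-4, 0.0, -5.0
while x < 5.0:
    area += (h / sqrt(pi)) * exp(-(h * x) ** 2) * dx
    x += dx
print(round(area, 3))         # unity, as required by art. 269

# Probability (20) of an error not greater than r, via the error function:
p = erf(h * r)
print(round(p, 4))            # erf(1.0) = 0.8427
```

Doubling h halves the error of equal probability, in accordance with x/x′ = h′/h.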
271.] Other very important problems in the theory of probabilities are the determinations of the probability of an event, and of a precedent cause of that event, by means of certain events which have been observed. In this case the cause and the probability of its action are supposed to be unknown, so that the problem is the determination of its probability from a given number of observed events.

When the number of possible causes or of hypotheses is finite, we have the following theorem. Let h_1, h_2, ... h_n be the probabilities of the n possible causes, of which let the probabilities as shewn by the observed events be respectively p_1, p_2, ... p_n; then, as the sum of the probabilities of the several possible causes = 1,

    h_1 = \frac{p_1}{p_1 + p_2 + \cdots + p_n},  h_2 = \frac{p_2}{p_1 + p_2 + \cdots + p_n},  \ldots  h_n = \frac{p_n}{p_1 + p_2 + \cdots + p_n};        (21)

and thus the probability of each possible cause is assigned by means of the observed events; h and p are called respectively the a priori and a posteriori probabilities of a possible cause. Now when this theory is applied to the probabilities of physical facts, the number of hypothetical causes which may be assigned as the fore-runners of these facts is infinite, and the probability of each cause may have any value between the limits 0 and 1; thus, in this case, the denominator of (21) is the sum of a continuous series and becomes a definite integral, and the integral calculus is required for the investigation of its properties. Suppose then we have two contradictory events A and B, of which A has already occurred m times, and B n times; and of the producing causes of which we know nothing beyond that which these facts supply; then, if x = the probability of A, 1 − x = the probability of B; and x may have all values ranging from 0 to 1.
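Returning for a moment to the finite case, equation (21) may be illustrated by a short Python sketch; the particular numbers given to the p's are mere assumptions for the sake of example:

```python
# Illustration of (21): the probability h_i of the ith possible cause, as
# assigned by the observed events, is p_i divided by the sum of all the p's.
# The particular p values below are assumed for illustration only.
p = [0.1, 0.3, 0.4]                     # probabilities shewn by the events
total = sum(p)
h = [p_i / total for p_i in p]          # probabilities of the several causes
print([round(h_i, 4) for h_i in h])     # [0.125, 0.375, 0.5]
print(abs(sum(h) - 1) < 1e-12)          # the h's sum to unity, as required
```

The division by the sum is precisely what, in the continuous case of the text, becomes division by a definite integral.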
Then the a posteriori probability of an event compounded of A repeated m times and of B repeated n times is represented by an expression of the form k x^m (1 − x)^n; and the probabilities of all the possible producing causes will be given by this formula when x varies from 0 to 1; so that, as we have hereby a continuous varia-