will be correct up to the nth term, yet the development must not be continued beyond that term, and the limit will be the symbolical expression for the sum of all the terms after it. In the latter case the original function, when broken into factors, contains one or more of the form $x^{-m}$; and thus becomes infinite when x = 0. Of this kind of failure cosec x and cotan x are instances: see Ex. 3 and 4, Art. 88; for of each of these when expanded $x^{-1}$ is the first term. And therefore their equivalent series cannot be made to correspond with that of Maclaurin. Another similar case is that of log x; because this, and also all its derived-functions, $= \infty$ when x = 0. Also another case is $e^{-x^{-2}}$; because it and all its derived-functions vanish when x = 0; and hence we infer that this function does not admit of development into a series of ascending integral and positive powers of x, which alone is given by Maclaurin's Series. Thus if the function to be expanded by Maclaurin's Theorem is $F(x) = f(x) + e^{-x^{-2}}$, the development of F(x) will be that of f(x) alone; and therefore will not be correct. There may also be many other functions which are not capable of development in the forms of Taylor's and Maclaurin's Series; and therefore the student must be on his guard against attempts at expansion in a form not suited to the function. The above remarks must be considered as general hints rather than as a secure and scientific enumeration of cases of applicability and non-applicability; the latter is beyond our knowledge, and therefore with the former, however incomplete, we must be content. There is also another reason why we have treated thus at length of the cases where Taylor's and Maclaurin's Theorems fail; some foreign and most English writers have attempted to raise the Differential Calculus on them as its basis.
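The peculiar behaviour of $e^{-x^{-2}}$ at the origin may be checked numerically. The following Python sketch (an editorial illustration, not part of the original text) defines the function with its limiting value at x = 0, and shows that both the function and its first difference quotient become vanishingly small as x approaches 0, consistently with every derived-function vanishing there:

```python
import math

def f(x):
    """The function e^(-1/x^2), with f(0) defined as its limiting value 0."""
    if x == 0:
        return 0.0
    return math.exp(-1.0 / x**2)

# Every difference quotient at x = 0 tends to 0, so each derived-function
# of f vanishes there, and the Maclaurin series of f is identically zero --
# yet f itself is not zero for x != 0.
for h in (0.5, 0.2, 0.1):
    print(h, f(h), (f(h) - f(0)) / h)
```

At h = 0.1 the function is already of the order $e^{-100} \approx 10^{-44}$, which is why no series of ascending positive powers of x can represent it.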
As far as they are applicable, reasoning founded on them may be correct; but since they are not universally so, it is objected, and validly objected, that the basis of the Calculus is confined within limits narrower than are necessary. And no criteria are known for determining whether functions can be expanded in their forms or not, before such principles of continuity, as those which we have made fundamental, have been elucidated; on the expansion-principle therefore we are left to grope our way in the dark, being uncertain whether the matter which we are discussing is within its comprehension or not.

Section 2.—On functions of many variables.

137.] In Chapter IV certain relations have been investigated between functions and derived-functions of one variable x, and chiefly when many of the derived-functions vanish for particular values of the variables. Similar theorems are true of functions of two and more variables; but I was unwilling in that Chapter to interrupt the discussion of a difficult question, or in the following Chapter to break in upon the immediate application of it, by introducing functions of a more complex order. Now we must return to the subject, and I shall shew that for functions of two and more variables, theorems are true analogous to those which have been proved for functions of one variable.

We will take at first a function of two independent variables x and y. Let F(x, y) be the function under consideration; and let us suppose the function to be finite and continuous for all values of x and y between $x_0$ and $x_0 + h$, $y_0$ and $y_0 + k$; and let us moreover suppose all and each of the several and successive partial derived-functions up to the (n−1)th to vanish, when $x = x_0$ and $y = y_0$; but the nth not to vanish nor to become infinite at these values: then the following theorem is true, as I shall just now prove,

$$F(x_0+h,\,y_0+k) - F(x_0,\,y_0) = \frac{1}{1\cdot 2\cdots n}\left[\left(\frac{d^n F}{dx^n}\right)h^n + n\left(\frac{d^n F}{dx^{n-1}\,dy}\right)h^{n-1}k + \cdots + n\left(\frac{d^n F}{dx\,dy^{n-1}}\right)h k^{n-1} + \left(\frac{d^n F}{dy^n}\right)k^n\right]_{\substack{x_0+\theta h\\ y_0+\theta k}};$$

the meaning of the notation at the end of the right-hand member being that x and y are to be respectively replaced by $x_0 + \theta h$ and $y_0 + \theta k$.

Let us suppose the finite increments of $x_0$ and $y_0$ to be ht and kt; so that ultimately they become h and k, when t = 1; thus we may consider $F(x_0+ht,\,y_0+kt)$ to be a function of t; so that

$$F(x_0+ht,\,y_0+kt) = f(t); \qquad (28)$$

and therefore

$$F(x_0,\,y_0) = f(0). \qquad (29)$$

Now by (20), Art. 115, if f(t) is finite and continuous for all values of t between t = t and t = 0; and if moreover all the derived-functions of f(t) up to the (n−1)th inclusive vanish when t = 0, but if $f^n(t)$ does not vanish and is not infinite; then

$$f(t) - f(0) = f^n(\theta t)\,\frac{t^n}{1\cdot 2\cdots n}. \qquad (30)$$

To apply this to the present case, I must deduce from (28) the several derived-functions of f(t). To simplify the process, let $x' = x_0 + ht$, $y' = y_0 + kt$; we have

$$f'(t) = \left(\frac{dF}{dx'}\right)h + \left(\frac{dF}{dy'}\right)k;$$

and observing that when t = 0, $x' = x_0$ and $y' = y_0$,

$$f'(0) = \left(\frac{dF}{dx}\right)_0 h + \left(\frac{dF}{dy}\right)_0 k;$$

the notation in the last line indicating that x and y are replaced by $x_0$ and $y_0$ respectively. If therefore $\left(\frac{dF}{dx}\right)_0 = 0$, and $\left(\frac{dF}{dy}\right)_0 = 0$, these circumstances are equivalent to the vanishing of f'(t), when t = 0. Again,

$$f''(t) = \frac{d\,f'(t)}{dt} = \left(\frac{d^2F}{dx'^2}\right)h^2 + 2\left(\frac{d^2F}{dx'\,dy'}\right)hk + \left(\frac{d^2F}{dy'^2}\right)k^2;$$

bearing in mind that t is equicrescent; otherwise (30) would not be true.

$$\therefore\quad f''(0) = \left(\frac{d^2F}{dx^2}\right)_0 h^2 + 2\left(\frac{d^2F}{dx\,dy}\right)_0 hk + \left(\frac{d^2F}{dy^2}\right)_0 k^2;$$

and therefore if all the second partial derived-functions of F(x, y) vanish, when $x = x_0$ and $y = y_0$, these circumstances are equivalent to the vanishing of f''(t), when t = 0. Similarly

$$f^n(t) = \left(\frac{d^nF}{dx'^n}\right)h^n + n\left(\frac{d^nF}{dx'^{n-1}\,dy'}\right)h^{n-1}k + \cdots + n\left(\frac{d^nF}{dx'\,dy'^{n-1}}\right)hk^{n-1} + \left(\frac{d^nF}{dy'^n}\right)k^n; \qquad (36)$$

the notation being used to signify that x and y are replaced by x' and y'. And thus if all the partial derived-functions of F(x, y) of the (n−1)th order vanish when $x = x_0$ and $y = y_0$, these circumstances are equivalent to the vanishing of $f^{n-1}(t)$, when t = 0; and $f^n(t)$ is given by (36).

Suppose now all the partial derived-functions of F(x, y) up to the (n−1)th inclusive to vanish, when $x = x_0$ and $y = y_0$; but that all the nth do not vanish; then substituting in (30) we have

$$F(x_0+ht,\,y_0+kt) - F(x_0,\,y_0) = \frac{t^n}{1\cdot 2\cdot 3\cdots n}\left[\left(\frac{d^nF}{dx^n}\right)h^n + n\left(\frac{d^nF}{dx^{n-1}\,dy}\right)h^{n-1}k + \cdots + n\left(\frac{d^nF}{dx\,dy^{n-1}}\right)hk^{n-1} + \left(\frac{d^nF}{dy^n}\right)k^n\right]_{\substack{x_0+\theta ht\\ y_0+\theta kt}}. \qquad (37)$$

In this equation let t = 1; then

$$F(x_0+h,\,y_0+k) - F(x_0,\,y_0) = \frac{1}{1\cdot 2\cdots n}\left[\left(\frac{d^nF}{dx^n}\right)h^n + n\left(\frac{d^nF}{dx^{n-1}\,dy}\right)h^{n-1}k + \cdots + n\left(\frac{d^nF}{dx\,dy^{n-1}}\right)hk^{n-1} + \left(\frac{d^nF}{dy^n}\right)k^n\right]_{\substack{x_0+\theta h\\ y_0+\theta k}}; \qquad (38)$$

which is the theorem required to be proved; and is analogous to that contained in equation (16), Art. 114.

If the given function is of more variables than two, and if all its several partial derived-functions up to those of the (n−1)th order inclusive vanish for particular values of the variables, a similar theorem is also true. But as the proof is the same as that which has been applied to a function of two variables, it is unnecessary to insert it.

138.] Suppose $x_0$ and $y_0$ to be each zero in (38); then, replacing h and k by x and y, we have

$$F(x,\,y) - F(0,\,0) = \frac{1}{1\cdot 2\cdots n}\left[\left(\frac{d^nF}{dx^n}\right)x^n + n\left(\frac{d^nF}{dx^{n-1}\,dy}\right)x^{n-1}y + \cdots + n\left(\frac{d^nF}{dx\,dy^{n-1}}\right)xy^{n-1} + \left(\frac{d^nF}{dy^n}\right)y^n\right]_{\substack{\theta x\\ \theta y}}; \qquad (39)$$

in which all the derived-functions of F(x, y) up to those of the (n−1)th order inclusive vanish, when x = 0 and y = 0. This theorem is analogous to that of equation (19), Art. 115.

139.] If the ratio to each other of two functions of two or more variables is for particular values of the variables indeterminate, and if a function of two or more variables takes for particular values of the variables one or other of the indeterminate forms given in Sect. 3, Chap. V, we are hereby enabled to evaluate them. We will take functions of two variables only, because the expressions are shorter; and the method required for those of more variables is the same in form.

Let F(x, y) and Φ(x, y) be two functions of x and y which vanish when $x = x_0$ and $y = y_0$; and let us take the most general case, and assume that all the successive partial derived-functions up to those of the (n−1)th order inclusive also vanish for these values of the variables; and that all the nth derived-functions neither vanish nor become infinite; then by (38),

$$F(x_0+h,\,y_0+k) = \frac{1}{1\cdot 2\cdots n}\left[\left(\frac{d^nF}{dx^n}\right)h^n + n\left(\frac{d^nF}{dx^{n-1}\,dy}\right)h^{n-1}k + \cdots + n\left(\frac{d^nF}{dx\,dy^{n-1}}\right)hk^{n-1} + \left(\frac{d^nF}{dy^n}\right)k^n\right]_{\substack{x_0+\theta h\\ y_0+\theta k}}; \qquad (40)$$

$$\Phi(x_0+h,\,y_0+k) = \frac{1}{1\cdot 2\cdots n}\left[\left(\frac{d^n\Phi}{dx^n}\right)h^n + n\left(\frac{d^n\Phi}{dx^{n-1}\,dy}\right)h^{n-1}k + \cdots + n\left(\frac{d^n\Phi}{dx\,dy^{n-1}}\right)hk^{n-1} + \left(\frac{d^n\Phi}{dy^n}\right)k^n\right]_{\substack{x_0+\theta h\\ y_0+\theta k}}. \qquad (41)$$
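As a concrete check of the theorem referred to in (38) (an editorial illustration, with an example function of my own choosing), take $F(x, y) = (x + y)^2$ about the origin: both first partial derived-functions vanish there, while the second partials are the constants $F_{xx} = 2$, $F_{xy} = 2$, $F_{yy} = 2$, so with n = 2 the right-hand member is the same for every θ and the theorem holds exactly:

```python
# Numerical check of the two-variable theorem for F(x, y) = (x + y)^2
# about (x0, y0) = (0, 0).  F(0,0) = 0 and the first partials vanish at
# the origin, so with n = 2 the theorem asserts
#   F(h, k) - F(0, 0) = (1/2)[F_xx h^2 + 2 F_xy h k + F_yy k^2]
# evaluated at (theta*h, theta*k); here the second partials are the
# constants 2, 2, 2, so the bracket is independent of theta.

def F(x, y):
    return (x + y) ** 2

def rhs(h, k):
    # (1/2)[2 h^2 + 2*2 h k + 2 k^2] = (h + k)^2
    return 0.5 * (2 * h**2 + 2 * 2 * h * k + 2 * k**2)

for h, k in [(0.3, 0.7), (1.0, -0.4), (2.5, 1.5)]:
    print(F(h, k) - F(0, 0), rhs(h, k))
```

The two columns printed agree, as the constancy of the second partials requires.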
Let us suppose h and k to become infinitesimal; to become, that is, dx and dy; then if we neglect those terms which are infinitesimal and are added to finite quantities, and divide (40) by (41), we have

$$\frac{F(x_0,\,y_0)}{\Phi(x_0,\,y_0)} = \frac{\left(\dfrac{d^nF}{dx^n}\right)_0 dx^n + n\left(\dfrac{d^nF}{dx^{n-1}\,dy}\right)_0 dx^{n-1}dy + \cdots + n\left(\dfrac{d^nF}{dx\,dy^{n-1}}\right)_0 dx\,dy^{n-1} + \left(\dfrac{d^nF}{dy^n}\right)_0 dy^n}{\left(\dfrac{d^n\Phi}{dx^n}\right)_0 dx^n + n\left(\dfrac{d^n\Phi}{dx^{n-1}\,dy}\right)_0 dx^{n-1}dy + \cdots + n\left(\dfrac{d^n\Phi}{dx\,dy^{n-1}}\right)_0 dx\,dy^{n-1} + \left(\dfrac{d^n\Phi}{dy^n}\right)_0 dy^n}. \qquad (42)$$

To take particular cases, let us suppose the functions to vanish when $x = x_0$ and $y = y_0$, and their first derived-functions not to vanish; then

Price, Vol. 1.
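This mode of evaluating an indeterminate ratio by nth derived-functions may be illustrated numerically (the example functions are my own, not the author's). Take $F(x, y) = 1 - \cos(x + y)$ and $\Phi(x, y) = (x + y)^2$: both vanish at the origin together with their first partial derived-functions, while the second partials there are $1, 1, 1$ for F and $2, 2, 2$ for Φ, so the rule gives the value $\tfrac{1}{2}$ whatever the ratio dy : dx may be:

```python
import math

# F and Phi both vanish at (0, 0), as do their first partial
# derived-functions; the second partials at the origin are
#   F:  F_xx = F_xy = F_yy = 1,    Phi:  P_xx = P_xy = P_yy = 2,
# so the ratio of the n = 2 brackets is
#   (dx^2 + 2 dx dy + dy^2) / (2 dx^2 + 4 dx dy + 2 dy^2) = 1/2
# independently of the direction of approach.

def F(x, y):
    return 1.0 - math.cos(x + y)

def Phi(x, y):
    return (x + y) ** 2

# Approach (0, 0) along two different directions; both ratios tend to 1/2.
for t in (0.1, 0.01, 0.001):
    print(F(t, 2 * t) / Phi(t, 2 * t), F(t, -0.5 * t) / Phi(t, -0.5 * t))
```

The independence of the limit from the direction of approach is exactly what the cancellation of the powers of dx and dy in the quotient predicts for this pair of functions.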