NATURE OF THE CALCULUS OF FINITE DIFFERENCES.
1. THE Calculus of Finite Differences may be strictly defined as the science which is occupied about the ratios of the simultaneous increments of quantities mutually dependent. The Differential Calculus is occupied about the limits to which such ratios approach as the increments are indefinitely diminished.
In the latter branch of analysis if we represent the independent variable by $x$, any dependent variable considered as a function of $x$ is represented primarily indeed by $\phi(x)$, but, when the rules of differentiation founded on its functional character are established, by a single letter, as $u$. In the notation of the Calculus of Finite Differences these modes of expression seem to be in some measure blended. The dependent function of $x$ is represented by $u_x$, the suffix taking the place of the symbol which in the former mode of notation is enclosed in brackets. Thus, if $u_x = \phi(x)$, then

$$u_{x+1} = \phi(x+1), \qquad u_{\sin x} = \phi(\sin x),$$

and so on.
But this mode of expression rests only on a convention, and as it was adopted for convenience, so when convenience demands it is laid aside.
The step of transition from a function of $x$ to its increment, and still further to the ratio which that increment bears to the increment of $x$, may be contemplated apart from its subject, and it is often important that it should be so contemplated, as an operation governed by laws. Let then $\Delta$, prefixed to the expression of any function of $x$, denote the operation of taking the increment of that function corresponding to a given constant increment $\Delta x$ of the variable $x$. Then, representing as above the proposed function of $x$ by $u_x$,

$$\Delta u_x = u_{x+\Delta x} - u_x.$$
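As a modern numerical illustration (not part of the original treatise; the choice of $u_x = x^2$ and the values of the increment are arbitrary), the increment $\Delta u_x$ may be computed directly as the actual magnitude $u_{x+\Delta x} - u_x$:

```python
def increment(u, x, dx):
    """Delta u_x: the increment of u corresponding to the increment dx of x."""
    return u(x + dx) - u(x)

# Illustrative function: u_x = x**2
u = lambda x: x ** 2

print(increment(u, 3, 2))  # u(5) - u(3) = 25 - 9 = 16
print(increment(u, 3, 1))  # u(4) - u(3) = 16 - 9 = 7
```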
Here then we might say that as $\dfrac{d}{dx}$ is the fundamental operation of the Differential Calculus, so $\dfrac{\Delta}{\Delta x}$ is the fundamental operation of the Calculus of Finite Differences.

But there is a difference between the two cases which ought to be noted. In the Differential Calculus $\dfrac{du}{dx}$ is not a true fraction, nor have $du$ and $dx$ any distinct meaning as symbols of quantity. The fractional form is adopted to express the limit to which a true fraction approaches. Hence $\dfrac{d}{dx}$, and not $d$, there represents a real operation. But in the Calculus of Finite Differences $\dfrac{\Delta u_x}{\Delta x}$ is a true fraction. Its numerator $\Delta u_x$ stands for an actual magnitude. Hence $\Delta$ might itself be taken as the fundamental operation of this Calculus, always supposing the actual value of $\Delta x$ to be given; and the Calculus of Finite Differences might, in its symbolical character, be defined either as the science of the laws of the operation $\Delta$, the value of $\Delta x$ being supposed given, or as the science of the laws of the operation $\dfrac{\Delta}{\Delta x}$. In consequence of the fundamental difference above noted between the Differential Calculus and the Calculus of Finite Differences, the term Finite ceases to be necessary as a mark of distinction. The former is a calculus of limits, not of differences.
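The character of $\Delta$ as "an operation governed by laws" can be made concrete in modern terms: it maps a function to a function, it distributes over sums, and it may be applied repeatedly. The following Python sketch is an illustration added here, with $u_x = x^2$ and $v_x = 3x$ as arbitrary choices:

```python
def Delta(u, dx=1):
    """The operation Delta: maps the function u to the function x -> u(x+dx) - u(x)."""
    return lambda x: u(x + dx) - u(x)

u = lambda x: x ** 2
v = lambda x: 3 * x

# Delta obeys the distributive law: Delta(u + v) = Delta(u) + Delta(v)
w = lambda x: u(x) + v(x)
assert Delta(w)(5) == Delta(u)(5) + Delta(v)(5)

# Repeated application gives differences of differences:
# Delta x^2 = 2x + 1, hence Delta^2 x^2 = 2 when Delta x = 1.
print(Delta(Delta(u))(5))  # 2
```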
2. Though $\Delta x$ admits of any constant value, the value usually given to it is unity. There are two reasons for this.
First. The Calculus of Finite Differences has for its chief subject of application the terms of series. Now the law of a series, however expressed, has for its ultimate object the determination of the values of the successive terms as dependent upon their numerical order and position. Explicitly or implicitly, each term is a function of the integer which expresses its position in the series. And thus, to revert to language familiar in the Differential Calculus, the independent variable admits only of integral values whose common difference is unity. For instance, in the series of terms
$$1^2,\ 2^2,\ 3^2,\ 4^2,\ \ldots$$ the general or $x^{\text{th}}$ term is $x^2$. It is an explicit function of $x$, but the values of $x$ are the series of natural numbers, and $\Delta x = 1$.
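A quick numerical check of this example (a modern illustration, not in the original): taking the terms of the series of squares and differencing with $\Delta x = 1$ recovers $\Delta x^2 = 2x + 1$, the succession of odd numbers:

```python
# Terms of the series u_x = x**2 for x = 1, 2, 3, ...
terms = [x ** 2 for x in range(1, 7)]   # [1, 4, 9, 16, 25, 36]

# First differences with Delta x = 1: Delta x^2 = 2x + 1
diffs = [b - a for a, b in zip(terms, terms[1:])]
print(diffs)                            # [3, 5, 7, 9, 11]
```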
Secondly. When the general term of a series is a function of an independent variable $t$ whose successive differences are constant but not equal to unity, it is always possible to replace that independent variable by another, $x$, whose common difference shall be unity. Let $\phi(t)$ be the general term of the series, and let $\Delta t = h$; then assuming $t = hx$ we have $\Delta t = h\Delta x$, whence $\Delta x = 1$.
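The substitution $t = hx$ can be verified numerically; in this Python sketch (an added illustration) the step $h = 0.5$ and the function $\phi(t) = t^3$ are arbitrary choices:

```python
h = 0.5
phi = lambda t: t ** 3                 # general term as a function of t (illustrative)

# The original variable t advances by the constant difference h ...
t_values = [h * x for x in range(5)]   # t = 0.0, 0.5, 1.0, 1.5, 2.0

# ... but under t = h*x the new variable x advances by unity,
# and the reindexed term psi(x) = phi(h*x) runs through the same values.
psi = lambda x: phi(h * x)
print([phi(t) for t in t_values] == [psi(x) for x in range(5)])  # True
```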
Thus it suffices to establish the rules of the Calculus on the assumption that the finite difference of the independent variable is unity. At the same time it will be noted that this assumption reduces to equivalence the symbols $\dfrac{\Delta}{\Delta x}$ and $\Delta$.
We shall therefore in the following chapters develope the theory of the operation denoted by $\Delta$ and defined by the equation

$$\Delta u_x = u_{x+1} - u_x.$$

But we shall, where convenience suggests, consider the more general operation

$$\frac{\Delta u_x}{\Delta x} = \frac{u_{x+h} - u_x}{h},$$

where $\Delta x = h$.
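The more general operation, and its contrast with the differential ratio, can be sketched numerically (a modern illustration; the function $u_x = x^2$ and the values of $h$ are arbitrary): for every finite $h$ the quotient is a true fraction of actual magnitudes, and as $h$ is indefinitely diminished it approaches the limit given by the Differential Calculus.

```python
def difference_quotient(u, x, h):
    """The operation Delta u_x / Delta x with Delta x = h: a true fraction."""
    return (u(x + h) - u(x)) / h

u = lambda x: x ** 2   # illustrative; the derivative is 2x

# At x = 3 the quotient equals 2x + h = 6 + h, approaching the limit 6
# as h is indefinitely diminished.
for h in (1.0, 0.1, 0.001):
    print(h, difference_quotient(u, 3, h))
```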