
FINITE DIFFERENCES.

CHAPTER I.

NATURE OF THE CALCULUS OF FINITE DIFFERENCES.


1. THE Calculus of Finite Differences may be strictly defined as the science which is occupied about the ratios of the simultaneous increments of quantities mutually dependent. The Differential Calculus is occupied about the limits to which such ratios approach as the increments are indefinitely diminished.

In the latter branch of analysis if we represent the independent variable by x, any dependent variable considered as a function of x is represented primarily indeed by φ(x), but, when the rules of differentiation founded on its functional character are established, by a single letter, as u. In the notation of the Calculus of Finite Differences these modes of expression seem to be in some measure blended. The dependent function of x is represented by u_x, the suffix taking the place of the symbol which in the former mode of notation is enclosed in brackets. Thus, if u_x = φ(x), then

u_{x+1} = φ(x + 1),    u_{x+h} = φ(x + h),

and so on.

But this mode of expression rests only on a convention, and as it was adopted for convenience, so when convenience demands it is laid aside.

The step of transition from a function of x to its increment, and still further to the ratio which that increment bears to the increment of x, may be contemplated apart from its subject, and it is often important that it should be so contemplated, as an operation governed by laws. Let then Δ, prefixed to the expression of any function of x, denote the operation of taking the increment of that function corresponding to a given constant increment Δx of the variable x. Then, representing as above the proposed function of x by u_x, we have

Δu_x = u_{x+Δx} − u_x,

and

Δu_x / Δx = (u_{x+Δx} − u_x) / Δx.

Here then we might say that, as d/dx is the fundamental operation of the Differential Calculus, so Δ/Δx is the fundamental operation of the Calculus of Finite Differences.
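The operation Δ just defined is easily exhibited in modern terms. The following sketch is an illustration, not part of the original text; the sample function u(x) = x³ is an arbitrary choice.

```python
# Illustrative sketch, not in the original: the operation Δ applied to a
# sample function u(x) = x**3 with a given constant increment Δx.

def delta(u, x, dx):
    """Return Δu_x = u_{x+Δx} − u_x."""
    return u(x + dx) - u(x)

u = lambda x: x**3
print(delta(u, 2, 1))        # Δu_2 with Δx = 1: 3³ − 2³ = 19
print(delta(u, 2, 2) / 2)    # the true fraction Δu_x/Δx with Δx = 2: 28.0
```

Note that both the numerator Δu_x and the ratio Δu_x/Δx are actual magnitudes here, which is the point of contrast with du/dx drawn in the text that follows.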

But there is a difference between the two cases which ought to be noted. In the Differential Calculus du/dx is not a true fraction, nor have du and dx any distinct meaning as symbols of quantity. The fractional form is adopted to express the limit to which a true fraction approaches. Hence d/dx, and not d, there represents a real operation. But in the Calculus of Finite Differences Δu_x/Δx is a true fraction. Its numerator Δu_x stands for an actual magnitude. Hence Δ might itself be taken as the fundamental operation of this Calculus, always supposing the actual value of Δx to be given; and the Calculus of Finite Differences might, in its symbolical character, be defined either as the science of the laws of the operation Δ, the value of Δx being supposed given, or as the science of the laws of the operation Δ/Δx. In consequence of the fundamental difference above noted between the Differential Calculus and the Calculus of Finite Differences, the term Finite ceases to be necessary as a mark of distinction. The former is a calculus of limits, not of differences.

2. Though Δx admits of any constant value, the value usually given to it is unity. There are two reasons for this.

First. The Calculus of Finite Differences has for its chief subject of application the terms of series. Now the law of a series, however expressed, has for its ultimate object the determination of the values of the successive terms as dependent upon their numerical order and position. Explicitly or implicitly, each term is a function of the integer which expresses its position in the series. And thus, to revert to language familiar in the Differential Calculus, the independent variable admits only of integral values whose common difference is unity. For instance, in the series of terms 1², 2², 3², 4², … the general or xth term is x². It is an explicit function of x, but the values of x are the series of natural numbers, and Δx = 1.
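The behaviour of this series under the unit increment can be shown concretely. The following is a modern illustration, not from the original:

```python
# Hedged illustration, not in the original: first differences of the series
# 1², 2², 3², 4², … taken with the unit increment Δx = 1.
terms = [x**2 for x in range(1, 7)]                 # [1, 4, 9, 16, 25, 36]
diffs = [b - a for a, b in zip(terms, terms[1:])]
print(diffs)   # [3, 5, 7, 9, 11] — i.e. Δ(x²) = 2x + 1 at x = 1, 2, 3, …
```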

Secondly. When the general term of a series is a function of an independent variable t whose successive differences are constant but not equal to unity, it is always possible to replace that independent variable by another, x, whose common difference shall be unity. Let φ(t) be the general term of the series, and let Δt = h; then assuming t = hx we have Δt = hΔx, whence Δx = 1.
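The substitution t = hx can be checked with a small example. The particular choices φ(t) = t² and h = 0.5 below are illustrative assumptions, not taken from the text:

```python
# Sketch under stated assumptions (φ(t) = t² and h = 0.5 are illustrative
# choices, not in the text): the substitution t = h·x turns a variable t
# with constant difference Δt = h into a variable x with Δx = 1.
h = 0.5
phi = lambda t: t**2
ts = [h * x for x in range(5)]        # t = 0.0, 0.5, 1.0, 1.5, 2.0; Δt = h
psi = lambda x: phi(h * x)            # the same terms, now as a function of x
print([phi(t) for t in ts] == [psi(x) for x in range(5)])  # True
```

The series of values is unchanged; only the independent variable, and with it the common difference, has been renamed.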

Thus it suffices to establish the rules of the Calculus on the assumption that the finite difference of the independent variable is unity. At the same time it will be noted that this assumption reduces to equivalence the symbols Δ/Δx and Δ.

We shall therefore in the following chapters develope the theory of the operation denoted by Δ and defined by the equation

Δu_x = u_{x+1} − u_x.
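The equivalence of the two symbols under the unit increment may be verified directly. A minimal sketch, with an arbitrary linear u_x chosen for illustration:

```python
# Sketch, not in the original: with Δx = 1 the fraction Δu_x/Δx and the
# operation Δ coincide, so Δu_x = u_{x+1} − u_x defines the calculus.
def delta(u, x):
    return u(x + 1) - u(x)

u = lambda x: 5 * x + 3     # an illustrative linear u_x
print(delta(u, 7))          # 5, the constant first difference of 5x + 3
```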

But we shall, where convenience suggests, consider the more general operation

Δu_x / Δx = (u_{x+Δx} − u_x) / Δx.