Infinitesimal calculus (also known as analysis): calculus with infinitely small numbers.

The calculus was developed in the 17th century independently by Gottfried Leibniz and Isaac Newton to calculate the motions of bodies. Although they used different approaches, Leibniz and Newton arrived at the same results. Leibniz's approach was to regard a mathematical curve (for example, the graphical representation of an object's speed) as a set of infinitely many infinitely small points. Newton, by contrast, saw the same curve as composed of its tangents, that is, of the slopes at each of its points.

Leibniz and Newton — Rivals of Infinitesimal Calculus

The two later engaged in a dispute over whose invention of calculus had come first. Since Leibniz was quicker to publish his results, and his writings were somewhat clearer and easier to understand than Newton's, it was his chosen symbols for the infinitesimal calculus that were eventually adopted by mathematicians. Even the term "infinitesimal calculus" was adopted from Leibniz; Newton's term for his version was "the method of fluxions".

The infinitesimal calculus constituted a revolution in mathematics and soon produced further revolutions in the natural sciences and in technology. Whereas previously mathematicians had had to devise a distinct method for each individual problem, they now had a powerful all-purpose problem-solving tool. Mathematical and technical problems that had once seemed like hard nuts to crack could now be solved in no time at all on a piece of paper.

The basic idea was to investigate the behavior of mathematical functions under infinitely small changes. Assume that you have a function f(x) = x². From it you want to construct a new function f'(x) that gives the slope of f(x) for each value x. The mean slope over an interval of length d, that is, from x to x+d, is determined by dividing the corresponding difference on the y axis by that on the x axis:

(f(x+d) - f(x)) / d = ((x+d)² - x²) / d = (2xd + d²) / d = 2x + d

So far, there is nothing novel here. The slope depends not only on x but also on the length of the interval d; this does not yet give us a function f'(x) that depends on x alone. The revolutionary idea was to assume that d is an infinitesimal, that is, an infinitely small number. Under this assumption the result is

f'(x) = 2x + d = 2x

because the infinitely small d can be dispensed with in favor of the finite value 2x. Many of us are still more or less familiar with this derivation of the derivative of f(x) = x² from high school calculus classes.
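The derivation above can be checked numerically: as the interval d shrinks, the mean slope of f(x) = x² approaches 2x. A minimal sketch (the function names `f` and `mean_slope` are illustrative, not from the original text):

```python
def f(x):
    return x ** 2

def mean_slope(x, d):
    # Average slope of f over the interval [x, x + d]:
    # (f(x+d) - f(x)) / d, which simplifies to 2x + d for f(x) = x^2.
    return (f(x + d) - f(x)) / d

x = 3.0  # the exact slope there is 2x = 6
for d in (1.0, 0.1, 0.001, 1e-6):
    print(d, mean_slope(x, d))
```

For x = 3 the printed values approach 6 as d shrinks, but for any finite d the result is still 2x + d, never exactly 2x.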

Although the argument for dispensing with d in the above equation is intuitively plausible and has delivered correct results for hundreds of years, it is not mathematically rigorous: d is at first treated as nonzero (after all, we divide by d), but later as zero. This objection can drive even your school math teacher to despair if he or she has been sloppy in explaining infinitesimal calculus. Indeed, the use of numbers that are infinitely small yet not equal to zero was derided as absurd soon after its invention; the Irish bishop and philosopher George Berkeley mocked it in his pamphlet The Analyst; or, a Discourse Addressed to an Infidel Mathematician (1734).

It was not until the 19th century that the infinitesimal calculus was given a mathematically rigorous foundation. The mathematicians Cauchy, Weierstrass, and Dedekind introduced the concept of limits, which made the use of infinitesimals redundant. In limit analysis the ►actual infinity of these numbers is replaced by ►potential infinity:

f'(x) = lim(d→0) (f(x+d) - f(x)) / d = lim(d→0) (2x + d) = 2x

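The limit formulation can be carried out symbolically, for instance with the SymPy library (assumed available here as an illustration):

```python
import sympy

x, d = sympy.symbols('x d')

# Difference quotient for f(x) = x^2; simplifies to 2x + d.
quotient = ((x + d) ** 2 - x ** 2) / d

# Potential infinity: d approaches 0 but is never an actual infinitesimal.
derivative = sympy.limit(quotient, d, 0)
print(derivative)  # 2*x
```

No infinitely small number appears anywhere in the computation; the limit process replaces it entirely.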
In 1960 Abraham Robinson's ►non-standard analysis extended the set of real numbers by an additional number system that also includes formally well-defined infinitesimals. In this system the earlier derivation of f'(x) counts as a mathematically correct method even without recourse to limits.
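The computational flavor of calculating with infinitesimals can be sketched with so-called dual numbers a + b·ε, where ε² = 0. This is not Robinson's hyperreal construction (all names below are illustrative), but it makes the textbook derivation exact, since no term of order d² survives:

```python
class Dual:
    """A number a + b*eps with the rule eps**2 = 0."""

    def __init__(self, real, eps=0.0):
        self.real = real  # finite part
        self.eps = eps    # infinitesimal part

    def __add__(self, other):
        return Dual(self.real + other.real, self.eps + other.eps)

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)

def derivative_at(f, x):
    # Evaluate f at x + eps; the coefficient of eps is the slope at x.
    return f(Dual(x, 1.0)).eps

print(derivative_at(lambda v: v * v, 3.0))  # 6.0
```

For f(x) = x² this yields (x + ε)² = x² + 2x·ε, so the infinitesimal part is exactly 2x, with no term that has to be "dispensed with".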

© Johann Christian Lotter