- If you know the instantaneous velocity of a moving object, then you
can make a pretty good estimate of how far it will travel in a short time
interval (although how good your estimate is will depend on how much the
object happens to be accelerating or decelerating at the time).

This is the key idea behind the method of **Linear
Approximation**: if you estimate the position of the object by neglecting
changes in velocity, then the result is a linear approximation to the true
position function, and its graph is just the tangent line to the true graph.
For this reason it is often also called the **Tangent Line Approximation**.
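In symbols, the tangent line approximation to f near x = a is L(x) = f(a) + f′(a)(x − a). Here is a minimal sketch in Python (the function names are our own, chosen for illustration):

```python
import math

def linear_approximation(f, df, a, x):
    """Approximate f(x) by the tangent line to f at a:
    L(x) = f(a) + f'(a) * (x - a)."""
    return f(a) + df(a) * (x - a)

# Estimate sqrt(4.1) using the tangent line to sqrt at a = 4,
# where d/dx sqrt(x) = 1 / (2 * sqrt(x)).
approx = linear_approximation(math.sqrt, lambda t: 1 / (2 * math.sqrt(t)), 4.0, 4.1)
print(approx)           # about 2.025
print(math.sqrt(4.1))   # about 2.0248, so the estimate is quite close
```

The estimate is good because sqrt is nearly straight near x = 4; as the notes say above, the quality of the estimate depends on how curved (how much the slope is changing) the function is.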

An animated diagram illustrating this approximation is part of a set prepared by Doug Arnold at Penn State; it is available as either a Java applet or an animated GIF.

The Differential and Tangent Line Approximations can be used to estimate experimental errors and to get approximate results for calculations that would be hard to do exactly.
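As a sketch of the error-estimation idea (this sphere example and its numbers are our own, not from the original notes): if a measured quantity r has uncertainty dr, the differential dV = V′(r) dr estimates the resulting uncertainty in a computed quantity V(r).

```python
import math

# Volume of a sphere: V = (4/3) * pi * r^3, so V'(r) = 4 * pi * r^2.
# Suppose r is measured as 10 cm, accurate to within 0.1 cm.
r, dr = 10.0, 0.1
dV = 4 * math.pi * r**2 * dr                      # differential estimate of the volume error
exact_change = (4 / 3) * math.pi * ((r + dr)**3 - r**3)  # actual change in V

print(dV)            # about 125.66 cubic cm
print(exact_change)  # about 126.92 cubic cm -- the differential is within about 1%
```
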

We can also use the tangent line to approximate the solution of an equation
(e.g., if we start fairly close to an x-intercept of the graph of a function,
then the tangent line may well cross the axis even closer, and the intercept
of the tangent line can be found by solving just a linear equation). Repeating
this process over and over is called **Newton's Method**. It will often
give a very rapidly converging sequence of approximations to an exact solution.
These notes at UBC and this applet by Dan Sloughter at FurmanU both deal
with Newton's Method.

(The process used here of taking the result of one approximation step
as the starting point for another similar step is often called **recursion**,
and the idea of recursion is the basis for many other approximation methods
in addition to Newton's method.)
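The step described above can be sketched in a few lines: the x-intercept of the tangent line at x is x − f(x)/f′(x), and we feed each result back in as the next starting point. (The function names and tolerances here are our own choices for illustration.)

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Repeatedly replace x by the x-intercept of the tangent line at x,
    which is x - f(x)/f'(x), until the step size is below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Solve x^2 - 2 = 0 starting from x = 1; the iterates converge
# very rapidly to sqrt(2) = 1.41421356...
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root)
```

Note how few iterations are needed: near a simple root, each step roughly doubles the number of correct digits.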

The error in the Tangent Line Approximation is due to the curvature
of the graph. We can try to match the curvature at a point as well as the
slope and height by using a quadratic approximation.

(This should work because there are three conditions to match and a
quadratic has three coefficients for us to choose.) And for even better
approximations we might match higher derivatives by using higher degree
polynomials. These are called **Taylor Polynomials**.

You can explore these ideas further in this
online lab from UBC. This applet illustrates the special case of f(x)=sin(x)
near x=0.
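For the sin(x) case mentioned above, the Taylor polynomials about x = 0 are x, x − x³/3!, x − x³/3! + x⁵/5!, and so on. A small sketch (our own code, for illustration) shows each extra term improving the match near 0:

```python
import math

def taylor_sin(x, degree):
    """Taylor polynomial of sin about 0, up to the given (odd) degree:
    x - x^3/3! + x^5/5! - ..."""
    total, term, k = 0.0, x, 1
    while k <= degree:
        total += term
        # Each successive term is the previous one times -x^2 / ((k+1)(k+2)).
        term *= -x * x / ((k + 1) * (k + 2))
        k += 2
    return total

x = 0.5
for n in (1, 3, 5):
    print(n, taylor_sin(x, n), math.sin(x))
```

Running this shows the degree-1 (tangent line), degree-3, and degree-5 polynomials getting successively closer to sin(0.5).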

For each degree of Taylor Polynomial, the error can be bounded in terms of the next higher derivative. A proof of this is given in 'Eric's Treasure Trove of Math' (since published as the CRC Encyclopedia of Mathematics; reviewed by Sahar Khalili).
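The bound usually stated (the Lagrange form of the remainder, which may or may not be the exact form proved in the cited source) is: if $P_n$ is the degree-$n$ Taylor polynomial of $f$ about $a$, and $|f^{(n+1)}(t)| \le M$ for all $t$ between $a$ and $x$, then

```latex
\left| f(x) - P_n(x) \right| \;\le\; \frac{M \, |x - a|^{n+1}}{(n+1)!}
```

So, for example, the error in the tangent line approximation ($n = 1$) is at most $\tfrac{1}{2} M |x-a|^2$, where $M$ bounds the second derivative; this is the precise version of the remark above that the error is due to the curvature of the graph.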

If we keep adding terms to get Taylor Polynomials of higher and higher
degree, then we get an infinite series
(called a **Taylor Series**). This
UBC lab deals with some of the convergence and related issues regarding
such series.
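A quick numerical sketch of convergence (our own example): the partial sums of the Taylor series for $e^x$ at $x = 1$, namely $1 + 1 + \tfrac{1}{2!} + \tfrac{1}{3!} + \cdots$, approach $e$ very quickly because the factorials in the denominators grow so fast.

```python
import math

# Partial sums of 1 + 1 + 1/2! + 1/3! + ... , which converge to e.
partial, term = 0.0, 1.0
sums = []
for n in range(12):
    partial += term      # add 1/n!
    sums.append(partial)
    term /= (n + 1)      # next term is 1/(n+1)!

print(sums[-1])  # very close to e after only 12 terms
print(math.e)
```

Not every Taylor series behaves this well; whether and where the series converges to the function is exactly the kind of issue the lab above explores.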

You might also check our 'raw list' (of links provided without comment) to see if there are any more examples there that we haven't yet included here.

If you have come across any good web-based illustrations of these and
related concepts, please do let us know and we will add them here.
