METRIC Glossary

absolute value See modulus
argument

The complex number z=x+i\,y can be represented in the Argand diagram by the vector $$\left[ \begin{array}{c}x\\y \end{array}\right],$$ and the argument of z, written \arg z, is simply the direction of this vector. This direction is expressed as the angle from the horizontal axis to the vector representing z, with anticlockwise being the positive direction.

In order to ensure that every complex number has a unique argument, the argument of a complex number always lies within a certain interval of length 2\pi. Two conventions exist: some mathematicians insist that $$0\le\arg z<2\pi,$$ and others that $$-\pi<\arg z\le \pi.$$ Note that $$\tan \,(\arg z)=\frac{y}{x},$$ but that it is not always the case that $$\arg z = \tan^{-1} \frac{y}{x}.$$ In the figure, for example, which shows the complex number $$z=-1+\sqrt{3}\,i,$$ the argument is 2\pi/3 radians, whose tangent is -\sqrt{3}; however, $$\tan^{-1}(-\sqrt{3})=-\pi/3\ne2\pi/3.$$
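The quadrant issue described above is exactly what a two-argument arctangent resolves; a small Python check using the standard `cmath` and `math` modules:

```python
import cmath
import math

# The complex number z = -1 + sqrt(3) i from the example above.
z = complex(-1, math.sqrt(3))

# cmath.phase uses the convention -pi < arg z <= pi and handles
# all four quadrants correctly.
arg_z = cmath.phase(z)                 # 2*pi/3

# A naive arctangent of y/x lands in the wrong quadrant here.
naive = math.atan(z.imag / z.real)     # -pi/3
```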

Figure 1: The argument of a complex number: \arg(-1+\sqrt{3}\,i)=2\pi/3

arithmetic sequence, series

An arithmetic sequence (or series) is a sequence (or series) in which each term is obtained from the last by addition of a constant quantity (known as the common difference). For example, $$1,3,5,7,9,\dots$$ is an arithmetic sequence and $$1+3+5+7+9+\dots$$ is the corresponding series: the common difference is 2.


base In the expression $$3^5,$$ the number 3 is known as the base. In the expression $$\log_2 128,$$ the base is 2.
best fit curve

Consider a data set consisting of coordinate pairs, and suppose that it's known, or at least conjectured, that this data reflects a certain kind of relationship between the variables: a linear one, perhaps, or a quadratic one (this "relationship type'' is called the model).

Suppose we wish to find the relationship between the variables, as best we can. However, suppose that measurement error, or some other source of "noise", makes each data point's location slightly imprecise. Then there may be no curve conforming to the model that exactly fits the data, in which case what we want is a curve that accords with our model and that (in some sense) fits the data "best''.

What we usually mean by "best'' is the least squares best fit curve, in which the curve is chosen, from among all those allowed by our model, to minimise the sum of the squares of the vertical distances between the data and the curve. The figure shows some data together with its least squares best fit quadratic; the vertical distances (which are known as residuals) are also shown.
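For a linear model the least squares fit has a simple closed form; a minimal sketch in Python, with an invented data set (the coordinate values are purely illustrative):

```python
# Least squares best fit line y = m*x + c, via the standard
# closed-form formulas for the slope and intercept.
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]   # noisy, roughly linear data (invented)

n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

m = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
c = (sy - m * sx) / n                           # intercept

# The residuals are the vertical distances between data and fitted line;
# for a least squares line they sum to zero.
residuals = [y - (m * x + c) for x, y in zip(xs, ys)]
```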

Figure 1: A data set (red) together with a least squares best fit quadratic curve (blue),
showing vertical distances or residuals (pink)

Binomial theorem For positive integer n, $$(a + b)^n = a^n+ na^{n-1} b+ \frac{n (n-1)}{2!} a^{n-2}b^2 + \frac{n (n-1) (n-2)}{3!} a^{n-3}b^3 + \dots + b^n.$$ For negative or fractional r, and for |x|<1, $$(1+x)^r = 1+rx+\frac{r (r-1)}{2!}x^2 +\frac{r (r-1) (r-2)}{3!}x^3+ \dots$$
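Both cases of the theorem can be computed with one recurrence, since each coefficient is the previous one multiplied by (r-k+1)/k; a sketch in Python (the number of terms is an arbitrary choice):

```python
import math

def binomial_series(r, x, terms=40):
    """Partial sum of (1+x)^r = 1 + r x + r(r-1)/2! x^2 + ...
    For negative or fractional r this converges when |x| < 1."""
    total, term = 1.0, 1.0
    for k in range(1, terms):
        term *= (r - (k - 1)) / k * x   # next term from the previous one
        total += term
    return total

# Fractional index: the series approaches sqrt(1.2) = (1 + 0.2)^0.5.
frac_case = binomial_series(0.5, 0.2)

# Positive integer index: the series terminates and gives (1.1)^5 exactly.
int_case = binomial_series(5, 0.1)
```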
bounded (region)

A bounded region in a space is one all of whose points lie within a set finite distance from some point of the space. In two dimensions, this means we can draw a circle round it (in three dimensions, a sphere, and so on).

Figure 1: Bounded and unbounded sets in two dimensions

bounded (function)

A function is bounded on its domain if all its values lie within a set finite distance of some particular value. In graphical terms, this means we can enclose the function's graph within parallel "tramlines''.

Figure 1: Bounded and unbounded functions on a finite domain

Cartesian equations When a structure in some n-dimensional space (a line, perhaps, or a surface) can be specified by means of an explicit equation linking the coordinate variables, this relationship is called the Cartesian equation. Some structures (such as lines in three dimensions) can only be specified by means of more than one Cartesian equation; in the case of lines in 3D, the relationship is usually of the form $$\frac{x-a_1}{d_1}=\frac{y-a_2}{d_2}=\frac{z-a_3}{d_3}.$$ Cartesian equations are not the only way of specifying lines or surfaces; there are also, for example, parametric, polar or vector equations.
change of variable Certain classes of problem, such as integration or the solution of differential equations, may be difficult as stated but easier following a substitution, in which the variable in terms of which the problem is couched is replaced by one that makes the mathematics easier.
coefficient of friction

When two flat surfaces are in contact, the force between them consists of a normal reaction, N, at right angles to the surfaces, and a frictional reaction, F, parallel to them (see Figure 1). The frictional reaction exactly balances any force that is applied parallel to the surfaces; or rather, this happens up to a certain maximum value of the applied force, above which the surfaces slip. This maximum value is proportional to N, and also depends on how rough the surfaces are: the rougher the surfaces, the stronger the force that can be applied without them slipping.

We can say that F\le \mu N, where \mu, which is a measure of how rough the surfaces are, is called the coefficient of friction between them.

Figure 1: Two surfaces in frictional contact

combinations The number of ways of selecting r distinct objects from a set of n, where all arrangements are treated as equivalent, is $$ ^nC_r = \frac{n!}{r!(n-r)!}.$$
common difference See arithmetic sequence, series.
common ratio See geometric sequence, series.
complex conjugate

The complex conjugate of the complex number $$z=x+i\,y$$ is defined to be $$\bar{z} = x-i\,y.$$ Note that z\,\bar{z} = x^2+y^2 is real, and equal to the square of z's modulus.
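The identity z\,\bar{z}=|z|^2 is easy to confirm with Python's built-in complex type:

```python
# Check that z * conjugate(z) is real and equals |z|^2, using z = 3 + 4i.
z = complex(3, 4)
z_bar = z.conjugate()     # 3 - 4i

product = z * z_bar       # (3+4i)(3-4i) = 9 + 16 = 25, purely real
modulus = abs(z)          # |z| = 5
```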

The non-real roots of real polynomials always occur as pairs of conjugate complex numbers.

complementary function The general solution of a linear differential equation $$a_0(x)\,y+a_1(x)\,\frac{dy}{dx}+\dots+a_n(x)\,\frac{d^ny}{dx^n}=g(x)$$ consists, in general, of two components: one that just reflects the intrinsic properties of the system and one that also reflects the way the system is driven. The former is known as the complementary function, and is equal to the general solution of the homogeneous equation $$a_0(x)\,y+a_1(x)\,\frac{dy}{dx}+\dots+a_n(x)\,\frac{d^ny}{dx^n}=0.$$ For example, the differential equation $$y''+5y'+6y=10\,\sin x$$ has general solution $$y=A\,e^{-2x}+B\,e^{-3x}-\cos x+\sin x.$$ The complementary function is $$y=A\,e^{-2x}+B\,e^{-3x},$$ which is the general solution of the homogeneous equation $$y''+5y'+6y=0.$$
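The worked example can be checked numerically: the stated general solution should satisfy the driven equation for any choice of the constants A and B. A quick sketch (the sample values of A, B and x are arbitrary):

```python
import math

def residual_at(x, A=1.3, B=-0.7):
    # General solution y = A e^{-2x} + B e^{-3x} - cos x + sin x
    y   = A * math.exp(-2 * x) + B * math.exp(-3 * x) - math.cos(x) + math.sin(x)
    yp  = -2 * A * math.exp(-2 * x) - 3 * B * math.exp(-3 * x) + math.sin(x) + math.cos(x)
    ypp = 4 * A * math.exp(-2 * x) + 9 * B * math.exp(-3 * x) + math.cos(x) - math.sin(x)
    # Substitute into y'' + 5y' + 6y = 10 sin x; this should vanish.
    return ypp + 5 * yp + 6 * y - 10 * math.sin(x)

worst = max(abs(residual_at(x / 10)) for x in range(20))
```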
direction cosines The cosine of the angle between the vector $${\bf a} = a_1\,{\bf i} +a_2\,{\bf j} +a_3\,{\bf k},$$ and the x-axis is equal to $$\frac{{\bf a}\cdot{\bf i}}{|{{\bf a}}| |{\bf i}|},$$ whose value is $$\frac{a_1}{\sqrt{{a_1}^2 +{a_2}^2+{a_3}^2}}.$$ Similarly, the cosine of the angle between {\bf a} and the y-axis is $$\frac{a_2}{\sqrt{{a_1}^2 +{a_2}^2+{a_3}^2}},$$ and that between {\bf a} and the z-axis is $$\frac{a_3}{\sqrt{{a_1}^2 +{a_2}^2+{a_3}^2}}.$$ These are known as {\bf a} 's direction cosines. Note that they correspond to the components of the unit vector parallel to {\bf a} .
derivative

Given a function f(x), its derivative, f'(x), gives the gradient of the graph of f at the point (x, f(x)): the derivative is the "gradient function'', you might say.

For any small value of h, it is approximately given by $$\frac{f(x+h)-f(x)}{h},$$ with exact equality in the limit as h tends to zero. This is illustrated in Figure 1.
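The limiting behaviour of the difference quotient is easy to see numerically; a small sketch taking f = sin, whose derivative is cos (the sample point and step sizes are arbitrary):

```python
import math

def difference_quotient(f, x, h):
    # The approximation (f(x+h) - f(x)) / h to the derivative f'(x).
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)                                  # derivative of sin
coarse = difference_quotient(math.sin, x, 1e-3)
fine = difference_quotient(math.sin, x, 1e-6)        # smaller h, closer to cos x
```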

Figure 1: The idea of a derivative

differential A differential is an object like dx or dy, or some combination of them such as y\,dx-x\,dy. It's hard (though not impossible) to make precise sense of what such objects might really mean, but they're nonetheless often used in calculations, as purely formal expressions.
differential equation A differential equation is a relationship between a quantity and its derivative or derivatives, such as $$\frac{d\theta}{dt} = -k\,(\theta-\theta_E)$$ or $$\frac{d^2x}{dt^2} = -\frac{G M}{x^2}.$$ Our theories and physical laws are often in the form of differential equations; usually, what we need to work with is, instead, an explicit relationship linking the variables, without any derivatives in. Getting such an explicit relationship out of a differential equation is called solving it.
domain Strictly, when defining a function, we must specify a set on which it is to operate: the set of possible "inputs'', if you like. This set is called the function's domain.
even function

An even function is a function f such that $$f(x) = f(-x)$$ for all x for which f is defined. Even functions have graphs with reflection symmetry about the y-axis: see Figure 1 below.

Figure 1: Graph of an even function

exact

The differential equation $$(x^2+2\,x\,y)\,\frac{dy}{dx}+2\,x\,y+y^2=0$$ looks hard to solve, but can be written as $$\frac{d}{dx}\,(x^2\,y+x\,y^2)=0,$$ meaning that its solution is simply $$x^2\,y+x\,y^2=k,\quad\mbox{constant}.$$ Whenever a differential equation can be written in this way, in the form $$\frac{d}{dx}(\mbox{expression}) = 0,$$ we say it is exact, and the left-hand side is called an exact derivative.

Exact differential equations are often written in terms of differentials instead of derivatives, as in $$(2\,x\,y+y^2)\,dx+(x^2+2\,x\,y)\,dy=0.$$

expansion 1 An expression that consists wholly or partly of products of bracketed subexpressions can be {\em expanded} by multiplying out these brackets, e.g. $$(2x+1)(x-1)(x+3)=2x^3+5x^2-4x-3.$$
expansion 2 Many functions can be expressed in the form of infinite series: this is called expansion. For example, $$\tan^{-1}x=x-\frac{x^3}{3}+\frac{x^5}{5}-\dots$$
exponential form If the complex number z has modulus r and argument \theta, then z is equal to $$r\,e^{i\,\theta},$$ and we call this the exponential form of z.
factorial If n is a positive integer, the factorial of n, written n!, is equal to $$n!=n(n-1)(n-2)\dots3\times2\times1.$$ By convention, 0! is 1.
factorisation Certain polynomial expressions can be written as the product of bracketed subexpressions; a process known as factorisation. For example, $$2x^3+5x^2-4x-3=(2x+1)(x-1)(x+3).$$
first order See order
fourth order See order
frictional reaction See limiting friction.
geometric sequence, series

A geometric sequence (or series) is a sequence (or series) in which each term is obtained from the last by multiplication by a constant quantity (known as the common ratio). For example, $$1,-2,4,-8,16,\dots$$ is a geometric sequence and $$1-2+4-8+16+\dots$$ is the corresponding series: the common ratio is -2.

gradient

The gradient of a straight line is how far it rises for every 1 unit travelled horizontally: the gradient of the straight line through the point (x_1, y_1) and (x_2, y_2) is $$\frac{y_2-y_1}{x_2-x_1}.$$ This is shown in Figure 1.

The gradient of a curved line changes as you go along it. At any given point, it is equal to the gradient of the straight line that just touches the curve at that point, which is known as the tangent. This is shown in Figure 2.

Figure 1: Gradient of a straight line

Figure 2: Gradient of a curve at a point

Heaviside step function

The Heaviside step function, H(x), is given by $$H(x)= \left\{ \begin{array}{cc} 0,&x<0,\\ 1,&x\ge 0. \end{array} \right. $$

Figure 1: The Heaviside step function

homogeneous 1 A first-order differential equation is called homogeneous if it can be expressed in the form $$\frac{dy}{dx} = f\,\left(\frac{y}{x}\right).$$ For example, the equation $$(x^2+xy)\,\frac{dy}{dx} = y^2$$ may be written in the form $$\frac{dy}{dx} = \frac{(y/x)^2}{1+(y/x)},$$ and is therefore homogeneous.
homogeneous 2 A linear differential equation of the form $$a_0(x)\,y+a_1(x)\,\frac{dy}{dx}+\dots+a_n(x)\,\frac{d^ny}{dx^n}=f(x)$$ is homogeneous if f(x)=0, and inhomogeneous otherwise.
hyperbolic functions

The hyperbolic cosine, \cosh x, is defined as follows: $$\cosh x = \frac{e^x+e^{-x}}{2}.$$ The hyperbolic sine, \sinh x, is defined as follows: $$\sinh x = \frac{e^x-e^{-x}}{2}.$$ The hyperbolic tangent, \tanh x, is defined as follows: $$\tanh x = \frac{\sinh x}{\cosh x} =\frac{e^x-e^{-x}}{e^x+e^{-x}} .$$
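From these definitions the fundamental identity \cosh^2 x-\sinh^2 x=1 follows directly; a quick numerical confirmation with Python's standard math module (sample points chosen arbitrarily):

```python
import math

# cosh^2 x - sinh^2 x = 1 is the hyperbolic analogue of
# cos^2 x + sin^2 x = 1; check it at a few sample points.
checks = [math.cosh(x) ** 2 - math.sinh(x) ** 2 for x in (-2.0, 0.0, 0.5, 3.0)]

# tanh is, by definition, the quotient sinh/cosh.
tanh_check = math.tanh(0.7) - math.sinh(0.7) / math.cosh(0.7)
```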

Figure 1: The hyperbolic cosine (red), sine (blue) and tangent (green)

Figure 1: The hyperbolic cosine (red), sine (blue) and tangent (green)

hyperbolic sine See hyperbolic functions.
hyperbolic tangent See hyperbolic functions.
implicit differentiation When the relationship between y and x is represented not as an explicit equation of the form $$y = f(x),$$ but as an implicit one of the form $$f(x,y) = 0,$$ the chain rule can be used to calculate an expression for the derivative, dy/dx, in terms of both x and y. This process is called implicit differentiation.
improper integral

An integral is improper if, for some reason, the region in the plane that it corresponds to is unbounded. This can happen in two ways: either the range of integration may be infinite, as in $$\int_1^{\infty}\frac{1}{x^2}\,dx,$$ or the function may have a singularity within or on the edge of the range of integration, as in $$\int_1^5\frac{1}{\sqrt{x-1}}\,dx.$$ Note that an unbounded region need not necessarily have infinite area, and if not, the integral will have a finite value (as both of these do).
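Both examples can be checked numerically. The midpoint rule is convenient for the second integral because it never samples the singular endpoint x=1. A rough sketch (the truncation point and interval counts are arbitrary choices):

```python
def midpoint_rule(f, a, b, n):
    # Midpoint rule: sample each of n subintervals at its centre,
    # so the endpoints a and b are never evaluated.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Integral of 1/x^2 from 1 to infinity is 1; truncate at a large upper limit.
i1 = midpoint_rule(lambda x: 1 / x**2, 1, 1000, 100000)

# Integral of 1/sqrt(x-1) from 1 to 5 is 4, despite the singularity at x = 1.
i2 = midpoint_rule(lambda x: (x - 1) ** -0.5, 1, 5, 200000)
```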

Figure 1: Improper integral: unbounded function integrated over a finite domain

Figure 2: Improper integral: bounded function integrated over an infinite domain

index See indices
indices In an expression representing a power, such as $$3^5,$$ the number in the superscript position, in this case 5, is known as the index; plural indices.
inequality An inequality is a mathematical statement to the effect that one quantity is less than, or greater than, or at most, or at least, another: for example, $$x<7$$ or $$(x-1)^2\ge5.$$
inhomogeneous See homogeneous.
integrating factor The linear differential equation $$\frac{dy}{dx}+\frac{2}{x}\,y=\frac{1}{x}$$ is hard to solve as it stands, but if we multiply throughout by x^2 we obtain $$x^2\,\frac{dy}{dx}+2x\,y=x.$$ This equation is exact: it may be rewritten as $$\frac{d}{dx}\,(x^2\,y)=x,$$ and the solution is now easy to obtain. The term that we multiplied by, x^2, is known as the integrating factor. In general, the first-order linear differential equation $$\frac{dy}{dx}+p(x)\,y=q(x)$$ may be solved using an integrating factor I(x), where $$I(x)=e^{\int p(x)\,dx}.$$
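The solution of the worked example follows from $$x^2\,y=\frac{x^2}{2}+C,$$ i.e. y=1/2+C/x^2, and it is easy to verify that this satisfies the original equation for any constant C (the value of C and the sample points below are arbitrary):

```python
# Verify that y = 1/2 + C/x^2, obtained via the integrating factor x^2,
# satisfies dy/dx + (2/x) y = 1/x.
C = 3.0

def y(x):
    return 0.5 + C / x**2

def dydx(x):
    return -2 * C / x**3

worst = max(abs(dydx(x) + (2 / x) * y(x) - 1 / x) for x in (0.5, 1.0, 2.0, 7.5))
```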
integration by parts A technique for calculating the integral of a product by expressing it in terms of the integral of a related product, using the formula $$\int u\,\frac{dv}{dx}\,dx = u\,v-\int v\,\frac{du}{dx}\,dx.$$
intercept

The intercept of a straight line is the point where its graph crosses the y-axis. This is shown in Figure 1.

Figure 1: Intercept of a straight line

inverse functions

Any function f that is one-to-one (that is, for which no two inputs give the same output) has an inverse f^{-1}, which represents f ``in reverse''. That is, if $$y=f(x),$$ then $$x=f^{-1}(y),$$ for all x on which f is defined. Functions that aren't one-to-one, such as sine or cosine, can sometimes be given an inverse by considering the function only over a restricted domain.

Figure 1: A function (blue) and its inverse (red)

Figure 1: A function (blue) and its inverse (red)

invertible See inverse functions.
Lagrangian interpolation

For any collection of n+1 data points $$(x_0,y_0),\,(x_1,y_1),\,(x_2,y_2),\dots,\,(x_n,y_n)$$ (as long as none share the same x-value) there is exactly one polynomial of degree n whose graph passes through all the points. This is given by the formula $$f(x) = \sum_{i=0}^n\,y_i\,\frac{(x-x_0)\dots(x-x_{i-1})(x-x_{i+1})\dots(x-x_n)}{(x_i-x_0)\dots(x_i-x_{i-1})(x_i-x_{i+1})\dots(x_i-x_n)}.$$ Finding this polynomial is called Lagrangian interpolation.
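The formula translates directly into code: each data point contributes one term, built as a product of ratios. A sketch in Python (the data set is invented for illustration):

```python
def lagrange(points, x):
    """Evaluate the Lagrange interpolating polynomial through `points` at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                # One factor (x - xj)/(xi - xj) per other data point.
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Four points with distinct x-values: interpolated by a unique cubic.
data = [(0, 1.0), (1, 3.0), (2, 2.0), (4, 5.0)]
```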

Figure 1: Lagrangian interpolation of a set of 8 data points by a polynomial
of degree 7

Laplace transform The Laplace transform of f(t) is given by $$F(s) = \int_0^{\infty}f(t)\, e^{-s\,t}\,dt.$$ The Laplace transform is unique---no two functions have the same Laplace transform---and the operations of calculus become operations of algebra when transformed. These properties allow Laplace transforms to be used to solve problems in calculus, such as solving differential equations.
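The defining integral can be approximated numerically, since for s>0 the factor e^{-st} makes the tail negligible beyond a modest truncation point. A rough sketch checking the standard result $$\mathcal{L}[e^{t}](s)=\frac{1}{s-1}$$ at s=3 (the truncation point and interval count are arbitrary choices):

```python
import math

def laplace_numeric(f, s, upper=40.0, n=200000):
    # Truncated Laplace transform via the midpoint rule; the factor
    # e^{-s t} makes the contribution beyond `upper` negligible.
    h = upper / n
    return h * sum(f((i + 0.5) * h) * math.exp(-s * (i + 0.5) * h)
                   for i in range(n))

# L[e^t](s) = 1/(s-1); at s = 3 the exact value is 1/2.
value = laplace_numeric(math.exp, 3.0)
```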
limit

Consider a sequence $$a_1,\,a_2,\,a_3,\,\dots$$ Suppose that there's some number l that this sequence approaches as we take successive terms. (By this we mean something very precise, namely that we can specify a pair of values as close as we like to l, and be sure that eventually, all the terms of the sequence will lie between these values.)

We say that l is the limit of the sequence (a_n) as n tends to infinity.

We can use the same idea for functions of x: the limit of f(x) as x tends to infinity is l provided we can specify a pair of values as close as we like to l, and be sure that eventually, for large enough x, all values of f(x) lie within these values.

The same idea works for x tending to negative infinity, and we can also extend it to finite values of x. If we can specify a pair of values as close as we like to l, and be sure that as long as x lies close enough to a then f(x) lies between these values, then f(x) tends to the limit l as x tends to a.

Figure 1: A sequence apparently tending to a limit l

limiting friction

When two flat surfaces are in contact, the force between them consists of a normal reaction, N, at right angles to the surfaces, and a frictional reaction, F, parallel to them (see Figure 1).

If the coefficient of friction between the surfaces is \mu, then F\le \mu N. If the surfaces are slipping against each other, F is always equal to \mu N.

If the surfaces are static with respect to each other, but F = \mu N, then the surfaces are on the point of slipping. This set of circumstances is called limiting friction.

Figure 1: Two surfaces in frictional contact

L'Hôpital's Rule If f(x) and g(x) both tend to 0 as x tends to a, then $$\lim_{x\to a}\,\frac{f(x)}{g(x)} = \lim_{x\to a}\,\frac{f'(x)}{g'(x)},$$ provided the limit on the right exists.
linear (differential equation) A differential equation is linear if it is, or can be placed, in the form $$a_0(x)\,y+a_1(x)\,\frac{dy}{dx}+\dots+a_n(x)\,\frac{d^ny}{dx^n}=f(x).$$ Linear differential equations model systems in which the response is proportional to the stimulus.
logarithmic function

The inverse of an exponential function: the statement $$y = \log_a x$$ means exactly the same thing as $$x = a^y.$$ The inverse of the exponential function, e^x, is known as the natural logarithm, written \ln x (or sometimes just \log x). Its graph is shown in Figure 1.

Figure 1: Graph of the natural logarithm

Maclaurin series Given a function f that is smooth at x=0, f(x) can be expressed as an infinite series in x: $$f(x) = f(0) + x\, f'(0) + \frac{x^2}{2!}\,f''(0) + \dots + \frac{x^n}{n!} f^{(n)}(0) + \dots$$ This is called f's Maclaurin series.
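For f(x)=e^x every derivative at 0 equals 1, so the Maclaurin series is particularly simple, and its partial sums visibly converge to the exponential. A small sketch (the evaluation point and number of terms are arbitrary):

```python
import math

def maclaurin_exp(x, terms):
    # Maclaurin series of e^x: f(x) = 1 + x + x^2/2! + ... + x^n/n! + ...
    # since every derivative of e^x at x = 0 equals 1.
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)   # next term x^{n+1}/(n+1)! from x^n/n!
    return total

approx = maclaurin_exp(1.5, 20)
```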
magnitude A vector consists of two quantities: a direction, and a size, known as the magnitude. If we represent a vector as an arrow, the magnitude is usually represented by the arrow's length.
maximum A maximum of a function f is a point that is the highest in its immediate neighbourhood. If f is a function of one variable, then df/dx is zero at a maximum; if f is a function of more than one variable, then all its partial derivatives are zero at a maximum.

A maximum of a function of two variables corresponds, in a surface plot of f, to the top of a hill.

method of undetermined coefficients A method of finding the particular integral of a linear differential equation with constant coefficients. A "trial'' solution is proposed, (usually) from the same family as the driving function (that is, the equation's right-hand side), but containing unknown constants whose values are found by substituting the trial solution into the differential equation.
minimum

A minimum of a function f is a point that is the lowest in its immediate neighbourhood. If f is a function of one variable, then df/dx is zero at a minimum; if f is a function of more than one variable, then all its partial derivatives are zero at a minimum.

A minimum of a function of two variables corresponds, in a surface plot of f, to the bottom of a depression.

modulus

The modulus of a real number x, also called its absolute value, is written |x| and defined by $$|x| = \left\{ \begin{array}{cc} x,&x\ge 0,\\ -x,&x<0. \end{array}\right.$$

The graph of |x| is shown in Figure 1.

Figure 1: Graph of the modulus function

monkey saddle

A monkey saddle is an example of a stationary point of a function of two variables that doesn't fall into any of the usual categories, being neither a maximum, nor a minimum, nor an ordinary (simple) saddle point. Such stationary points are rare, and correspond to a value of exactly zero for the test function $$z_{xx}\,z_{yy}-{z_{xy}}^2.$$ An example is the point at the origin in the surface plot of $$z=x^3-3\,x\,y^2,$$ shown in the figure.

Figure 1: A monkey saddle: the point (0,0,0) on the surface plot z=x^3-3\,x\,y^2

natural logarithm

The inverse of the exponential function, e^x; the natural logarithm is written \ln x and is defined by stating that $$y=\ln x$$ if and only if $$x=e^y.$$

Figure 1: Plot of the exponential function y=e^x (blue) and its inverse,
the natural logarithm y=\ln x

normal reaction See limiting friction
odd function

An odd function is a function f such that $$f(x) = -f(-x)$$ for all x for which f is defined. Odd functions have graphs with rotation symmetry about the origin: see Figure 1.

Figure 1: Graph of an odd function

order 1 The differential equation $$\frac{d\theta}{dt}=-k\,(\theta-\theta_E)$$ is first order, because the highest derivative it contains is the first. The differential equation $$\frac{d^2x}{dt^2}-2\frac{dx}{dt}+x=\sin t$$ is second order, because the highest derivative it contains is the second; and so on.
order 2 The order of a numerical method describes how the error behaves as we decrease the step size. If the error is, in the limit, proportional to the step size, the method is first order; if it is proportional to the square of the step size, the method is second order; and so on.
ordinate The y-coordinate of a point.
Osborn's rule

A method for converting trigonometrical identities to hyperbolic identities:

  1. Convert every trigonometrical function to the corresponding hyperbolic function.

  2. Change the sign of each term containing the product of two (hyperbolic) sines.

Thus for example $$\cos2\theta=\cos^2\theta-\sin^2\theta$$ becomes $$\cosh2x=\cosh^2x+\sinh^2x.$$
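The converted identity is easy to confirm numerically, since \cosh 2x and \cosh^2 x+\sinh^2 x should agree everywhere (sample points chosen arbitrarily):

```python
import math

# Osborn's rule: cos 2t = cos^2 t - sin^2 t becomes
# cosh 2x = cosh^2 x + sinh^2 x (the sinh.sinh term changes sign).
xs = (-1.5, 0.0, 0.3, 2.0)
worst = max(abs(math.cosh(2 * x) - (math.cosh(x) ** 2 + math.sinh(x) ** 2))
            for x in xs)
```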

partial sum The nth partial sum of the series $$\sum_{r=1}^{\infty}\,a_r$$ is $$\sum_{r=1}^n\,a_r.$$ The partial sums of a series form a sequence.
particular integral

The general solution of a linear differential equation $$a_0(x)\,y+a_1(x)\,\frac{dy}{dx}+\dots+a_n(x)\,\frac{d^ny}{dx^n}=g(x)$$ consists, in general, of two components: one that just reflects the intrinsic properties of the system and one that also reflects the way the system is driven. The latter is known as the particular integral. {\em Any} solution of the differential equation can serve as a particular integral, but the simplest is usually chosen.

If the coefficients a_0, a_1 etc are constants rather than functions of x, then the particular integral is often closely related to the driving function g(x).

Pascal's triangle

Pascal's triangle is a quick way of working out the coefficients in the binomial expansion of (a+b)^n for small positive integers n. The first line of the triangle is $$1\quad\quad1$$ and each subsequent line is generated by calculating the sums of neighbouring elements of the line above (and then putting a 1 on each end). The first seven lines of Pascal's triangle are

$$\begin{array}{ccccccccccccccc} &&&&&&1&&1&&&&&&\\ &&&&&1&&2&&1&&&&&\\ &&&&1&&3&&3&&1&&&&\\ &&&1&&4&&6&&4&&1&&&\\ &&1&&5&&10&&10&&5&&1&&\\ &1&&6&&15&&20&&15&&6&&1&\\ 1&&7&&21&&35&&35&&21&&7&&1 \end{array}$$

and thus, for example, $$(1+x)^5 = 1+5x+10x^2+10x^3+5x^4+x^5.$$
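The rule for generating each line from the one above translates directly into a short program; a sketch in Python:

```python
def pascal_rows(n):
    """First n rows of Pascal's triangle, starting from [1, 1]."""
    rows = [[1, 1]]
    while len(rows) < n:
        prev = rows[-1]
        # Sums of neighbouring elements, with a 1 put on each end.
        nxt = [1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1]
        rows.append(nxt)
    return rows

rows = pascal_rows(7)
# rows[4] holds the coefficients of (a+b)^5: 1, 5, 10, 10, 5, 1.
```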

permutations The number of ways of selecting and arranging r distinct objects from a set of n is $$ ^nP_r = \frac{n!}{(n-r)!}.$$
point of inflexion

A point of inflexion of a function f(x) is a point at which its gradient stops falling and starts rising, or vice versa. At any point of inflexion, f''(x) = 0 (but be careful: this can also occur at maxima or minima). The idea is illustrated in Figure 1. Note that the rightmost point of inflexion is also a stationary point, because the gradient there happens to be zero; the other two points of inflexion are not stationary points.

Note: one way to think about points of inflexion is to imagine driving a car along a road shaped like the curve. Some of the time, you're steering left, and some of the time right. Points of inflexion correspond to your steering wheel being, for that instant, exactly centred, as you cross from right steer to left steer or vice versa.

Figure 1: Points of inflexion

position vector

The position vector of a point A is the vector OA, where O is the origin: see Figure 1.

Figure 1: OA, the position vector of the point A

prime factors Every whole number greater than 1 can be written in exactly one way as the product of prime factors. For example, $$588 = 2\times2\times3\times7\times7.$$
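The factorisation can be found by repeated trial division; a minimal sketch in Python, checked against the example above:

```python
def prime_factors(n):
    """Prime factorisation by trial division (fine for small n)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors

factors_588 = prime_factors(588)   # 588 = 2 x 2 x 3 x 7 x 7
```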
prime number A prime number is a positive whole number with exactly two factors: itself and 1. (Note that 1 itself, which has just one factor, is not generally regarded as prime.) Numbers with more than two factors, which can be written as a product in a variety of different ways, are called composite.
principal value

A simple trigonometrical equation such as $$\sin x = 0.65$$ has infinitely many solutions, because the graph of y = \sin x is periodic. One of these solutions is treated as the principal one, namely the one that happens to lie between -\pi/2 and \pi/2. This is shown in Figure 1. It is this principal value that we use when calculating \sin^{-1} 0.65.

Figure 1: Principal solutions of sine equations

When the equation involves a cosine instead of a sine, the principal value lies between 0 and \pi; when a tangent, -\pi/2 and \pi/2 again. These are shown in Figures 2 and 3 respectively.

Figure 2: Principal solutions of cosine equations

Figure 3: Principal solutions of tangent equations

quadratic A quadratic function is anything of the form $$f(x) = ax^2+bx+c,$$ where a, b and c are constants and a\ne0.
radians

The "natural'' measure of angle, used because it makes calculus with trigonometrical functions much more straightforward; for example, if x is measured in radians then $$\frac{d}{dx} (\sin x) = \cos x.$$ The angle in radians is equal to "arc length over radius''. There are 2\pi radians in a full circle.

Figure 1: One radian

Figure 1: One radian

radius of convergence

Maclaurin series generally converge either for all x or for -r< x< r, for some r. In the latter case, we say the radius of convergence is r; in the former, we say it is infinite. (Taylor expansions about x=a work in a similar way; either they converge for all x or they converge for a-r< x< a+r.)

The radius of convergence can be calculated by applying the ratio test: it is given by the range of x-values for which the ratio of successive terms is numerically less than 1.

Figure 1: Plot of \arctan x (blue) together with successive Maclaurin approximations
(darkening shades of green), showing convergence only within -1<x<1

range

Strictly, when defining a function, we must specify a set to which its values all belong: a set of possible "outputs''. This set is called the function's range.

Note: the terminology can get ambiguous here. Some people say that a function's range must be precisely all its possible values, whereas others call this set the "image'', and allow the range to be any set within which the image is contained.

rational expression See rational function.
rational functions A rational function is a function of the form $$\frac{p(x)}{q(x)},$$ where p and q are polynomials: for example $$\frac{x^2-3x+1}{2x^3-3}.$$
rationalisation When a fraction contains a surd in the denominator, it is always possible to express it, equivalently, with the surd moved to the numerator. For example $$\frac{2}{\sqrt{7}} = \frac{2}{\sqrt{7}} \times \frac{\sqrt{7}}{\sqrt{7}} = \frac{2\sqrt{7}}{7}.$$ This process is called rationalisation of the denominator.
ratio test

In the case of the series $$\sum_{r=1}^{\infty} \,a_r,$$ if the limit $$\lim_{r\to\infty}\,\left|\frac{a_{r+1}}{a_r}\right|$$ exists and is less than 1, then the series converges; if it exists and is greater than 1, then the series diverges; if the fraction $$\left|\frac{a_{r+1}}{a_r}\right|$$ tends to infinity, then the series diverges.

Note that if this fraction either (i) fails to converge while not tending to infinity, or (ii) converges to 1 exactly, then the ratio test tells us nothing about whether or not the series converges.
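As a concrete illustration, consider the series with terms a_r = r/2^r: the ratio |a_{r+1}/a_r| = (r+1)/(2r) tends to 1/2 < 1, so the series converges (its sum is in fact 2). A quick numerical sketch:

```python
# Ratio test applied numerically to the series with terms a_r = r / 2^r.
def a(r):
    return r / 2**r

# Successive ratios (r+1)/(2r) approach the limit 1/2 < 1.
ratios = [a(r + 1) / a(r) for r in range(1, 101)]
limit_estimate = ratios[-1]

# The series converges; its partial sums approach 2.
partial = sum(a(r) for r in range(1, 200))
```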

residuals See best fit curve
Richardson extrapolation

In general, a way of using two estimates for a quantity, calculated using different step sizes, to obtain a third estimate that we can expect to be better than either, using what we know about the estimates' order of convergence (see order (2)).

In the case of the trapezium rule, given two estimates T_m and T_n, calculated using m intervals and n intervals respectively, the Richardson extrapolation is $$\frac{n^2\,T_n-m^2\,T_m}{n^2-m^2}.$$ In the case of Simpson's rule, if the two estimates are S_m and S_n, then the Richardson extrapolation is $$\frac{n^4\,S_n-m^4\,S_m}{n^4-m^4}.$$
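The trapezium-rule formula above is easy to try out; a sketch using the integral of x^2 over [0,1], whose exact value is 1/3 (the choice of integrand and of m=2, n=4 intervals is arbitrary):

```python
def trapezium(f, a, b, n):
    # Trapezium rule with n equal intervals.
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    return h * (ys[0] + 2 * sum(ys[1:-1]) + ys[-1]) / 2

f = lambda x: x**2        # integral over [0, 1] is exactly 1/3
t2 = trapezium(f, 0.0, 1.0, 2)    # 0.375
t4 = trapezium(f, 0.0, 1.0, 4)    # 0.34375

# Richardson extrapolation for the (second order) trapezium rule:
# (n^2 T_n - m^2 T_m) / (n^2 - m^2) with m = 2, n = 4.
extrapolated = (4**2 * t4 - 2**2 * t2) / (4**2 - 2**2)
```

For a quadratic integrand the trapezium-rule error is exactly proportional to the square of the step size, so here the extrapolation recovers 1/3 exactly.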

right handed system The vectors {\bf a}, {\bf b} and {\bf c} form a right-handed system if turning a screwdriver from {\bf a} to {\bf b} would drive a screw in a direction corresponding to {\bf c}.
separable variables A first order differential equation has separable variables if it is, or can be written, in the form $$\frac{dy}{dx} = f(x)\,g(y).$$ Such differential equations can be solved by performing the integrals on either side of the equation $$\int\frac{1}{g(y)}\,dy=\int f(x)\,dx.$$
sequence A sequence is a list of numbers that has a beginning, but that is in general of infinite length, for example $$1,1,2,3,5,\dots$$
series A series is what we obtain by adding the terms of a sequence, for example $$1+1+2+3+5+\dots$$ More strictly, it is the sequence of partial sums obtained by adding finite runs. If $$1,1,2,3,5,\dots$$ is our sequence, then $$1, 1+1, 1+1+2, 1+1+2+3, \dots$$ is the corresponding series.
shift theorems For Laplace transforms. If $$\mathcal{L}[f(t)] = F(s),$$ then $$\mathcal{L}[e^{a\,t}\,f(t)] = F(s-a)$$ and $$\mathcal{L}[H(t-a)\,f(t-a)]=e^{-a\,s}\,F(s),$$ where H is the Heaviside step function.
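An illustrative application of the first shift theorem (the example is chosen here, not taken from the original): since \mathcal{L}[\sin t]=1/(s^2+1),

```latex
% First shift theorem with f(t) = sin t and a = 2:
\[
  \mathcal{L}[\sin t] = \frac{1}{s^{2}+1}
  \quad\Longrightarrow\quad
  \mathcal{L}\!\left[e^{2t}\sin t\right] = \frac{1}{(s-2)^{2}+1}.
\]
```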
simple saddle point

A (simple) saddle point is a stationary point of a function of two variables that is a minimum along some paths and a maximum along others. On a surface plot, it corresponds to a pass between two hills, or to the shape of a horse-riding saddle, or to a Pringle crisp.

Figure 1: A simple saddle: the point (0,0,0) on the surface plot z=x^2-y^2

Simpson's rule

A numerical method for calculating the approximate value of an integral, by approximating the integrand by a sequence of quadratic polynomials. We sample the integrand at n+1 points, $$y_0,\,y_1,\,\dots,y_n,$$ where n is even; these correspond to x-values evenly spaced a distance h apart. Then the integral is given approximately by $$\frac{h}{3}\,(y_0+4\,y_1+2\,y_2+4\,y_3+2\,y_4+\dots+2\,y_{n-2}+4\,y_{n-1}+y_n).$$

Figure 1: Simpson's rule: numerical integration by approximating an integrand
(blue curve) with a sequence of quadratic functions (three red curves)
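The weighted sum in the entry above can be sketched in code (an illustrative implementation under the stated assumptions, not a definitive one):

```python
# Hedged sketch of Simpson's rule: weights 1, 4, 2, 4, ..., 2, 4, 1
# applied to n + 1 equally spaced samples, with n even.

def simpson(f, a, b, n):
    if n % 2 != 0:
        raise ValueError("Simpson's rule requires an even number of intervals")
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    weights = [1] + [4 if i % 2 == 1 else 2 for i in range(1, n)] + [1]
    return h / 3 * sum(w * y for w, y in zip(weights, ys))

# Simpson's rule integrates cubics exactly: the integral of x^3 over [0, 1] is 1/4.
estimate = simpson(lambda x: x**3, 0, 1, 4)
```

Because each pair of intervals is fitted with a quadratic, the rule is exact for polynomials up to degree three, which the example exploits.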

singularities A singularity is a point at which a function is undefined, and close to which the function tends to plus or minus infinity.
skew

Two lines in three dimensions are skew if they are neither parallel nor intersecting: that is, if they "miss'' one another.

Figure 1: Two skew lines and the line joining their points of closest approach

stationary points

A stationary point of a function f(x) is a point at which its graph is locally horizontal: that is, a point at which f'(x) = 0. The idea is illustrated in Figure 1, which shows the three types that exist: from left to right, a maximum, a minimum and a stationary point of inflexion.

Figure 1: Stationary points: maximum, minimum and stationary point of inflexion

substitution See change of variable.
sum of squares See best fit curve.
sum to infinity Some series have the property that their partial sums get closer and closer to a certain value as we take more and more terms. For example, the series $$1+\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\dots$$ gets closer and closer to the value 2. When this is the case, we say that the series has a sum to infinity, and write (in our example) $$1+\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\dots=\sum_{n=0}^\infty\frac{1}{2^n} = 2.$$
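The partial sums of the example series can be watched converging numerically (an illustrative sketch, not from the original):

```python
# Illustrative sketch: partial sums of 1 + 1/2 + 1/4 + 1/8 + ...
# approach the sum to infinity, which is 2.

partial_sums = []
total = 0.0
for n in range(20):
    total += 1 / 2**n
    partial_sums.append(total)
```

Each new term halves the remaining distance to 2, so the partial sums increase towards 2 without ever reaching it.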
surd A surd is an unevaluated square root, such as \sqrt{5}, or more generally an unevaluated nth root, such as \sqrt[5]{12}. Surds are used because they're exact: when you evaluate an irrational root as a decimal, you always lose precision.
surface plot

A function z=f(x,y) visualised as a surface in three dimensions: the coordinates x and y are treated like map references, and z then specifies a height. Mathematical surfaces resemble physical landscapes, with features like hills, depressions and mountain passes.

Figure 1: Surface plot of the function z=(x^2-y^2)\,e^{-x^2-y^2}

tangent In two dimensions, a straight line that just touches a curve at a point. In three dimensions, a plane that just touches a surface at a point.
Taylor series Given a function f that is smooth at x=a, f(x) can be expressed as an infinite series in powers of (x-a): $$f(x) = f(a) + (x-a)\,f'(a) + \frac{(x-a)^2}{2!} f''(a) + \dots + \frac{(x-a)^n}{n!} f^{(n)}(a) + \dots$$ This is called f's Taylor series at a. The Taylor series at 0 is called the Maclaurin series.
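As an illustration (the example function is chosen here, not taken from the original), truncating the Maclaurin series of e^x after a handful of terms already gives a good approximation:

```python
import math

# Illustrative sketch: the Maclaurin series (Taylor series at a = 0) of e^x,
# truncated to its first `terms` terms: the sum of x**n / n! for n = 0 .. terms-1.

def maclaurin_exp(x, terms):
    return sum(x**n / math.factorial(n) for n in range(terms))

approx = maclaurin_exp(1.0, 10)  # compare with math.exp(1) = 2.71828...
```

With ten terms the truncation error at x = 1 is already well below one part in a million, because the factorials in the denominators grow so quickly.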
trapezium rule

A numerical method for calculating the approximate value of an integral, by approximating the integrand by a sequence of linear functions. We sample the integrand at n+1 points, $$y_0,\,y_1,\,\dots,y_n;$$ these correspond to x-values evenly spaced a distance h apart. Then the integral is given approximately by $$\frac{h}{2}\,(y_0+2\,y_1+2\,y_2+\dots+2\,y_{n-2}+2\,y_{n-1}+y_n).$$

Figure 1: Trapezium rule: numerical integration by approximating an integrand
(blue curve) with a sequence of linear functions (six red line segments)
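The weighted sum in the entry above can be sketched in code (an illustrative implementation, with the integrand chosen for the example):

```python
import math

# Hedged sketch of the trapezium rule: weights 1, 2, 2, ..., 2, 1
# applied to n + 1 equally spaced samples.

def trapezium(f, a, b, n):
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    return h / 2 * (ys[0] + 2 * sum(ys[1:-1]) + ys[-1])

# Example: the integral of sin(x) over [0, pi] is exactly 2;
# more intervals give a better estimate.
coarse = trapezium(math.sin, 0, math.pi, 8)
fine = trapezium(math.sin, 0, math.pi, 64)
```

The error shrinks in proportion to h^2, so multiplying the number of intervals by 8 reduces the error by a factor of about 64.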

unbounded See bounded
unit vector A vector of magnitude 1.
vector equation

Certain structures in n-dimensional space (lines and planes, for example) can be specified by means of an equation involving the position vector of points that they contain.

The vector equation of a line is parametric, and is of the form $${\bf r} = {\bf a} + t\,{\bf d},$$ where {\bf a} is the position vector of a point on the line and {\bf d} is a vector along its direction. In three dimensions, the vector equation of a plane is of the form $${\bf r}\cdot{\bf n}=h,$$ where {\bf n} is a vector normal to the plane.
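A small numerical illustration (the vectors and names are hypothetical, chosen for this sketch): a line whose direction is perpendicular to a plane's normal, and which passes through a point of the plane, lies entirely in that plane.

```python
# Illustrative sketch: points of the line r = a + t d, checked against
# the plane r . n = h (here the plane x + y - z = 1).

def line_point(a, d, t):
    """Position vector of the point of r = a + t d with parameter t."""
    return tuple(ai + t * di for ai, di in zip(a, d))

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

a, d = (1, 0, 0), (0, 1, 1)   # point on the line, direction of the line
n, h = (1, 1, -1), 1          # normal to the plane, and the constant h

# Since d . n = 0 and a . n = h, every point of the line satisfies r . n = h.
p = line_point(a, d, 2.0)
```

Substituting any value of t gives a point whose dot product with {\bf n} equals h, confirming that the whole line lies in the plane.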