
Notation and Definitions

Topological Space
A set of points and a collection of subsets, defined to be the open subsets of the space; the collection must contain the empty set and the whole space, and must be closed under arbitrary unions and finite intersections.
Hausdorff Space
A topological space in which, for every pair of distinct points p and q, there exist two open sets P and Q containing them respectively, that do not intersect (that is, whose intersection is empty). Every metric space is Hausdorff, an easy theorem; the converse needs extra hypotheses (by the Urysohn metrization theorem, a Hausdorff space that is regular and has a countable basis of open sets is metrizable, which covers the manifolds considered below). As the most familiar example, if a distance function r(p,q) is already known to exist on the space, and r(p,q) = d, then a suitable choice for P and Q (not the only one) is to let P be the open ball of radius d/3 centered on p, and similarly for Q; they cannot intersect, because a common point would put p and q within 2d/3 of each other by the triangle inequality, contradicting r(p,q) = d.
Distance Function or Metric
A real-valued function r(p,q) of two points. It is zero when the points are equal, positive when they are not, and symmetric in its arguments. A distance function must also satisfy the triangle inequality: r(a,c) <= r(a,b) + r(b,c) for any three points. The Minkowski ``metric'' is not a distance function, because it is negative for tachyonic (spacelike) separations and it does not satisfy the triangle inequality.
Metric Space
A topological space whose topology comes from a distance function: the open sets are generated by the open balls of the metric. Every metric space is Hausdorff, an easy theorem; the converse holds only with extra hypotheses (see Hausdorff Space above).
Riemannian Manifold
A manifold is a Hausdorff space, with a countable basis of open sets, such that some neighborhood of every point (an open set containing the point) is homeomorphic to R^n, the n-dimensional Euclidean space, for some n. A Riemannian manifold is a manifold having a positive definite inner product on tangent vectors at each point, varying smoothly from point to point; this inner product is called the metric tensor. It is a theorem that every such manifold, given a differentiable structure (which is implicit whenever tangent vectors are used), admits a Riemannian metric tensor, constructed with a partition of unity; conversely, a connected Riemannian manifold is a metric space, the distance between two points being the infimum of the lengths of the paths joining them.
Differentiation
For a function f(x1) of one point, or a function r(x1, x2) of two points, the following notation is used:
f[!i]
The ordinary derivative of f in the direction of coordinate number i.
f[/i]
The covariant derivative of f in direction i, defined below in terms of the Christoffel symbols.
r[1!i] or r[2!j]
The ordinary derivative of r in direction i of the first argument, or direction j of the second.
r[1/i] or r[2/j]
Similarly for covariant derivatives.
Covariant and Contravariant Tensor
Suppose we have two manifolds X and Y, and we make a map function y(x) that produces a point in Y for each point in X, and we represent y(x) in coordinates. The ``Jacobian'' matrix J is the derivative of that map: J[i][j] = y[i][!j]. (See above for the notation [!j].)

Given a scalar function f(y), its gradient df is a 1-form with components f[!i]. Define a related function g(x) = f(y(x)). What is the gradient of g? By the chain rule:

        dg[j] = sum (i = 1..n) f[!i] J[i][j].  
    
A ``covariant tensor'' is an object whose coordinate representation transforms the way a gradient does when going from Y to X: by matrix multiplying it by the Jacobian of the map function. A ``contravariant tensor'' transforms by multiplying by the inverse of the Jacobian; vectors are the typical example.
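As a concrete check of these rules, here is a minimal sketch (a hypothetical example using sympy and the polar-to-Cartesian map, neither of which appears in this document) that builds the Jacobian J[i][j] = y[i][!j] and confirms that the components of dg are those of df matrix-multiplied by J:
        import sympy as sp

        r, t = sp.symbols('r t', positive=True)      # coordinates x = (r, t) on X
        y1, y2 = sp.symbols('y1 y2')                  # coordinates on Y

        y_of_x = [r*sp.cos(t), r*sp.sin(t)]           # the map y(x): polar to Cartesian
        J = sp.Matrix([[sp.diff(yi, xj) for xj in (r, t)] for yi in y_of_x])  # J[i][j] = y[i][!j]

        f = y1**2 + y1*y2                             # a scalar function f(y)
        df = sp.Matrix([[sp.diff(f, y1), sp.diff(f, y2)]])    # row of components f[!i]

        g = f.subs({y1: y_of_x[0], y2: y_of_x[1]})    # g(x) = f(y(x))
        dg = sp.Matrix([[sp.diff(g, r), sp.diff(g, t)]])      # components dg[j]

        # dg[j] should equal sum (i) f[!i] J[i][j], i.e. the row df times the matrix J
        difference = dg - (df * J).subs({y1: y_of_x[0], y2: y_of_x[1]})
        print(sp.simplify(difference))                # -> Matrix([[0, 0]])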
Summation Convention
When covariant and contravariant objects appear in the same term, they are almost always summed, as in the above expression for dg[j]. When the same subscript letter is repeated in a covariant and a contravariant position, summation over it is assumed but the sum symbol is not actually written.
Christoffel Symbols
How do you transform the derivative of a tensor? It isn't a tensor; the derivatives of the Jacobian get mixed in. However, the ordinary derivative of a tensor can be separated into two parts; one is a tensor called the ``covariant derivative'' and the other is a matrix-type product of the original object with a set of functions called the ``Christoffel symbols''. Let g[i][j] represent the metric tensor and g'[i][j] represent its matrix inverse. The definitions of the two kinds of Christoffel symbols are:
        [ij,k] = 1/2 (g[i][k][!j] + g[j][k][!i] - g[i][j][!k])
        { k }
        {i j} = g'[k][m] [ij,m]
    
In this document the notation C[i][j][k] is used in place of [ij,k]. The ordinary derivative of a 1-form q, of which a gradient is a typical example, is split up like this:
        q[i][!j] = q[i][/j] + q[m] g'[m][k] [ij,k]
    
More normally one sees the covariant derivative defined in terms of the ordinary derivative (which is calculated by ordinary means):
        q[i][/j] = q[i][!j] - q[m] g'[m][k] [ij,k]
    
The analogous formula if q is contravariant is:
        q[i][/j] = q[i][!j] + q[m] g'[i][k] [mj,k]
    
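As a worked illustration (the polar-coordinate metric on the plane is an arbitrary choice, not taken from this document), the following sympy sketch computes both kinds of Christoffel symbols directly from the definitions above:
        import sympy as sp

        r, t = sp.symbols('r t', positive=True)
        x = [r, t]
        g = sp.Matrix([[1, 0], [0, r**2]])            # metric tensor g[i][j] in polar coordinates
        g_inv = g.inv()                               # its matrix inverse g'[i][j]
        n = 2

        # first kind: [ij,k] = 1/2 (g[i][k][!j] + g[j][k][!i] - g[i][j][!k])
        C1 = [[[sp.Rational(1, 2) * (sp.diff(g[i, k], x[j])
                                     + sp.diff(g[j, k], x[i])
                                     - sp.diff(g[i, j], x[k]))
                for k in range(n)] for j in range(n)] for i in range(n)]

        # second kind: {k, ij} = g'[k][m] [ij,m], summed over m
        C2 = [[[sp.simplify(sum(g_inv[k, m] * C1[i][j][m] for m in range(n)))
                for k in range(n)] for j in range(n)] for i in range(n)]

        print(C1[1][1][0])   # [tt,r] = -r
        print(C2[1][1][0])   # {r, tt} = -r
        print(C2[0][1][1])   # {t, rt} = 1/r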
Vector
Consider a curve c(t) on the manifold. For any function f(p) on the manifold you can form d/dt f(c(t)), the directional derivative of f along the curve. Focus on one point q at a time. Different curves through q frequently produce the same directional derivative at q, no matter which function is differentiated; this happens exactly when the curves are tangent at q. So gather the curves through q into equivalence classes of mutually tangent curves. These equivalence classes are called vectors (at q), and they form a vector space.
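A small sympy sketch (the curves and the function are illustrative choices, not taken from this document) makes the equivalence concrete: two different curves through the same point with the same tangent give the same directional derivative there, so they represent the same vector:
        import sympy as sp

        t, x, y = sp.symbols('t x y')
        f = sp.exp(x) * y                             # any function on the plane

        c1 = [t, 2*t]                                 # a straight line through (0, 0)
        c2 = [sp.sin(t), 2*t + t**3]                  # a different curve, same tangent at t = 0

        def directional(curve):
            # d/dt f(curve(t)), evaluated at t = 0
            return sp.diff(f.subs({x: curve[0], y: curve[1]}), t).subs(t, 0)

        print(directional(c1), directional(c2))       # the same number for both curves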
p-Vector
An ordered list of p vectors (p from 0 to n) is referred to as a p-vector; it defines an element of p-dimensional area or volume (area for p = n-1, volume for p = n). The vectors in a p-vector are combined antisymmetrically with the Grassmann product, for which v^w = -w^v. The familiar ``cross'' product of two vectors is the Grassmann product in 3 dimensions, reinterpreted as a vector by metric duality (defined below).
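A short numpy check (the two vectors are arbitrary illustrative values) shows that the independent components of v^w in 3 dimensions are exactly the components of the familiar cross product:
        import numpy as np

        v = np.array([1.0, 2.0, 3.0])
        w = np.array([-2.0, 0.5, 4.0])

        wedge = np.outer(v, w) - np.outer(w, v)       # (v^w)[i][j] = v[i]w[j] - v[j]w[i]
        independent = np.array([wedge[1, 2], wedge[2, 0], wedge[0, 1]])

        print(independent)                            # components (v^w)[23], (v^w)[31], (v^w)[12]
        print(np.cross(v, w))                         # the same three numbers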
Form
A linear functional on p-vectors is called a p-form. The value of a form (when a p-vector is fed to it) is typically a scalar, though a composite object (a form, a vector, or a matrix) can also be the value. Forms can be combined with the Grassmann product just as vectors can. Any differentiable function f defines a 1-form at a point q, the gradient of f: its value on an argument vector is the directional derivative of f along that vector. In particular the coordinate functions define a basis of the 1-forms, notated dx[i] (i = 1..n). For calculation, suppose the components of a 1-form A are A[i] and those of a vector v are v[i]. Then the functional value is A(v) = sum (i = 1..n) A[i]v[i], often abbreviated A.v.
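To see the pairing A.v at work, the following sympy sketch (the curve and the function are illustrative choices) feeds the tangent vector of a curve to the gradient 1-form df and recovers the directional derivative d/dt f(c(t)):
        import sympy as sp

        t, x, y = sp.symbols('t x y')
        c = [sp.cos(t), sp.sin(t)]                    # a curve c(t) in the plane
        f = x**2 * y                                  # a function on the plane

        df = [sp.diff(f, x), sp.diff(f, y)]           # components A[i] of the 1-form df
        v = [sp.diff(ci, t) for ci in c]              # components v[i] of the tangent vector

        pairing = sum(df[i].subs({x: c[0], y: c[1]}) * v[i] for i in range(2))   # A.v
        direct = sp.diff(f.subs({x: c[0], y: c[1]}), t)                          # d/dt f(c(t))
        print(sp.simplify(pairing - direct))          # -> 0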
Metric Dual (represented by *)
For a p-form A, consider its metric (inner) product with another p-form B. Point by point this is a linear functional of B, so through the metric A corresponds to a p-vector. Suppose you wished to integrate that product over the volume element of the space. An alternative way to express the kernel of the integral is to form an (n-p)-form *A and take its Grassmann product with B; this *A is the metric dual of A. It has a simple expression: essentially the same components as A, rearranged, with some indices raised by the metric and some signs changed, multiplied by G, the positive square root of the determinant of the metric tensor. (If the determinant were negative, as can happen for an illegal ``distance function'', it is unclear what the metric dual would be.) On a space of even dimension, ** (the metric dual taken twice) reverses the sign of odd degree forms and is the identity on even degree forms. On a space of odd dimension, with a positive definite metric, ** is the identity on forms of every degree.
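As a minimal sketch, assuming the Euclidean metric on R^3 (so G = 1 and raising indices changes nothing), the following numpy fragment forms the metric dual of a 1-form with the permutation symbol and checks that taking the dual twice gives back the original form, as expected in odd dimension:
        import numpy as np

        eps = np.zeros((3, 3, 3))                     # the permutation (Levi-Civita) symbol
        eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
        eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

        A = np.array([3.0, -1.0, 2.0])                # components A[i] of a 1-form

        star_A = np.einsum('i,ijk->jk', A, eps)       # (*A)[j][k]: a 2-form, same numbers rearranged
        star_star_A = np.einsum('jk,jki->i', star_A, eps) / 2.0   # dual again: back to a 1-form

        print(star_A)
        print(star_star_A)                            # equals A, so ** is the identity here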
Current
A linear functional on fields of forms. One example of a current is a shape (a volume, an area, a curve, and so on), the argument form being integrated over it. The various differentiation operators are also currents. It is a theorem that every current T can be realized as the limit of a sequence of C-infinity forms A(j), in the sense that T(B) = lim (j -> infinity) of the integral of *A(j) ^ B, where B is the argument form field.
ð (Co-derivative)
The co-derivative ð takes a p-form to a (p-1)-form. ð = -*d* on a space of even dimension. When the dimension is odd, the sign is negative only for forms of odd degree.
Ð (Laplacian)
Ð = (dð + ðd). This is the ``generalized Laplacian''. Green's function is the kernel of its inverse.
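As a sketch of the simplest case, assuming Euclidean R^3, the following sympy fragment traces the definition on a 0-form f: ðf vanishes, so Ðf = ðdf = -*d*df (the minus sign because df has odd degree), which reduces to minus the ordinary Laplacian of f:
        import sympy as sp

        x, y, z = sp.symbols('x y z')
        f = sp.Function('f')(x, y, z)

        df = [sp.diff(f, v) for v in (x, y, z)]       # components of the 1-form df
        # *df carries the same components on dy^dz, dz^dx, dx^dy, so
        # d*df = (f_xx + f_yy + f_zz) dx^dy^dz, and *d*df is the scalar below
        star_d_star_df = sum(sp.diff(df[i], v) for i, v in enumerate((x, y, z)))

        generalized_laplacian = -star_d_star_df       # Ð f = ð df = -*d*df
        ordinary_laplacian = sum(sp.diff(f, v, 2) for v in (x, y, z))
        print(sp.simplify(generalized_laplacian + ordinary_laplacian))   # -> 0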
Harmonic form
A form H for which ÐH = 0.
Compact
For a set S (in a metric space) to be compact means that every subset of S with infinitely many points has at least one limit point in S. (It follows that S is closed: every limit point of S belongs to S.) A subset of Euclidean space R^n is compact if and only if it is closed and bounded (the Heine-Borel theorem); in a general metric space compact sets are closed and bounded, but the converse can fail. A function has compact support if the closure of the set of points where it is nonzero is compact.
