Notation and Definitions
- Differentiation
  - For a function f(x1) of one point, or r(x1, x2) of two points:
- f[!i]
- The ordinary derivative of f in the direction of coordinate
number i.
- f[/i]
- The covariant derivative of f in direction i. It would differ
from the ordinary derivative only if f had a composite value, i.e.
vector or matrix valued rather than scalar.
- r[1!i] or r[2!j]
- The ordinary derivative of r in direction i of the first argument,
or direction j of the second.
- r[1/i] or r[2/j]
- Similarly for covariant derivatives.
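    For illustration, a minimal sympy sketch of the f[!i] notation; the
    function f below is hypothetical, chosen only to show the notation
    (for a scalar f, the covariant derivative f[/i] coincides with f[!i]):

        from sympy import symbols, diff, cos

        x1, x2 = symbols('x1 x2')
        f = x1**2 * cos(x2)        # hypothetical scalar function

        f_bang_1 = diff(f, x1)     # f[!1], derivative in direction 1
        f_bang_2 = diff(f, x2)     # f[!2], derivative in direction 2
        print(f_bang_1, f_bang_2)  # 2*x1*cos(x2)  -x1**2*sin(x2)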
- Vector
  - Consider a curve c(t) on the manifold. For any function f(p) on the
    manifold, you can form d/dt f(c(t)), the directional derivative. Focus
    on one point q at a time. Different curves through q frequently produce
    the same directional derivative, no matter which function is
    differentiated -- precisely when the curves are tangent at q. Thus make
    equivalence classes among the curves through q. These equivalence
    classes are called vectors (at q). They form a vector space.
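    A small sympy check of the equivalence-class idea, using hypothetical
    curves and a hypothetical sample function: two distinct curves through
    q = (0,0) that are tangent there give the same directional derivative.

        from sympy import symbols, diff, sin

        t, x1, x2 = symbols('t x1 x2')
        f = sin(x1) + x1*x2**2            # arbitrary sample function

        c1 = (t, 2*t)                     # curve through q, tangent (1, 2)
        c2 = (t + t**2, 2*t - t**3)       # different curve, same tangent

        for c in (c1, c2):
            val = diff(f.subs({x1: c[0], x2: c[1]}), t).subs(t, 0)
            print(val)                    # 1 both times: the same vector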
- p-Vector
  - A combination of p vectors (p in 0 to n) is referred to as a p-vector;
    it defines an element of p-dimensional extent, e.g. area (p = n-1) or
    volume (p = n). The vectors in a p-vector are combined antisymmetrically
    with the Grassmann product, such that v^w = -w^v.
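    A numeric illustration of the antisymmetry, with hypothetical vectors in
    R^3; the components of v ^ w form the antisymmetric array
    v[i]w[j] - v[j]w[i]:

        import numpy as np

        v = np.array([1.0, 2.0, 0.0])          # hypothetical vectors
        w = np.array([0.0, 1.0, 3.0])

        vw = np.outer(v, w) - np.outer(w, v)   # components of v ^ w
        wv = np.outer(w, v) - np.outer(v, w)   # components of w ^ v
        print(np.allclose(vw, -wv))            # True: v^w = -(w^v)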
- Form
  - A linear functional on p-vectors is called a p-form. While the value of
    a form (when a vector is fed to it) is typically a scalar, a composite
    object (form or vector or matrix) can also be the value. Forms can also
    be combined with the Grassmann product. Any function f (defined at a
    point q) defines a 1-form at q, the gradient of f, wherein the argument
    vector takes the directional derivative of the function; the gradients
    of the coordinate functions, notated dx[i] (i = 1..n), form a basis of
    all the 1-forms. For calculation, suppose the components of a 1-form A
    are A[i] and those of a vector v are v[i]. Then the functional value is
    A(v) = sum (i = 1..n) A[i]v[i]. This is often abbreviated A.v
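    A sketch of the computation A(v) = sum A[i]v[i], taking A to be the
    gradient 1-form of a hypothetical f, so that A.v is the directional
    derivative of f along v:

        from sympy import symbols, diff

        x1, x2 = symbols('x1 x2')
        f = x1*x2 + x2**2                  # hypothetical function
        A = [diff(f, x1), diff(f, x2)]     # components A[i] of df
        v = [3, -1]                        # components v[i] of a vector

        Av = sum(Ai*vi for Ai, vi in zip(A, v))   # A.v
        print(Av)                          # -x1 + x2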
- Covariant and Contravariant
  - When a map y(x) from one manifold to another is represented by
    coordinates, the Jacobian is J[i][j] = y[i][!j]. Given a function f(y),
    define a new function g(x) = f(y(x)). The gradient df has components
    f[!i]; it is a 1-form. What is the gradient of g? By the chain rule,
    dg[j] = sum (i = 1..n) f[!i] J[i][j].
1-forms and any composite objects that transform
similarly are called ``covariant''. Objects that transform through the
inverse of the Jacobian, such as vectors, are called ``contravariant''.
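    A sympy check of the covariant transformation rule, with a hypothetical
    map y(x) and function f(y); it verifies dg[j] = sum f[!i] J[i][j]:

        from sympy import symbols, diff, exp, simplify

        x1, x2, y1, y2 = symbols('x1 x2 y1 y2')
        y = [x1 + x2**2, x1*x2]                  # hypothetical map y(x)
        f = y1*exp(y2)                           # hypothetical f(y)
        g = f.subs({y1: y[0], y2: y[1]})         # g(x) = f(y(x))

        xs, ys = (x1, x2), (y1, y2)
        J = [[diff(y[i], xs[j]) for j in range(2)] for i in range(2)]
        for j in range(2):
            chain = sum(diff(f, ys[i]).subs({y1: y[0], y2: y[1]}) * J[i][j]
                        for i in range(2))
            print(simplify(diff(g, xs[j]) - chain))   # 0 and 0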
- Summation Convention
  - When covariant and contravariant objects appear in the same term, they
    are almost always summed, as in the above expression for dg[j]. When
    the same subscript letter is repeated in a covariant and a contravariant
    position, summation is assumed but the sum symbol is not actually
    written.
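    The same contraction written with numpy's einsum, where the repeated
    index i is summed automatically and j is left free (the numbers are
    hypothetical):

        import numpy as np

        df = np.array([2.0, 5.0])                  # f[!i]
        J  = np.array([[1.0, 3.0],
                       [0.0, 2.0]])                # J[i][j]
        dg = np.einsum('i,ij->j', df, J)           # dg[j] = f[!i] J[i][j]
        print(dg)                                  # [ 2. 16.]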
- Metric Dual (represented by *)
  - For a p-form A, suppose you take its metric product with another p-form
    B. Point by point, this is a linear functional of p-forms, that is, it
    is a p-vector. Suppose you wished to integrate the function value
    relative to the volume element in the space. An alternative way to
    express the kernel of the integral would be to obtain an (n-p)-form *A
    and take its Grassmann product with B. This *A is the metric dual of A.
    It has a simple form: it has the same components (some changing signs)
    multiplied by G, the positive square root of the determinant of the
    metric tensor. (It is unclear what the metric dual would be if the
    determinant were negative.) On an even dimensional space, ** (metric
    dual twice) reverses the sign of odd-degree forms but is the identity on
    even-degree forms. In odd dimensions ** is the identity in every degree:
    with the usual Riemannian convention, ** = (-1)^(p(n-p)) on p-forms, and
    p(n-p) is even whenever n is odd.
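    A one-liner tabulating the sign of ** on p-forms, assuming the usual
    Riemannian formula ** = (-1)^(p(n-p)):

        for n in (2, 3, 4, 5):
            print(n, [(-1)**(p*(n - p)) for p in range(n + 1)])
        # 2 [1, -1, 1]
        # 3 [1, 1, 1, 1]
        # 4 [1, -1, 1, -1, 1]
        # 5 [1, 1, 1, 1, 1, 1]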
- Current
  - A linear functional on fields of forms. One example of a current is a
    shape (a volume, an area, etc.), acting on a form by integrating the
    form over it. It's a theorem that every current T can be realized as the
    limit of a sequence of C-infinity forms A(j), in the sense that (B being
    the argument form field) T(B) = lim over j of the integral of *A(j) ^ B.
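    A concrete current, approximated numerically: integration over the unit
    circle in R^2, acting on the (hypothetical) 1-form B = x1 dx2; by
    Green's theorem the value is the enclosed area, pi.

        import numpy as np

        t = np.linspace(0.0, 2*np.pi, 200001)
        x1, x2 = np.cos(t), np.sin(t)          # the curve (the "shape")
        T_B = np.sum(x1[:-1] * np.diff(x2))    # T(B) = integral of x1 dx2
        print(T_B)                             # 3.14159...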
- ð =
- -*d* on a space of even dimension. When the dimension is odd, the sign
is negative only for forms of odd degree.
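    The sign pattern, assuming the standard convention that on p-forms
    ð = (-1)^(n(p+1)+1) *d* (which reduces to -*d* for even n):

        for n in (2, 3, 4, 5):
            print(n, [(-1)**(n*(p + 1) + 1) for p in range(n + 1)])
        # 2 [-1, -1, -1]
        # 3 [1, -1, 1, -1]
        # 4 [-1, -1, -1, -1, -1]
        # 5 [1, -1, 1, -1, 1, -1]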
- Ð =
  - (dð + ðd). This is the "generalized Laplacian", and Green's function
    is the kernel of its inverse. (With the sign conventions above, Ð on a
    0-form in flat R^n reduces to minus the ordinary Laplacian.)
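    A check of that reduction on flat R^2, hardcoding d on 0-forms (the
    gradient) and ð on 1-forms (minus the divergence, from ð = -*d*); the
    sample f is hypothetical:

        from sympy import symbols, diff

        x1, x2 = symbols('x1 x2')
        f = x1**2 * x2                      # hypothetical 0-form

        df = (diff(f, x1), diff(f, x2))     # d f, a 1-form
        Df = -(diff(df[0], x1) + diff(df[1], x2))   # Ð f = ð d f
        print(Df)                           # -2*x2 = -(f_x1x1 + f_x2x2)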
- Harmonic form
  - A form H for which ÐH = 0.
- Compact
  - For a set S to be compact means that every subset of S with infinitely
    many points has (at least one) limit point in S. (Corollary: every limit
    point of S lies in S, i.e. S is closed.) A subset of Euclidean space R^n
    is compact if and only if it is closed and bounded (the Heine-Borel
    theorem); in a general metric space compact sets are closed and bounded,
    but the converse can fail. A function has compact support if the closure
    of the set of points where it is nonzero is compact.
Kodaira's Theorem
A theorem of Fredholm, extended (?) by De Rham (th. 22): On a compact
Riemannian manifold, there are only a finite number of linearly independent
harmonic forms. If a form B is prespecified, the equation ÐU = B has a
solution if and only if B is orthogonal to all the harmonic forms
(specifically, to each basis form). If B has compact support, that is
sufficient for this theorem to apply even on a noncompact manifold. U is not
unique (you can add any harmonic form), but there is a unique component of U
orthogonal to all the harmonic forms.
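A finite-dimensional analogue of this statement, as a numpy sketch: Ð is
played by a singular symmetric matrix L, harmonic forms by ker L, and the
Fredholm alternative says Lu = b is solvable exactly when b is orthogonal to
the kernel, with a unique solution component orthogonal to the kernel (the
matrix and vectors are hypothetical):

    import numpy as np

    L = np.array([[ 2.0, -1.0, -1.0],
                  [-1.0,  2.0, -1.0],
                  [-1.0, -1.0,  2.0]])   # singular; kernel is (1,1,1)
    h = np.ones(3) / np.sqrt(3.0)        # normalized "harmonic" vector

    b = np.array([1.0, 0.0, -1.0])       # b.h = 0, so Lu = b is solvable
    u = np.linalg.pinv(L) @ b            # the solution orthogonal to ker L
    print(np.allclose(L @ u, b))         # True
    print(np.isclose(u @ h, 0.0))        # True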
Kodaira's theorem (th. 24): On a Riemannian manifold, any square integrable
current A (a limit of a sequence of forms) can be uniquely decomposed as A = A1
+ A2 + A3, where A1 is in the metric completion of the set of all forms
compactly homologous to zero (i.e. there exists Z1 with compact support such
that dZ1 = A1, or at least A1 is the limit of a sequence of such forms); A2 is
in the metric completion of the set of forms compactly cohomologous to 0 (there
exists Z2 with compact support such that ðZ2 = A2, or A2 is the limit of such);
and A3 is a harmonic current. (If A3 has compact support, it's a theorem that
dA3 = 0 and ðA3 = 0, but this isn't true if A3 doesn't have compact support.
The square integrability of A may or may not preclude a harmonic component with
noncompact support.)
A generally similar theorem (th. 25) holds for currents continuous in the
mean at infinity. (Meaning: consider any set of forms f[i] that are
C-infinity, have compact support, and whose metric products (f[i],f[i]) are
bounded over the set. A current A is "continuous in the mean at infinity" if
A[f[i]] (the functional action, i.e. the integral) is bounded over every such
set of forms.)
For computing the decomposition: First take the metric product of A with
each of a basis of harmonic forms, suitably normalized, and assemble a linear
combination thereof having the same metric products; that's A3, and (A - A3)
is orthogonal to all harmonic forms. Now integrate Green's function acting on
(A - A3) to solve the Fredholm equation ÐU = A - A3. Then A1 = d(ðU) and
A2 = ð(dU).
On R^n (n > 2), Green's function is proportional to 1/r^(n-2), where r is the
distance between the argument points.
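A finite-dimensional run of this recipe, as a numpy sketch: the manifold is
replaced by a small simplicial complex (4 vertices, 5 edges, one filled
triangle, chosen so that one harmonic direction survives), d and ð by
incidence matrices and their transposes, and the Green's-function integral by
a pseudoinverse; the 1-form A is hypothetical.

    import numpy as np

    d0 = np.array([[-1, 1, 0, 0],        # edge 01; d0 = d on 0-forms
                   [-1, 0, 1, 0],        # edge 02
                   [ 0,-1, 1, 0],        # edge 12
                   [ 0,-1, 0, 1],        # edge 13
                   [ 0, 0,-1, 1]], dtype=float)      # edge 23
    d1 = np.array([[1, -1, 1, 0, 0]], dtype=float)   # boundary of face 012

    L1 = d0 @ d0.T + d1.T @ d1           # Ð = dð + ðd on 1-forms

    A = np.array([1.0, 2.0, -1.0, 0.5, 3.0])   # hypothetical 1-form

    w, V = np.linalg.eigh(L1)            # step 1: project onto harmonics
    H = V[:, np.abs(w) < 1e-10]          # basis of ker L1 (harmonic forms)
    A3 = H @ (H.T @ A)

    U = np.linalg.pinv(L1) @ (A - A3)    # step 2: solve ÐU = A - A3

    A1 = d0 @ (d0.T @ U)                 # step 3: A1 = d(ðU)
    A2 = d1.T @ (d1 @ U)                 #         A2 = ð(dU)
    print(np.allclose(A, A1 + A2 + A3))  # True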
Kodaira's theorem and friends were not proved for double currents,
matrix-valued currents, etc.; only for scalar-valued currents (electric
currents, for example, are why De Rham called them "currents"). This whole
approach, however, assumes that an analog can be produced for matrix-valued
currents.