
Program for the Equations of Motion

For actual computation the various fields are represented numerically. One possibility is to populate a region of the 7-D slice with points and compute C and V at each point. But this approach is inefficient and the results are hard to comprehend. Instead, the functions will be expanded in basis functions (typically orthogonal) with the series truncated at some complexity limit. If the basis functions are reasonably centered on the region where the function is nonzero, an arbitrary function can be represented to a given accuracy with approximately the same amount of information in either the pointwise or the basis-function method, and physically interesting functions tend to have simple basis-function representations, particularly if the basis functions are chosen to match the physical situation.

The fields will be resolved into components aligned on pseudo-Euclidean coordinates. Each component of each field will be represented by the same scheme. The number of components (with N = 8 dimensions) is:

Christoffel symbols:       N^2 (N+1)/2          288
Velocity field:            N                      8
Integrated center point:   N                      8
Total:                     N^2 (N+1)/2 + 2N     304
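
As a sanity check on these counts, here is a minimal sketch (assuming N = 8, as the totals above imply; the Christoffel symbols are symmetric in their two lower indices):

    # Component counts for the field representation, assuming N = 8.
    N = 8

    christoffel = N * (N * (N + 1) // 2)   # one upper index times symmetric lower pair
    velocity    = N
    center      = N
    total       = christoffel + velocity + center

    print(christoffel, velocity, center, total)   # 288 8 8 304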

It will not be necessary to store spatial derivatives since these can be computed easily from the basis functions, but it will be necessary to store a history of time derivatives for use by the numerical integrator.

One scheme of basis functions is to split up the space into slices on each coordinate, nonoverlapping but filling the space except for a set of measure 0 on the boundaries. These "slice and dice" cubic functions correspond with a lattice of points, and they are in fact orthogonal if scaled by the width of each interval. And in fact, averaging over cubes gives better convergence than using the value of the function at the cube's center, as is seen in Simpson's rule for integration. For a cubical lattice, this many points are needed:
Per side (m):      2      3      4      5       6
Total (m^7):     128   2187  16384  78125  279936
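
A one-dimensional sketch of the "slice and dice" idea (the 7-D version is just the product over coordinates): each interval's indicator function, scaled by 1/sqrt(width), is an orthonormal basis function, and the stored coefficient is then the cell average of the field rather than its center-point value. The interval count and test function below are only illustrative.

    import numpy as np

    # 1-D "slice and dice" basis: indicator functions scaled by 1/sqrt(width).
    m = 8
    edges = np.linspace(0.0, 1.0, m + 1)
    width = np.diff(edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    f = lambda x: np.exp(-10.0 * (x - 0.4) ** 2)     # illustrative field

    def cell_average(a, b, samples=1000):
        return f(np.linspace(a, b, samples)).mean()  # stand-in for the exact integral

    averages = np.array([cell_average(a, b) for a, b in zip(edges[:-1], edges[1:])])
    coeffs = averages * np.sqrt(width)               # coefficients in the orthonormal basis

    # Piecewise-constant reconstruction from cell averages versus from
    # center-point values: the average is the L2-optimal choice per cell.
    xs = np.linspace(0.0, 1.0, 8000, endpoint=False)
    idx = (xs * m).astype(int)
    err_avg    = np.sqrt(np.mean((f(xs) - averages[idx]) ** 2))
    err_center = np.sqrt(np.mean((f(xs) - f(centers)[idx]) ** 2))
    print(err_avg, err_center)                       # the averaged version is smaller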

Physical situations are known to be well represented by spherical-type basis functions. In this scheme, the following basis functions are multiplied together. For each factor an estimate is made of the number of adjustable parameters and of the number of functions (points) retained in that dimension.

  1. Master coefficient. The coefficients of the basis functions are arbitrary to some degree, in that multiplying one and dividing another by the same amount has no effect on the result. But each function has a natural normalization, and will be assumed to have that magnitude, and the master coefficient will multiply their product.
  2. Energy coordinate. Fourier transform, sin plus cos, with integral harmonics from zero to some bound. Parameter: phase origin. Points: 5 (sin(0*theta) == 0).
  3. Momentum coordinates. Fourier transform, sin plus cos, with half-integral harmonics from 1/2, 3/2, ... to some bound. Parameters: phase origin for each. Points: 6. Beware: this may not be appropriate for the assumed hypersphere topology.
  4. Space radial coordinate. Probably Bessel functions scaled by exp(-(r/a)^2/2) where r = sqrt((x-x0)^2+(y-y0)^2+(z-z0)^2). Parameters: the scale factor a and the center point. Points: 6.
  5. Space angular coordinates. Spherical harmonics. Parameters: Euler angles. Points: 5 x 6.

Estimated total coefficients: 194400 (similar to the 6^7 lattice).
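
The 194400 figure is just the product of the per-coordinate counts in the list above; a minimal sketch of the bookkeeping:

    # Coefficient count for the spherical-type basis, from the list above.
    energy   = 5          # integral harmonics in the energy coordinate
    momentum = 6 ** 3     # half-integral harmonics in each of 3 momentum coordinates
    radial   = 6          # damped Bessel functions in the space radius
    angular  = 5 * 6      # spherical harmonics

    print(energy * momentum * radial * angular)   # 194400, versus 6^7 = 279936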

Let the basis functions be designated b[ij] where i designates the function category (e.g. energy Fourier transform or space radial), and j designates which level (harmonic, Bessel function). The definition of orthogonal functions is that

    int (all space) b[ij] b[kl] = (ij == kl)

That is, if i != k or j != l the integral is 0, while if they are equal the integral is 1.

To represent a function f: Let

    F[ij] = int (all space) f b[ij]
    f* = sum[ij] F[ij] b[ij]
    F[ij] = int (all space) f* b[ij]

In other words, f* (the sum) has the same orthogonal components as f does. It can be proved that as the number of terms increases, (f-f*) converges to zero, or f* converges to f.
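
A minimal numerical illustration of these definitions, using a 1-D Fourier (sin/cos) basis on [0, 2 pi] as a stand-in for the full product basis; the test function and truncation level are arbitrary:

    import numpy as np

    # Orthonormal 1-D Fourier basis on [0, 2*pi]: b[0] is the constant,
    # then alternating cos and sin harmonics up to harmonic 5.
    x = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
    dx = x[1] - x[0]

    def basis(n):
        if n == 0:
            return np.full_like(x, 1.0 / np.sqrt(2.0 * np.pi))
        k = (n + 1) // 2
        fn = np.cos if n % 2 == 1 else np.sin
        return fn(k * x) / np.sqrt(np.pi)

    b = [basis(n) for n in range(11)]

    # Orthonormality check: int b[i] b[j] == (i == j), up to quadrature error.
    gram = np.array([[np.sum(bi * bj) * dx for bj in b] for bi in b])
    print(np.max(np.abs(gram - np.eye(len(b)))))     # essentially zero

    # Projection F[i] = int f b[i], and reconstruction f* = sum F[i] b[i].
    f = np.exp(np.sin(x))                            # arbitrary smooth test function
    F = np.array([np.sum(f * bi) * dx for bi in b])
    f_star = sum(Fi * bi for Fi, bi in zip(F, b))
    print(np.max(np.abs(f - f_star)))                # shrinks as harmonics are added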

In formulas we often get products of two functions, such as fg. What is the orthogonal expansion of the product?

    f*g* converges to fg
    f*g* = (sum[ij] F[ij] b[ij]) (sum[kl] G[kl] b[kl]) 
    	= sum[mn] (sum[ij] (sum[kl] F[ij] G[kl] B[ijkl][mn])) b[mn]

where

    B[ijkl][mn] = int b[ij] b[kl] b[mn]

which is the orthogonal expansion of b[ij]b[kl]. Most orthogonal functions have simple product rules, e.g. sin(jx)cos(lx) = 1/2 (sin((j+l)x)+sin((j-l)x)). But when the families are different (i != k), the two factors depend on different coordinates, so the integration over family k's coordinate can be done first, giving (kl == mn) for that factor, and then the integration over family i's coordinate can be done, which may or may not be zero.
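
A sketch of this product-rule fact expressed through the expansion coefficients, for the Fourier family (the harmonics j and l below are arbitrary):

    import numpy as np

    # Check sin(j x) cos(l x) = 1/2 (sin((j+l)x) + sin((j-l)x)), and show that
    # the product's expansion in the normalized sine basis has only two
    # nonzero coefficients, at harmonics j-l and j+l.
    x = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
    dx = x[1] - x[0]
    j, l = 3, 2

    lhs = np.sin(j * x) * np.cos(l * x)
    rhs = 0.5 * (np.sin((j + l) * x) + np.sin((j - l) * x))
    print(np.max(np.abs(lhs - rhs)))                 # ~0: the product rule holds

    def sin_basis(m):
        return np.sin(m * x) / np.sqrt(np.pi)        # normalized sine harmonics

    coeffs = [np.sum(lhs * sin_basis(m)) * dx for m in range(1, 8)]
    print(np.round(coeffs, 6))                       # nonzero only at m = 1 and m = 5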

When solving the equations of motion, we find the trajectory of the solution as a function of some parameter t, which in the present case is equal to the time coordinate only because of a detail of construction. So what is the derivative of an orthogonal coefficient with respect to the parameter t? Looking at one component f, suppose we know its value at t and t+d, and at both ends we compute the orthogonal expansion, getting F(t)[ij] and F(t+d)[ij]. Then

    (d/dt)(F(t)[ij]) = lim (d->0) (F(t+d)[ij] - F(t)[ij])/d

But for a reasonably continuous f, integration, summation, subtraction and limits can be done in any order, so

    (d/dt)(F(t)[ij]) = int (all space) (d/dt)(f) b[ij]

(d/dt)(f) can be computed as a sum of products of other field components (e.g. covariant corrections) as well as spatial derivatives of basis functions (which have their own orthogonal expansions, generally not complicated). Thus, the orthogonal coefficients of the derivative can be computed by sums and products, not involving integration over the actual function f and its neighbors. Then a standard integration method such as Adams' method can be used to project f(t) to f(t+d), advancing the solution.
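
A minimal sketch of this advancement step, with a placeholder right-hand side standing in for the real dF/dt (which would be assembled from products of field coefficients and the expansions of the basis-function derivatives): the coefficient vector is advanced by the 3-step Adams-Bashforth formula using the stored history of derivative evaluations.

    import numpy as np

    def dF_dt(F):
        # Placeholder right-hand side; a real code would build this from sums
        # and products of orthogonal coefficients as described above.
        A = np.array([[0.0, 1.0], [-1.0, 0.0]])
        return A @ F

    def adams_bashforth3(F, history, d):
        # One step of the 3-step Adams-Bashforth method; `history` holds the
        # two previous derivative evaluations, newest first.
        k0 = dF_dt(F)
        k1, k2 = history
        F_next = F + d * (23.0 * k0 - 16.0 * k1 + 5.0 * k2) / 12.0
        return F_next, [k0, k1]                       # shift the derivative history

    d = 0.01
    F = np.array([1.0, 0.0])
    history = [dF_dt(F), dF_dt(F)]                    # crude start-up; a real code
                                                      # would bootstrap with Runge-Kutta
    for _ in range(100):
        F, history = adams_bashforth3(F, history, d)
    print(F, np.array([np.cos(1.0), -np.sin(1.0)]))   # close to the exact solution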

It's a fact of life that orthogonal functions come with parameters such as the center point, the orientation of spherical harmonics, the radial scale, and the momentum space phase offsets. Particular values of these parameters will likely represent the solution much better than others, for example if the basis center matches the center of "mass" of the actual field. The program should try to estimate an optimal set of parameters according to the actual configuration. It then needs to shift the current orthogonal expansion to use the new parameters. It should be possible to do this one coordinate at a time, greatly simplifying the work. The integration procedure will need historical field values, which will also have to be shifted to the current parameters even if those are not optimal in the historical context.
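
One way to picture the re-parameterization in a single coordinate, using a 1-D Gaussian-weighted (Hermite function) basis purely as an illustration: the new center is estimated from the field's own center of "mass", and the coefficients are re-projected onto the shifted basis. With the better-matched center, far fewer coefficients are needed for the same accuracy.

    import numpy as np
    from math import factorial
    from numpy.polynomial.hermite import hermval

    x = np.linspace(-10.0, 10.0, 4001)
    dx = x[1] - x[0]

    def hermite_fn(n, x, center):
        # n-th orthonormal Hermite function centered at `center`.
        u = x - center
        c = np.zeros(n + 1)
        c[n] = 1.0
        norm = 1.0 / np.sqrt(np.sqrt(np.pi) * 2.0 ** n * factorial(n))
        return norm * hermval(u, c) * np.exp(-0.5 * u * u)

    def project(field, center, nmax=8):
        return np.array([np.sum(field * hermite_fn(n, x, center)) * dx
                         for n in range(nmax)])

    def reconstruct(F, center):
        return sum(Fn * hermite_fn(n, x, center) for n, Fn in enumerate(F))

    field = np.exp(-0.5 * (x - 1.3) ** 2)          # a bump away from the old center
    x_new = np.sum(x * field) / np.sum(field)      # estimated optimal center

    F_old = project(field, 0.0)                    # expansion about the old center
    F_new = project(field, x_new)                  # expansion about the new center

    print(np.max(np.abs(field - reconstruct(F_old, 0.0))),
          np.max(np.abs(field - reconstruct(F_new, x_new))))   # new center wins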

It is likely that the center point will "move" relative to the start point, and it will be wise to apply an affine transformation of space-time to track this motion and to re-center the center point.

