San José State University
applet-magic.com: Thayer Watkins, Silicon Valley & Tornado Alley, USA

Helmholtz Equation of One Dimension
The Helmholtz equation arises in many contexts in the attempt to give a mathematical explanation of the physical world. These range from Alan Turing's explanation of animal coat patterns to Schrödinger's time-independent equation in quantum theory. The quantum mechanical probability density function for a harmonic oscillator with a principal quantum number of 60 is shown below.
The heavy line is the time-spent probability density function for a classical harmonic oscillator of the same energy. As can be seen, the spatial average of the quantum mechanical probability density is at least approximately equal to the classical value.
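As a hedged illustration of this correspondence (my own sketch, not the original figure), the snippet below computes both densities with ħ = m = ω = 1, using the numerically stable three-term recurrence for the normalized oscillator eigenfunctions; all names and parameter choices here are assumptions for illustration only.

import numpy as np

def qho_density(n, x):
    """|psi_n(x)|^2 for the harmonic oscillator (hbar = m = omega = 1),
    via the stable recurrence on the normalized eigenfunctions."""
    psi_prev = np.zeros_like(x)                      # psi_{-1} = 0
    psi = np.pi ** -0.25 * np.exp(-x ** 2 / 2.0)     # psi_0
    for m in range(1, n + 1):
        psi, psi_prev = (np.sqrt(2.0 / m) * x * psi
                         - np.sqrt((m - 1.0) / m) * psi_prev), psi
    return psi ** 2

def classical_density(n, x):
    """Time-spent density of a classical oscillator with the same energy E = n + 1/2."""
    amplitude = np.sqrt(2 * n + 1.0)
    p = np.zeros_like(x)
    inside = np.abs(x) < amplitude
    p[inside] = 1.0 / (np.pi * np.sqrt(amplitude ** 2 - x[inside] ** 2))
    return p

n = 60
x = np.linspace(-12.0, 12.0, 2001)
pq, pc = qho_density(n, x), classical_density(n, x)
dx = x[1] - x[0]
print(pq.sum() * dx, pc.sum() * dx)   # each close to 1 (both densities are normalized)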
The Helmholtz equation per se is

∇²φ + k²φ = 0
where k is a constant. The Generalized Helmholtz equation is that equation with k being a function of the independent variable(s).
In one dimension the Helmholtz equation is

(d²φ/dx²) + k²φ = 0

Its solution is the sinusoidal function φ(x) = A·sin(kx) + B·cos(kx). In one dimension the Generalized Helmholtz equation has a sinusoidal-like solution of varying amplitude and wavelength.
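As a quick numerical check (a sketch with arbitrarily chosen values of A, B and k, not taken from the text), the finite-difference second derivative of φ(x) = A·sin(kx) + B·cos(kx) satisfies the one-dimensional Helmholtz equation up to discretization error:

import numpy as np

# Arbitrary illustrative constants (not from the original text).
A, B, k = 1.3, -0.7, 2.0
x = np.linspace(0.0, 10.0, 100_001)
phi = A * np.sin(k * x) + B * np.cos(k * x)

# Second derivative by repeated central differences.
d2phi = np.gradient(np.gradient(phi, x), x)

# phi'' + k^2 phi should vanish up to discretization error (away from the endpoints).
residual = d2phi[2:-2] + k ** 2 * phi[2:-2]
print(np.max(np.abs(residual)))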
A sinusoidal solution is an exponential function of ikx, where i is the imaginary unit. This suggests that the solution of the generalized equation may be a function of

X = i∫₀^x k(z)dz

so that (dX/dx) = ik(x). Then

(dφ/dx) = (dφ/dX)(dX/dx) = ik(dφ/dX)
(d²φ/dx²) = i(dk/dx)(dφ/dX) − k²(d²φ/dX²)

Since (d²φ/dx²) is equal to −k²φ, the above equation can be reduced upon division by −k² to

(d²φ/dX²) = φ − i(d(1/k)/dx)(dφ/dX)

Let (dφ/dX) be denoted as ψ and (d(1/k)/dx) as γ. Then

(dφ/dX) = ψ
(dψ/dX) = φ − iγψ

In matrix form

(dΦ/dX) = MΦ
where

Φ =
| φ |
| ψ |

and

M =
| 0    1  |
| 1   −iγ |
Note that γ is a function of x and hence also of X and so is the matrix M.
The matrix M can be decomposed into (J − iγK), where J is the 2×2 matrix with zeroes on the principal diagonal and 1's in the off-diagonal positions, and K is the 2×2 matrix of all zeroes except for a 1 in the (2,2) position.
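A minimal sketch (my own construction, using an arbitrary sample value of γ) confirming this decomposition:

import numpy as np

J = np.array([[0.0, 1.0],
              [1.0, 0.0]])
K = np.array([[0.0, 0.0],
              [0.0, 1.0]])

gamma = 0.37   # arbitrary sample value of gamma = d(1/k)/dx at some point x
M = np.array([[0.0, 1.0],
              [1.0, -1j * gamma]])

print(np.allclose(M, J - 1j * gamma * K))   # True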
For the analogous scalar differential equation the solution would go as follows:

(dy/dX) = m(X)y
y(X) = exp(∫₀^X m(Z)dZ)·y(0)

This suggests that the solution to the matrix equation might be

Φ(X) = Exp(∫₀^X M(Z)dZ)·Φ(0)

The RHS is in fact the first term of a Magnus series solution for the equation. Let us now consider the function

Λ(X) = Exp(∫₀^X M(Z)dZ)·Λ(0)
where

Λ =
| λ |
| μ |
and μ(x)=(dλ/dx).
The integral of the matrix M is the following matrix

∫₀^X M(Z)dZ =
| 0    X             |
| X   −i∫₀^x γ(z)dz  |

which is the same as (XJ − i[∫₀^x γ(z)dz]K).
The integral of γ expressed as a function of X is the same as its integral expressed as a function of x over corresponding ranges. But the integral of γ over the range of 0 to x is [1/k(x) − 1/k(0)].
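This is just the fundamental theorem of calculus applied to γ = d(1/k)/dx. A hedged numerical check with a hypothetical wavenumber profile k(x) = 1 + x² (my own choice, not from the text):

import numpy as np

def k(x):
    # Hypothetical smooth, positive wavenumber profile used only for illustration.
    return 1.0 + x ** 2

x_end = 2.0
x = np.linspace(0.0, x_end, 20_001)
gamma = np.gradient(1.0 / k(x), x)     # gamma = d(1/k)/dx

# Trapezoidal integral of gamma from 0 to x_end versus 1/k(x_end) - 1/k(0).
dx = x[1] - x[0]
integral = np.sum((gamma[:-1] + gamma[1:]) / 2.0) * dx
print(integral, 1.0 / k(x_end) - 1.0 / k(0.0))   # both approximately -0.8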
The solution is therefore

Λ(X) = Exp( XJ − i[∫₀^x γ(z)dz]K )·Λ(0)
Let Z = ∫₀^x k(z)dz, so that X = iZ. Then the solution can be represented as

Λ(Z) = Exp( iZJ − i[∫₀^x γ(z)dz]K )·Λ(0)
For the matrix exponential function, Exp(A+B) = Exp(A)Exp(B) = Exp(B)Exp(A) when AB = BA; i.e., when A and B commute. The matrices J and K do not commute; i.e., JK ≠ KJ.
JK =
| 0  1 |
| 0  0 |

KJ =
| 0  0 |
| 1  0 |
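The non-commutativity is easy to confirm directly (a sketch using only the matrices already defined):

import numpy as np

J = np.array([[0, 1],
              [1, 0]])
K = np.array([[0, 0],
              [0, 1]])

print(J @ K)                           # [[0 1] [0 0]]
print(K @ J)                           # [[0 0] [1 0]]
print(np.array_equal(J @ K, K @ J))    # False: J and K do not commute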
Let L denote ∫₀^x γ(z)dz, so that the exponent in the solution above is (iZJ − iLK). Obviously iZJ and −iLK do not commute because J and K do not commute. Therefore the above solution cannot be evaluated simply as the product Exp(iZJ)Exp(−iLK).
However the function

Ω(Z) = Exp(iZJ)Exp(−iLK)·Ω(0)

is of interest and ultimately can be related to Λ(Z).
Ω is defined as

Ω =
| ω |
| ζ |
and ζ(x)=(dω/dx).
Again note that J is the matrix with 1's off the principal diagonal and zeroes on it, and

−iLK =
| 0    0             |
| 0   −i∫₀^x γ(z)dz  |
Note that for n ≥ 1

(−iLK)ⁿ =
| 0    0                  |
| 0   [−i∫₀^x γ(z)dz]ⁿ    |
therefore Exp(−iLK) is given by

Exp(−iLK) =
| 1    0                   |
| 0   exp(−i∫₀^x γ(z)dz)   |
The oscillatory aspect of the solution for Ω(x) is given by Exp(iZJ) and the moving-average part by Exp(−iLK), which amounts to a factor of exp(−i∫₀^x γ(z)dz) applied to the second component. Since γ is equal to (d(1/k)/dx), the integration of γ from 0 to x gives [1/k(x) − 1/k(0)], and hence the moving-average part amounts to the factor exp(−i[1/k(x) − 1/k(0)]) = exp(−i/k(x))·exp(i/k(0)).
Constant factors are irrelevant in determining probability density distributions because they cancel out in normalization.
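A sketch checking the closed form of Exp(−iLK) against a general-purpose matrix exponential, with an arbitrary sample value of L (which would be 1/k(x) − 1/k(0) for an actual wavenumber profile):

import numpy as np
from scipy.linalg import expm

K = np.array([[0.0, 0.0],
              [0.0, 1.0]])
L = -0.8   # arbitrary sample value; equals 1/k(x) - 1/k(0) for some hypothetical profile

closed_form = np.array([[1.0, 0.0],
                        [0.0, np.exp(-1j * L)]])
print(np.allclose(expm(-1j * L * K), closed_form))   # True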
For matrices A and B which do not commute, the Baker-Campbell-Hausdorff relation gives a product representation of Exp(A+B). The first factor of the product is Exp(A)Exp(B). The next factor is Exp(−½[A, B]), where [A, B] is the commutator of A with B; i.e., AB − BA. Thus the second approximation of Exp(A+B) is

Exp(A)Exp(B)Exp(−½[A, B])
For the preceding

[J, K] =
| 0   1 |
| −1  0 |
and [iZJ, −iLK] is equal to ZL[J, K].
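A hedged numerical comparison (with arbitrary sample values of Z and L, chosen only for illustration) of the one-factor and two-factor product approximations to Exp(iZJ − iLK):

import numpy as np
from scipy.linalg import expm

J = np.array([[0.0, 1.0],
              [1.0, 0.0]])
K = np.array([[0.0, 0.0],
              [0.0, 1.0]])

Z, L = 0.4, 0.2                 # arbitrary sample values for illustration
A, B = 1j * Z * J, -1j * L * K

exact = expm(A + B)
first = expm(A) @ expm(B)                        # first approximation
second = first @ expm(-0.5 * (A @ B - B @ A))    # with the commutator correction

print(np.linalg.norm(exact - first))     # larger residual
print(np.linalg.norm(exact - second))    # smaller residual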
(To be continued.)