Symplectic integrators#
import sympy.matrices
from sympy import init_session
# get nice printing of matrices
init_session()
# declare the step size h as a symbol
h = sympy.Symbol('h')
IPython console for SymPy 1.12 (Python 3.11.6-64-bit) (ground types: python)
These commands were executed:
>>> from sympy import *
>>> x, y, z, t = symbols('x y z t')
>>> k, m, n = symbols('k m n', integer=True)
>>> f, g, h = symbols('f g h', cls=Function)
>>> init_printing()
Documentation can be found at https://docs.sympy.org/1.12/
We’ll do parts (a) and (b) first, for each method:
Euler#
The update is

$$
x_{n+1} = x_n + h v_n, \qquad v_{n+1} = v_n - h x_n,
$$

which we can write by inspection as

$$
\begin{pmatrix} x_{n+1} \\ v_{n+1} \end{pmatrix} = \begin{pmatrix} 1 & h \\ -h & 1 \end{pmatrix} \begin{pmatrix} x_n \\ v_n \end{pmatrix}.
$$
# Euler
def J_euler(h):
    # x_{n+1} = x_n + h*v_n,  v_{n+1} = v_n - h*x_n
    return sympy.matrices.Matrix([[1, h], [-h, 1]])
print(J_euler(h))
print('Determinant = ', J_euler(h).det())
print('Forward step and then backwards is given by ')
pprint(J_euler(-h)*J_euler(h))
Matrix([[1, h], [-h, 1]])
Determinant = h**2 + 1
Forward step and then backwards is given by
⎡ 2 ⎤
⎢h + 1 0 ⎥
⎢ ⎥
⎢ 2 ⎥
⎣ 0 h + 1⎦
We see that the scheme is not reversible: both \(x\) and \(v\) are larger by a factor \(1+h^2\) after a forward-backward step.
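To make this concrete, here is a small numerical sketch (an addition, not part of the original notes; it uses NumPy, and the starting point \((x, v) = (1, 0)\) and step size are arbitrary) that iterates the Euler matrix and confirms the energy grows by exactly \(1+h^2\) per step:

# Sketch: iterate the Euler update numerically and confirm that the
# "energy" x**2 + v**2 grows by a factor of (1 + h**2) every step.
import numpy as np

h_val = 0.1
J = np.array([[1.0, h_val], [-h_val, 1.0]])
xv = np.array([1.0, 0.0])   # start at x = 1, v = 0
E0 = xv @ xv
for n in range(1, 4):
    xv = J @ xv
    print(n, (xv @ xv) / E0, (1 + h_val**2)**n)

The two columns agree exactly, since \(\mathbf{J}^T\mathbf{J} = (1+h^2)\,\mathbf{1}\).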
Euler-Cromer#
From the notes, the update is given by the matrix

$$
\mathbf{J}(h) = \begin{pmatrix} 1 & h \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -h & 1 \end{pmatrix},
$$

i.e. a kick (update \(v\) using the current \(x\)) followed by a drift (update \(x\) using the new \(v\)):
def J_ec(h):
    # drift: x -> x + h*v
    a = sympy.matrices.Matrix([[1, h], [0, 1]])
    # kick: v -> v - h*x
    b = sympy.matrices.Matrix([[1, 0], [-h, 1]])
    # kick first, then drift with the updated v
    return a*b
pprint(J_ec(h))
print('Determinant = ', J_ec(h).det())
print('Forward step and then backwards is given by ')
pprint(sympy.simplify(J_ec(-h)*J_ec(h)))
print('Inverse of J is')
pprint(J_ec(h).inv())
⎡ 2 ⎤
⎢1 - h h⎥
⎢ ⎥
⎣ -h 1⎦
Determinant = 1
Forward step and then backwards is given by
⎡ 4 2 3 ⎤
⎢h - h + 1 -h ⎥
⎢ ⎥
⎢ 3 2 ⎥
⎣ -h h + 1⎦
Inverse of J is
⎡1 -h ⎤
⎢ ⎥
⎢ 2⎥
⎣h 1 - h ⎦
The forward-backward step is given by

$$
\mathbf{J}(-h)\,\mathbf{J}(h) = \begin{pmatrix} 1 - h^2 + h^4 & -h^3 \\ -h^3 & 1 + h^2 \end{pmatrix} \neq \mathbf{1},
$$

so Euler-Cromer is not reversible. Another way to see this is that the inverse of the matrix is

$$
\mathbf{J}(h)^{-1} = \begin{pmatrix} 1 & -h \\ h & 1 - h^2 \end{pmatrix},
$$

whereas it should be \(\mathbf{J}(-h)\) if the method were reversible.
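To make the failure explicit, we can print the difference between the inverse and \(\mathbf{J}(-h)\) (a one-line check added here, not part of the original notes):

# The difference J(h)^{-1} - J(-h) should vanish for a reversible method;
# for Euler-Cromer it leaves a diagonal residual of order h**2.
pprint(sympy.simplify(J_ec(h).inv() - J_ec(-h)))

The residual is \(\mathrm{diag}(h^2,\,-h^2)\), which vanishes only as \(h \to 0\).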
Leapfrog#
Writing each step of the update as a separate matrix gives
def J_leapfrog(h):
    # half kick: v -> v - (h/2)*x
    a = sympy.matrices.Matrix([[1, 0], [-h/2, 1]])
    # full drift: x -> x + h*v
    b = sympy.matrices.Matrix([[1, h], [0, 1]])
    # kick-drift-kick
    return a*b*a
pprint(sympy.simplify(J_leapfrog(h)))
print('Determinant = ', J_leapfrog(h).det())
print('Forward step and then backwards is given by ')
pprint(sympy.simplify(J_leapfrog(-h)*J_leapfrog(h)))
print('Inverse of J is')
pprint(J_leapfrog(h).inv())
⎡ 2 ⎤
⎢ h ⎥
⎢1 - ── h ⎥
⎢ 2 ⎥
⎢ ⎥
⎢ 3 2⎥
⎢h h ⎥
⎢── - h 1 - ──⎥
⎣4 2 ⎦
Determinant = 1
Forward step and then backwards is given by
⎡1 0⎤
⎢ ⎥
⎣0 1⎦
Inverse of J is
⎡ 2 ⎤
⎢ h ⎥
⎢ 1 - ── -h ⎥
⎢ 2 ⎥
⎢ ⎥
⎢ 3 2⎥
⎢ h h ⎥
⎢- ── + h 1 - ──⎥
⎣ 4 2 ⎦
In this case we see that the forward-backward step gives back the identity matrix, so leapfrog is time-reversible! The inverse of the matrix is the same as the original matrix but with \(h\rightarrow -h\), i.e. \(\mathbf{J}(h)^{-1} = \mathbf{J}(-h)\).
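Since the whole point of the section is symplecticity, it is worth checking the symplectic condition \(\mathbf{J}^T \mathbf{S}\,\mathbf{J} = \mathbf{S}\) directly for all three methods (this check is an addition to the original problem; for a \(2\times 2\) map it is equivalent to \(\det\mathbf{J} = 1\)):

# A linear map J is symplectic iff J.T * S * J == S, where S is the
# standard symplectic form. For 2x2 matrices this is the same as det(J) = 1.
S = sympy.matrices.Matrix([[0, 1], [-1, 0]])
for name, J in [('Euler', J_euler(h)), ('Euler-Cromer', J_ec(h)),
                ('Leapfrog', J_leapfrog(h))]:
    print(name, 'symplectic?', sympy.simplify(J.T*S*J - S) == sympy.zeros(2, 2))

Euler fails the test, while Euler-Cromer and leapfrog both pass: both of the latter preserve phase-space area, even though only leapfrog is also time-reversible.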
Now do part (c):

If we write the energy (dropping the constant factor of \(1/2\)) as

$$
E_n = x_n^2 + v_n^2 = \begin{pmatrix} x_n & v_n \end{pmatrix} \begin{pmatrix} x_n \\ v_n \end{pmatrix},
$$

then one step later

$$
E_{n+1} = \begin{pmatrix} x_n & v_n \end{pmatrix} \mathbf{J}^T \mathbf{J} \begin{pmatrix} x_n \\ v_n \end{pmatrix},
$$

so we can inspect \(\mathbf{J}^T \mathbf{J}\) and \(E_{n+1}\) for each method:
print('Euler JT.J = ')
pprint(sympy.simplify(J_euler(h).T * J_euler(h)))
print('\nEuler-Cromer JT.J = ')
pprint(sympy.simplify(J_ec(h).T * J_ec(h)))
print('\nLeapfrog JT.J = ')
pprint(sympy.simplify(J_leapfrog(h).T * J_leapfrog(h)))
x = sympy.Symbol('x')
v = sympy.Symbol('v')
xv = sympy.matrices.Matrix([x, v])
print('\n\nEuler E_n+1 = ')
pprint(sympy.simplify(xv.T * J_euler(h).T * J_euler(h) * xv))
print('\nEuler-Cromer E_n+1 = ')
pprint(sympy.simplify(xv.T * J_ec(h).T * J_ec(h) * xv))
print('\nLeapfrog E_n+1 = ')
pprint(sympy.simplify(xv.T * J_leapfrog(h).T * J_leapfrog(h) * xv))
Euler JT.J =
⎡ 2 ⎤
⎢h + 1 0 ⎥
⎢ ⎥
⎢ 2 ⎥
⎣ 0 h + 1⎦
Euler-Cromer JT.J =
⎡ 4 2 3 ⎤
⎢h - h + 1 -h ⎥
⎢ ⎥
⎢ 3 2 ⎥
⎣ -h h + 1⎦
Leapfrog JT.J =
⎡ 6 4 3 ⎛ 2⎞⎤
⎢h h h ⋅⎝2 - h ⎠⎥
⎢── - ── + 1 ───────────⎥
⎢16 4 8 ⎥
⎢ ⎥
⎢ 3 ⎛ 2⎞ 4 ⎥
⎢h ⋅⎝2 - h ⎠ h ⎥
⎢─────────── ── + 1 ⎥
⎣ 8 4 ⎦
Euler E_n+1 =
⎡ 2 2 2 2 2 2⎤
⎣h ⋅v + h ⋅x + v + x ⎦
Euler-Cromer E_n+1 =
⎡ 4 2 3 2 2 2 2 2 2⎤
⎣h ⋅x - 2⋅h ⋅v⋅x + h ⋅v - h ⋅x + v + x ⎦
Leapfrog E_n+1 =
⎡ 6 2 5 4 2 4 2 3 ⎤
⎢h ⋅x h ⋅v⋅x h ⋅v h ⋅x h ⋅v⋅x 2 2⎥
⎢───── - ────── + ───── - ───── + ────── + v + x ⎥
⎣ 16 4 4 4 2 ⎦
Notice that \(E_{n+1}-E_n\) contains terms with different powers of \(h\) multiplying different combinations of \(x\) and \(v\). For the Euler method, \(E_{n+1}-E_n = h^2 E_n\), which is always positive, so the energy error grows with every step. For leapfrog, by contrast, the leading terms are of the form \(h^3 vx\) and \(h^4\,(v^2-x^2)\), which average to zero over an orbit (for the oscillator, \(vx\propto\sin 2t\) and \(v^2-x^2\propto\cos 2t\)). This is the reason for the much better scaling of the energy error with \(h\).
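As a final numerical check (a sketch, not part of the original problem; it assumes integration over one period \(t = 2\pi\), and the helper energy_error and the step sizes are illustrative), we can iterate each update matrix and watch how the relative energy error scales with \(h\):

# Sketch: integrate one SHO period with each method and compare the
# relative energy error |E_N - E_0| / E_0 as the step size h shrinks.
import numpy as np

def energy_error(J_func, h_val, t_end=2*np.pi):
    N = int(round(t_end / h_val))
    # evaluate the symbolic update matrix at a numerical step size
    J = np.array(J_func(h_val).tolist(), dtype=float)
    xv = np.array([1.0, 0.0])
    E0 = xv @ xv
    for _ in range(N):
        xv = J @ xv
    return abs(xv @ xv - E0) / E0

for h_val in [0.1, 0.05, 0.025]:
    print(h_val, energy_error(J_euler, h_val), energy_error(J_leapfrog, h_val))

Halving \(h\) should roughly halve the Euler error (first order) but cut the leapfrog error by about a factor of four (second order).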