$\newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\sint}{\text{s}\kern-5pt\int} \newcommand{\powerset}{\mathcal{P}} \newcommand{\RR}{\mathbb{R}} \newcommand{\NN}{\mathbb{N}} \newcommand{\QQ}{\mathbb{Q}} \newcommand{\ZZ}{\mathbb{Z}} \newcommand{\CC}{\mathbb{C}} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \renewcommand{\vec}[1]{\mathbf{#1}}$

Experiment 001 - Time-Stepping
Authors: Sage Shaw
Mon Aug 13 10:03:26 2018


This experiment will test forward and backward Euler time-stepping. We will verify the order of the error.

  1. Conclusions
  2. ODEs
    1. Hypothesis & Method
    2. Results
  3. System of ODEs
    1. Hypothesis & Method
    2. Results
  4. PDEs
    1. Hypothesis & Method
    2. Results
    3. Correction
  5. References

Conclusions

ODEs

Here we explore forward and backward Euler time-stepping for ODEs of the form $y^\prime = \lambda y$ for several values of $\lambda$. The experiment can be seen in detail in the Jupyter Notebook. The notebook also requires heat_equation.py.

Hypothesis & Method

We will test the forward and backward Euler time-stepping methods for several values of $\lambda$. We expect that for a given target time, as we double the number of time steps we will halve the error at the target time. This would show that the error is $\mathcal{O}(\Delta t)$.
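
One way to see why: for forward Euler applied to $y^\prime = \lambda y$, the solution after $n = t/\Delta t$ steps has the closed form $y_n = y_0(1+\lambda\Delta t)^n$, and expanding the logarithm gives $$ y_0(1+\lambda\Delta t)^{t/\Delta t} = y_0 e^{\frac{t}{\Delta t}\ln(1+\lambda\Delta t)} = y_0 e^{\lambda t}\left( 1 - \tfrac{\lambda^2 t}{2}\Delta t + \mathcal{O}(\Delta t^2) \right), $$ so at a fixed time $t$ the error is approximately $\tfrac{\lambda^2 t}{2}\left|y_0 e^{\lambda t}\right|\Delta t$, which halves when $\Delta t$ is halved. An analogous expansion holds for backward Euler.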

We will use a very simple ODE to test this: $y^\prime = \lambda y$. Using the code below we test several target times (target_t) and several values of $\lambda$ (lam).

import numpy as np

target_t = 5
time_steps = [2**i for i in range(1, 20)]
lam = -1
y_0 = 1

# Exact solution of y' = lam*y with y(0) = y_0.
def exact(y_0, t, lam):
    return y_0 * np.exp(lam*t)

# Forward Euler after n steps of size delta_t (closed form).
def forward(y_0, delta_t, lam, n):
    return y_0 * (1 + lam*delta_t)**n

# Backward Euler after n steps of size delta_t (closed form).
def backward(y_0, delta_t, lam, n):
    return y_0 * (1 - lam*delta_t)**-n

fw_errors = []
bk_errors = []
for steps in time_steps:
    delta_t = target_t/steps
    fw = forward(y_0, delta_t, lam, steps)
    back = backward(y_0, delta_t, lam, steps)
    true_value = exact(y_0, target_t, lam)
    fw_errors.append(np.abs(fw - true_value))
    bk_errors.append(np.abs(back - true_value))

print('fw_err\t\tfw_ratio\tback_err\tback_ratio')
print('%f\tN/A \t\t%f\tN/A  ' % (fw_errors[0], bk_errors[0]))
for fw, fw_prev, bk, bk_prev in zip(fw_errors[1:], fw_errors[:-1],
        bk_errors[1:], bk_errors[:-1]):
    print('%f\t%f\t%f\t%f' % (fw, fw/fw_prev, bk, bk/bk_prev))

Results

For $\lambda = -1$, a sample of the output errors at time $t=5$ can be seen in the table below. As expected, as $\Delta t$ is halved, so is the error. This confirms that the error is indeed $\mathcal{O}(\Delta t)$.

$\Delta t$   fw_err       fw_ratio     bk_err       bk_ratio
2.50000000   2.24326205   N/A          0.07489471   N/A
1.25000000   0.00283170   0.00126231   0.03228050   0.43101171
0.62500000   0.00634688   2.24137008   0.01382915   0.42840588
0.31250000   0.00424701   0.66914871   0.00615675   0.44520049
0.15625000   0.00238442   0.56143559   0.00286366   0.46512633
0.07812500   0.00125505   0.52635250   0.00137496   0.48014022
0.03906250   0.00064288   0.51223998   0.00067287   0.48937381
0.01953125   0.00032524   0.50590302   0.00033273   0.49449984
0.00976562   0.00016356   0.50289927   0.00016544   0.49720141
0.00488281   0.00008202   0.50143681   0.00008248   0.49858835
0.00244141   0.00004107   0.50071523   0.00004118   0.49929106
0.00122070   0.00002055   0.50035683   0.00002058   0.49964475
0.00061035   0.00001028   0.50017822   0.00001028   0.49982218
0.00030518   0.00000514   0.50008906   0.00000514   0.49991104
0.00015259   0.00000257   0.50004452   0.00000257   0.49995551
0.00007629   0.00000129   0.50002226   0.00000129   0.49997775
0.00003815   0.00000064   0.50001113   0.00000064   0.49998887
0.00001907   0.00000032   0.50000556   0.00000032   0.49999444
0.00000954   0.00000016   0.50000278   0.00000016   0.49999722

System of ODEs

Hypothesis & Method

We will test Forward and Backward Euler on the initial value system of equations given on pg. 40 of Lambert[1]. $$ \begin{align} u^\prime &= \frac{1}{3}v & u(0) &= \frac{1}{2}\\ v^\prime &= \frac{v(v-1)}{3u} & v(0) &= -3 \end{align} $$ This has a known solution of $$ \begin{align} u(t) &= \frac{1}{8}\left( 1 + 3e^{-\frac{8}{3}t} \right) \\ v(t) &= -3e^{-\frac{8}{3}t}. \end{align} $$ As in the single ODE case above, we expect that for a given target time, as we double the number of time steps we will halve the error at the target time. This would show that the error is $\mathcal{O}(\Delta t)$.
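
As a quick check that this is indeed a solution, note that $$ u^\prime(t) = \tfrac{1}{8}\cdot 3\cdot\left(-\tfrac{8}{3}\right)e^{-\frac{8}{3}t} = -e^{-\frac{8}{3}t} = \tfrac{1}{3}v(t) \qquad \text{and} \qquad \frac{v(v-1)}{3u} = \frac{-3e^{-\frac{8}{3}t}\left(-3e^{-\frac{8}{3}t}-1\right)}{\tfrac{3}{8}\left(1+3e^{-\frac{8}{3}t}\right)} = 8e^{-\frac{8}{3}t} = v^\prime(t), $$ and both initial conditions are satisfied.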

Implementation of forward Euler is straightforward, but since the formulation of $v^\prime$ is non-linear, backward Euler is not so obvious. Given the formulation of backward Euler $$ \begin{align} u_{n+1} &= u_n + \Delta t \tfrac{1}{3}v_{n+1} \\ v_{n+1} & = v_n + \Delta t \frac{v_{n+1}(v_{n+1}-1)}{3u_{n+1}} \end{align} $$ we seek to express $u_{n+1}$ and $v_{n+1}$ in terms of $u_n$ and $v_n$. In general this may be difficult, but in this case it is just a matter of algebra. $$ \begin{align} v_{n+1} &= v_n + \Delta t \frac{v_{n+1}(v_{n+1}-1)}{3u_{n+1}} \\ &= v_n + \tfrac{\Delta t}{3} \frac{v_{n+1}(v_{n+1}-1)}{u_n + \tfrac{\Delta t}{3}v_{n+1}} \\ v_{n+1}(u_n + \tfrac{\Delta t}{3}v_{n+1}) &= v_n(u_n + \tfrac{\Delta t}{3}v_{n+1}) + \tfrac{\Delta t}{3} v_{n+1}(v_{n+1}-1) \\ v_{n+1}u_n + \tfrac{\Delta t}{3}v_{n+1}^2 &= v_nu_n + \tfrac{\Delta t}{3}v_nv_{n+1} + \tfrac{\Delta t}{3} v_{n+1}^2 - \tfrac{\Delta t}{3}v_{n+1} \\ v_{n+1}u_n &= v_nu_n + \tfrac{\Delta t}{3}v_nv_{n+1} - \tfrac{\Delta t}{3}v_{n+1} \\ v_{n+1}(u_n - \tfrac{\Delta t}{3}v_n + \tfrac{\Delta t}{3}) &= v_nu_n \\ v_{n+1} &= \frac{v_nu_n}{u_n - \tfrac{\Delta t}{3}v_n + \tfrac{\Delta t}{3}} \end{align} $$ Substituting back into our formulation for $u_{n+1}$ we finally arrive at our recurrence relation $$ \begin{align} u_{n+1} &= u_n + \tfrac{\Delta t}{3}\frac{v_nu_n}{u_n - \tfrac{\Delta t}{3}v_n + \tfrac{\Delta t}{3}} \\ v_{n+1} &= \frac{v_nu_n}{u_n - \tfrac{\Delta t}{3}v_n + \tfrac{\Delta t}{3}}. \end{align} $$

The following modifications to the code in the previous section implement both forward and backward Euler for this problem.

# Right-hand side of the system z' = rhs(z), where z = (u, v).
def rhs(z):
    u, v = z[0], z[1]
    return np.array((1/3 * v, v*(v-1)/(3*u)))

z0 = np.array([.5, -3])

# Forward Euler applied to the full system.
def forward(delta_t, n):
    z = z0.copy()
    for i in range(n):
        z = z + delta_t * rhs(z)
    return z

# Backward Euler using the recurrence derived above.
def backward(delta_t, n):
    A = delta_t/3
    u, v = z0[0], z0[1]
    for i in range(n):
        v = v*u/(u - A*v + A)   # v_{n+1} in terms of u_n, v_n
        u += A*v                # u_{n+1} = u_n + (delta_t/3) * v_{n+1}
    return np.array([u, v])
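
For completeness, here is a sketch of how the errors at a target time might be computed against the known solution, reusing the forward and backward functions above. The helper exact, the target time, and the use of the max-norm over the two components are illustrative choices, not taken from the original run.

# Exact solution of the system, used to measure the error at target_t.
def exact(t):
    return np.array([1/8 * (1 + 3*np.exp(-8/3*t)), -3*np.exp(-8/3*t)])

target_t = 1   # illustrative target time
fw_errors, bk_errors = [], []
for steps in [2**i for i in range(1, 20)]:
    delta_t = target_t/steps
    true_value = exact(target_t)
    fw_errors.append(np.max(np.abs(forward(delta_t, steps) - true_value)))
    bk_errors.append(np.max(np.abs(backward(delta_t, steps) - true_value)))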

Results

The following table shows the errors and the ratios of the errors between steps for both forward and backward Euler. As expected, the errors are cut in half as we halve $\Delta t$, confirming our hypothesis.

$\Delta t$   fw_err       fw_ratio     bk_err       bk_ratio
0.25000000   0.46101812   N/A          0.29145927   N/A
0.12500000   0.19974125   0.43326117   0.15966025   0.54779610
0.06250000   0.09381172   0.46966624   0.08392814   0.52566712
0.03125000   0.04554590   0.48550334   0.04308341   0.51333685
0.01562500   0.02244991   0.49290732   0.02183481   0.50680318
0.00781250   0.01114618   0.49649117   0.01099244   0.50343648
0.00390625   0.00555364   0.49825479   0.00551520   0.50172712
0.00195312   0.00277199   0.49912968   0.00276238   0.50086580
0.00097656   0.00138479   0.49956541   0.00138239   0.50043346
0.00048828   0.00069209   0.49978284   0.00069149   0.50021687
0.00024414   0.00034597   0.49989146   0.00034582   0.50010847
0.00012207   0.00017297   0.49994574   0.00017293   0.50005424
0.00006104   0.00008648   0.49997287   0.00008647   0.50002713
0.00003052   0.00004324   0.49998644   0.00004324   0.50001356
0.00001526   0.00002162   0.49999322   0.00002162   0.50000679
0.00000763   0.00001081   0.49999661   0.00001081   0.50000334
0.00000381   0.00000540   0.49999831   0.00000540   0.50000189
0.00000191   0.00000270   0.49999915   0.00000270   0.50000005
0.00000095   0.00000135   0.49999959   0.00000135   0.50000363

PDEs

We will use the one-dimensional heat equation $u_t = u_{xx}$ with Dirichlet boundary conditions and an initial temperature distribution of $f(x) = x^2(1-x)$. The analytic solution is depicted in the animation below.

Hypothesis & Method

We first discretize in space letting the step size be $h = \frac{1}{n}$. This gives us the approximation $\vec{u}_t = D \vec{u}$ where $$ D = \frac{1}{h^2}\begin{bmatrix} -2 & 1 & & & \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \\ & & & 1 & -2 \end{bmatrix} $$ and the eigenvalues of $D$ are given by $\lambda_k = -\frac{4}{h^2}\sin^2\left( \frac{\pi}{n+1} \frac{k}{2} \right)$. In order for forward Euler to be stable, we need $\left| 1 + \lambda_k \Delta t \right| < 1$ for $k=1, 2, ..., n$. Rearranging, that is $$ \left| 1 - \Delta t \frac{4}{h^2}\sin^2 \left( \frac{\pi}{n+1} \frac{k}{2}\right) \right| < 1. $$ If we are to vary $\Delta t$ as we did previously for ODEs, we must ensure that this criterion is met. If we choose $h \propto \sqrt{\Delta t}$ then, for an appropriate proportionality constant, the criterion holds for any $\Delta t$.
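
As a quick numerical sanity check (a sketch with an illustrative grid size, not part of the experiment code), we can build the dense second-difference matrix for $n$ interior points with $h = \frac{1}{n+1}$, compute its eigenvalues, and confirm that $\Delta t = h^2/2$ keeps $\left| 1 + \lambda_k \Delta t \right| \le 1$:

import numpy as np

n = 50                                   # illustrative number of interior grid points
h = 1/(n + 1)
D = (np.diag([-2.0]*n) + np.diag([1.0]*(n-1), 1) + np.diag([1.0]*(n-1), -1)) / h**2
eigs = np.linalg.eigvalsh(D)             # real eigenvalues of the symmetric matrix
delta_t = h**2 / 2
print(np.max(np.abs(1 + delta_t*eigs)))  # stays below 1, so forward Euler is stable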

For the spatial discretization we expect the error to be $\mathcal{O}(h^2)$. For a single timestep we expect the error to be $\mathcal{O}(\Delta t^2 h^2)$. Thus for a given time we expect the error to be $\mathcal{O}(\Delta t h^2)$. If we choose $h^2 \propto \Delta t$ then we will expect the error to be $\mathcal{O}(\Delta t^2)$.

The following code tests this hypothesis.

# These imports are used below; heat_equation may already provide some of them.
import numpy as np
import numpy.linalg as la
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from heat_equation import *

c = 1

'''
# Alternative test problem (commented out):
def f(x):
    return np.sin(x*2*np.pi)
f = np.vectorize(f)
def u(x,t):
    return np.sin(x*2*np.pi)*np.exp(-(2*np.pi)**2 * t)
'''

# Active test problem: initial condition with a known closed-form solution.
def f(x):
    return np.sin(np.pi*x)
def u(x,t):
    return np.sin(np.pi*x) * np.exp(-np.pi**2 * t)

'''
# Initial condition from the introduction, with the solution computed by heat_equation.py:
def f(x):
    return x**2 * (1-x)
f = np.vectorize(f)
u = dirichlet_solution(c,f)
#u = dirichlet_solution(c,f, n=40)
'''

# Forward Euler: u_{k+1} = u_k + delta_t * D u_k
def forward(u0, D, delta_t, num_steps):
    u = u0
    for i in range(num_steps):
        u = u + delta_t * D@u
    return u

# Backward Euler: solve (I - delta_t*D) u_{k+1} = u_k at each step
def backward(u0, D, delta_t, num_steps):
    u = u0
    L = sp.eye(D.shape[0]) - delta_t*D
    for i in range(num_steps):
        u = spla.spsolve(L, u)
    return u

def bdf2(u0, D, delta_t, num_steps):
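    # BDF2 applied to u' = D u: (3/2) u_{n+1} - 2 u_n + (1/2) u_{n-1} = delta_t * D u_{n+1},
    # i.e. solve (3/2 I - delta_t*D) u_{n+1} = 2 u_n - (1/2) u_{n-1} at each step,
    # bootstrapped with a single step of backward Euler.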
    u1 = u0
    L = sp.eye(D.shape[0]) - delta_t*D
    # one step of backward Euler
    u2 = spla.spsolve(L, u1)
    # continue with BDF2
    L = 3/2*sp.eye(D.shape[0]) - delta_t*D
    for i in range(num_steps-1):
        u_3 = spla.spsolve(L, 2*u2 - .5*u1)
        u1, u2 = u2, u_3
    return u2

target_t = .1
ns = [10*2**i + 1 for i in range(0, 6)]

fw_errors = []
bk_errors = []

for n in ns:
    h = 1/(n-1)
    #delta_t = h**2 *.5
    delta_t = h
    num_steps = int(np.round(target_t/delta_t))
    print('Processing num_steps=%d \t n=%d' % (num_steps, n) )
    actual_time = num_steps*delta_t
    print('Calculated time: %g' % actual_time)
    D = sp.diags([1] + [-2]*(n-2) + [1]) + sp.diags([0] + [1]*(n-2), 1) + sp.diags([1]*(n-2) + [0], -1)
    D = (h**-2 * D).tolil()  # lil format supports the element assignments below
    D[0,0] = 1
    D[n-1,n-1] = 1
    D = D.tocsr()
    xs = np.linspace(0,1, n)
    u0 = f(xs)
    true_value = u(xs, target_t)
    fw = forward(u0, D, delta_t, num_steps)
    #back = backward(u0, D, delta_t, num_steps)
    back = bdf2(u0, D, delta_t, num_steps)
    fw_errors.append(la.norm(fw - true_value))
    bk_errors.append(la.norm(back - true_value))
    
print('fw_err       fw_ratio       back_err       back_ratio')
print('%10.5g        N/A       %10.5g       N/A  ' % (fw_errors[0], bk_errors[0]) )
for fw, fw_prev, bk, bk_prev in zip(fw_errors[1:], fw_errors[:-1], bk_errors[1:], bk_errors[:-1]):
    print('%10.5g    %10.5g    %10.5g    %10.5g' % (fw, fw/fw_prev, bk, bk/bk_prev) )

Results

For a target time of $t=0.1$ with $h \approx \sqrt{\Delta t}$ the errors are given in the table below. For the first two rows, the spatial discretization is $n=2$ so only the two end points are considered. The solution at the end points is $u(0,t) = u(1,t) = 0$ so the solution is exact. As expected, forward Euler is stable in this situation.

$\Delta t$      fw_err          fw_ratio        bk_err          bk_ratio
0.025           6.1171839e-18   N/A             6.1171839e-18   N/A
0.0125          6.1171839e-18   1               6.1171839e-18   1
0.00625         0.029786062     4.8692441e+15   0.024693141     4.0366843e+15
0.003125        0.027337333     0.91778944      0.024504521     0.99236146
0.0015625       0.022910183     0.83805478      0.021194308     0.86491419
0.00078125      0.019786197     0.86364202      0.018810993     0.88754928
0.000390625     0.016585217     0.83822156      0.016014668     0.85134622
0.0001953125    0.014201978     0.85630345      0.013873999     0.86633076
9.765625e-05    0.0118248       0.83261639      0.011630396     0.83828723
4.8828125e-05   0.0099909454    0.84491455      0.009876969     0.84923753
2.4414063e-05   0.0083896657    0.83972691      0.0083222698    0.84259349
1.2207031e-05   0.0070808438    0.84399594      0.0070411137    0.84605689
6.1035156e-06   0.0059413669    0.83907611      0.0059177772    0.8404604

For a target time of $t=0.01$ with $h \approx \Delta t$ the errors are given below. Forward Euler does not remain stable as $\Delta t$ decreases.

$\Delta t$   fw_err              fw_ratio              bk_err       bk_ratio
0.00250000   0.01186183          N/A                   0.01011415   N/A
0.00125000   0.00879856          0.74175360            0.00739092   0.73075027
0.00062500   0.00645683          0.73385113            0.00549377   0.74331290
0.00031250   0.00463589          0.71798186            0.00397957   0.72437840
0.00015625   4414901.91684370    952331526.89668298    0.00284527   0.71496929
0.00007813   inf                 inf                   0.00202242   0.71080311
0.00003906   inf                 inf                   0.00143368   0.70889251
0.00001953   nan                 nan                   0.00101502   0.70798367

For a target time of $t=0.1$ with $h \approx \Delta t$ the errors are given below.

$\Delta t$   fw_err               fw_ratio               bk_err       bk_ratio
0.02500000   0.03853758           N/A                    0.01801736   N/A
0.01250000   0.59760437           15.50705359            0.01555952   0.86358442
0.00625000   67521636.80649941    112987187.79299849     0.01187460   0.76317258
0.00312500   inf                  inf                    0.00867667   0.73069146
0.00156250   inf                  inf                    0.00622900   0.71790207
0.00078125   inf                  inf                    0.00443671   0.71226805
0.00039063   nan                  nan                    0.00314843   0.70962985
0.00019531   nan                  nan                    0.00223020   0.70835412

The order of convergence seems to be roughly $\mathcal{O}(\sqrt{\Delta t})$. That is, when $\Delta t$ is quartered, the error is halved.
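
This matches the backward Euler ratios of roughly $0.71$ in the two tables above: if the error behaved like $C\sqrt{\Delta t}$, then halving $\Delta t$ would multiply the error by $1/\sqrt{2} \approx 0.71$, and quartering $\Delta t$ would halve it.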

Correction

The issue was in the calculation of the errors:

    fw_errors.append(la.norm(fw - true_value))
    bk_errors.append(la.norm(back - true_value))

Testing showed that both the relative $L_2$ norm and the $L_\infty$ norm gave correct results. In theory the choice of norm should not affect the order of convergence; however, between successive refinements the dimension of the vector doubles, so the unscaled $L_2$ norm is inflated by the growing number of entries. The relative error compensates for this and is thus unaffected. The $L_\infty$ norm is also unaffected, since the maximum value does not depend on the number of entries in the vector.
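
A tiny illustration of the effect (with illustrative values): for an error vector whose $n$ entries all equal $\varepsilon$, the unscaled $L_2$ norm is $\sqrt{n}\,\varepsilon$ and grows as the grid is refined, while the relative $L_2$ norm and the $L_\infty$ norm do not.

import numpy as np

eps = 1e-3
for n in [100, 200, 400]:
    e = np.full(n, eps)                                 # "error vector" with identical entries
    unscaled = np.linalg.norm(e)                        # grows like sqrt(n)
    relative = unscaled / np.linalg.norm(np.ones(n))    # stays at eps
    l_inf = np.max(np.abs(e))                           # stays at eps
    print(n, unscaled, relative, l_inf)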

The corrected lines read

    # relative L_2 norm
    fw_errors.append(la.norm(fw - true_value)/la.norm(true_value))
    bk_errors.append(la.norm(back - true_value)/la.norm(true_value))

or

    # L_\infty norm
    fw_errors.append(np.max(np.abs(fw - true_value)))
    bk_errors.append(np.max(np.abs(back - true_value)))

The results are now as expected. For the relative $L_2$ norm we have the following errors.

$\Delta t$      fw_err          fw_ratio       bk_err          bk_ratio
0.1             1.9513734       N/A            1.9811852       N/A
0.05            1.9580811       1.0034374      1.9729271       0.99583172
0.025           0.09898413      0.0505516      0.29391721      0.1489752
0.0125          0.15497225      1.5656272      0.25167043      0.85626299
0.00625         0.021353803     0.13779115     0.079099332     0.31429728
0.003125        0.017661767     0.82710168     0.04709118      0.59534233
0.0015625       0.0051387323    0.29095233     0.020161315     0.42813357
0.00078125      0.0029217562    0.56857529     0.010479506     0.51978287
0.000390625     0.0012724167    0.43549724     0.005065084     0.48333231
0.0001953125    0.00072702771   0.57137547     0.0026262749    0.5185057
9.765625e-05    0.00031734098   0.4364909      0.0012678278    0.48274754
4.8828125e-05   0.00016311025   0.51399049     0.00063854489   0.50365271
2.4414063e-05   7.9287598e-05   0.48609821     0.00031705439   0.49652639
1.2207031e-05   4.0765382e-05   0.51414576     0.00015966073   0.50357522
6.1035156e-06   1.9818923e-05   0.48617041     7.9269691e-05   0.49648833

For the $L_\infty$ norm we have the following errors.

$\Delta t$      fw_err          fw_ratio        bk_err          bk_ratio
0.1             8.9067602e-17   N/A             9.042832e-17    N/A
0.05            8.9373763e-17   1.0034374       9.005139e-17    0.99583172
0.025           0.036892161     4.1278514e+14   0.10954525      1.2164748e+15
0.0125          0.057759371     1.5656272       0.093799541     0.85626299
0.00625         0.0079587299    0.13779115      0.029480941     0.31429728
0.003125        0.0062604997    0.78662045      0.016692233     0.5662042
0.0015625       0.0019152458    0.30592539      0.0075142801    0.45016627
0.00078125      0.0010778774    0.56278801      0.0038660388    0.51449223
0.000390625     0.00047423969   0.43997555      0.0018877965    0.48830253
0.0001953125    0.00027096893   0.57137547      0.00097883326   0.5185057
9.765625e-05    0.00011827547   0.4364909       0.00047252935   0.48274754
4.8828125e-05   6.0755434e-05   0.51367738      0.00023784571   0.5033459
2.4414063e-05   2.9551109e-05   0.48639451      0.00011816866   0.49682904
1.2207031e-05   1.5193578e-05   0.51414576      5.9506807e-05   0.50357522
6.1035156e-06   7.3866678e-06   0.48617041      2.9544435e-05   0.49648833

References

  1. J. D. Lambert. Numerical Methods for Ordinary Differential Systems: The Initial Value Problem. John Wiley & Sons, Inc., New York, NY, USA, 1991. ISBN 0-471-92990-5.