This experiment tests forward and backward Euler time-stepping and verifies the order of the error.
We explore forward and backward Euler time-stepping for ODEs of the form $y^\prime = \lambda y$ for several values of $\lambda$, then for a small nonlinear system, and finally for the one-dimensional heat equation. The experiment can be seen in detail in the accompanying Jupyter Notebook, which also requires heat_equation.py.
We test both methods for several values of $\lambda$. We expect that for a fixed target time, doubling the number of time steps halves the error at that time, which would show that the error is $\mathcal{O}(\Delta t)$.
We use the simple test ODE $y^\prime = \lambda y$, whose exact solution is $y(t) = y_0 e^{\lambda t}$. The code below runs the test for a chosen target time `target_t` and value of `lam`.
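For this linear ODE both schemes can be unrolled into closed form, which is what the `forward` and `backward` functions below use: $$ \begin{align} y_{n+1} &= y_n + \lambda\Delta t\, y_n = (1+\lambda\Delta t)\,y_n \quad\Rightarrow\quad y_n = y_0(1+\lambda\Delta t)^n \quad\text{(forward)}, \\ y_{n+1} &= y_n + \lambda\Delta t\, y_{n+1} \quad\Rightarrow\quad y_{n+1} = \frac{y_n}{1-\lambda\Delta t} \quad\Rightarrow\quad y_n = y_0(1-\lambda\Delta t)^{-n} \quad\text{(backward)}. \end{align} $$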
import numpy as np

target_t = 5
time_steps = [2**i for i in range(1, 20)]
lam = -1
y_0 = 1

def foo(y_0, t, lam):
    # exact solution y(t) = y_0 * exp(lam*t)
    return y_0 * np.exp(lam*t)

def forward(y_0, delta_t, lam, n):
    # forward Euler in closed form: y_n = y_0 * (1 + lam*delta_t)**n
    return y_0 * (1+lam*delta_t)**n

def backward(y_0, delta_t, lam, n):
    # backward Euler in closed form: y_n = y_0 * (1 - lam*delta_t)**(-n)
    return y_0 * (1-lam*delta_t)**-n

fw_errors = []
bk_errors = []
steps = [target_t/t for t in time_steps]   # the corresponding step sizes delta_t
for step in time_steps:
    delta_t = target_t/step
    fw = forward(y_0, delta_t, lam, step)
    back = backward(y_0, delta_t, lam, step)
    true_value = foo(y_0, target_t, lam)
    fw_errors.append(np.abs(fw - true_value))
    bk_errors.append(np.abs(back - true_value))

print('fw_err\t\tfw_ratio\tback_err\tback_ratio')
print('%f\tN/A \t\t%f\tN/A ' % (fw_errors[0], bk_errors[0]))
for fw, fw_prev, bk, bk_prev in zip(fw_errors[1:], fw_errors[:-1],
                                    bk_errors[1:], bk_errors[:-1]):
    print('%f\t%f\t%f\t%f' % (fw, fw/fw_prev, bk, bk/bk_prev))
For $\lambda = -1$, a sample output of the errors at time $t=5$ is shown in the table below. The first few rows lie outside the asymptotic regime (for $\Delta t = 2.5$ we have $|1+\lambda\Delta t| = 1.5 > 1$, so forward Euler is not even stable there), but once $\Delta t$ is small the error is halved each time $\Delta t$ is halved, for both methods. This confirms that the error is indeed $\mathcal{O}(\Delta t)$.
$\Delta t$ | fw_err | fw_ratio | bk_err | bk_ratio |
---|---|---|---|---|
2.50000000 | 2.24326205 | N/A | 0.07489471 | N/A |
1.25000000 | 0.00283170 | 0.00126231 | 0.03228050 | 0.43101171 |
0.62500000 | 0.00634688 | 2.24137008 | 0.01382915 | 0.42840588 |
0.31250000 | 0.00424701 | 0.66914871 | 0.00615675 | 0.44520049 |
0.15625000 | 0.00238442 | 0.56143559 | 0.00286366 | 0.46512633 |
0.07812500 | 0.00125505 | 0.52635250 | 0.00137496 | 0.48014022 |
0.03906250 | 0.00064288 | 0.51223998 | 0.00067287 | 0.48937381 |
0.01953125 | 0.00032524 | 0.50590302 | 0.00033273 | 0.49449984 |
0.00976562 | 0.00016356 | 0.50289927 | 0.00016544 | 0.49720141 |
0.00488281 | 0.00008202 | 0.50143681 | 0.00008248 | 0.49858835 |
0.00244141 | 0.00004107 | 0.50071523 | 0.00004118 | 0.49929106 |
0.00122070 | 0.00002055 | 0.50035683 | 0.00002058 | 0.49964475 |
0.00061035 | 0.00001028 | 0.50017822 | 0.00001028 | 0.49982218 |
0.00030518 | 0.00000514 | 0.50008906 | 0.00000514 | 0.49991104 |
0.00015259 | 0.00000257 | 0.50004452 | 0.00000257 | 0.49995551 |
0.00007629 | 0.00000129 | 0.50002226 | 0.00000129 | 0.49997775 |
0.00003815 | 0.00000064 | 0.50001113 | 0.00000064 | 0.49998887 |
0.00001907 | 0.00000032 | 0.50000556 | 0.00000032 | 0.49999444 |
0.00000954 | 0.00000016 | 0.50000278 | 0.00000016 | 0.49999722 |
We will test forward and backward Euler on the initial-value system of equations given on p. 40 of Lambert [1]. $$ \begin{align} u^\prime &= \frac{1}{3}v & u(0) &= \frac{1}{2}\\ v^\prime &= \frac{v(v-1)}{3u} & v(0) &= -3 \end{align} $$ This has the known solution $$ \begin{align} u(t) &= \frac{1}{8}\left( 1 + 3e^{-\frac{8}{3}t} \right) \\ v(t) &= -3e^{-\frac{8}{3}t}. \end{align} $$ As in the single-ODE case above, we expect that for a given target time, doubling the number of time steps halves the error at the target time, which would show that the error is $\mathcal{O}(\Delta t)$.
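As a quick check (simple differentiation, not part of Lambert's statement), the stated pair does satisfy the system: $$ \begin{align} u^\prime(t) &= \tfrac{1}{8}\cdot 3\cdot\left(-\tfrac{8}{3}\right)e^{-\frac{8}{3}t} = -e^{-\frac{8}{3}t} = \tfrac{1}{3}v(t), \\ \frac{v(v-1)}{3u} &= \frac{-3e^{-\frac{8}{3}t}\left(-3e^{-\frac{8}{3}t}-1\right)}{\tfrac{3}{8}\left(1+3e^{-\frac{8}{3}t}\right)} = \frac{8\,e^{-\frac{8}{3}t}\left(3e^{-\frac{8}{3}t}+1\right)}{1+3e^{-\frac{8}{3}t}} = 8e^{-\frac{8}{3}t} = v^\prime(t), \end{align} $$ with $u(0) = \tfrac{1}{2}$ and $v(0) = -3$ as required.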
Implementation of forward Euler is straightforward, but since the formulation of $v^\prime$ is non-linear, backward Euler is not so obvious. Given the formulation of backward Euler $$ \begin{align} u_{n+1} &= u_n + \Delta t \tfrac{1}{3}v_{n+1} \\ v_{n+1} & = v_n + \Delta t \frac{v_{n+1}(v_{n+1}-1)}{3u_{n+1}} \end{align} $$ we seek to express $u_{n+1}$ and $v_{n+1}$ in terms of $u_n$ and $v_n$. In general this may be difficult, but in this case it is just a matter of algebra. $$ \begin{align} v_{n+1} &= v_n + \Delta t \frac{v_{n+1}(v_{n+1}-1)}{3u_{n+1}} \\ &= v_n + \tfrac{\Delta t}{3} \frac{v_{n+1}(v_{n+1}-1)}{u_n + \tfrac{\Delta t}{3}v_{n+1}} \\ v_{n+1}(u_n + \tfrac{\Delta t}{3}v_{n+1}) &= v_n(u_n + \tfrac{\Delta t}{3}v_{n+1}) + \tfrac{\Delta t}{3} v_{n+1}(v_{n+1}-1) \\ v_{n+1}u_n + \tfrac{\Delta t}{3}v_{n+1}^2 &= v_nu_n + \tfrac{\Delta t}{3}v_nv_{n+1} + \tfrac{\Delta t}{3} v_{n+1}^2 - \tfrac{\Delta t}{3}v_{n+1} \\ v_{n+1}u_n &= v_nu_n + \tfrac{\Delta t}{3}v_nv_{n+1} - \tfrac{\Delta t}{3}v_{n+1} \\ v_{n+1}(u_n - \tfrac{\Delta t}{3}v_n + \tfrac{\Delta t}{3}) &= v_nu_n \\ v_{n+1} &= \frac{v_nu_n}{u_n - \tfrac{\Delta t}{3}v_n + \tfrac{\Delta t}{3}} \end{align} $$ Substituting back into our formulation for $u_{n+1}$ we finally arrive at our recurrence relation $$ \begin{align} u_{n+1} &= u_n + \tfrac{\Delta t}{3}\frac{v_nu_n}{u_n - \tfrac{\Delta t}{3}v_n + \tfrac{\Delta t}{3}} \\ v_{n+1} &= \frac{v_nu_n}{u_n - \tfrac{\Delta t}{3}v_n + \tfrac{\Delta t}{3}}. \end{align} $$
The following modifications to the code in the previous section implement both forward and backward Euler for this problem.
def foo(z):
    # right-hand side of the system, with z = (u, v)
    u, v = z[0], z[1]
    return np.array((1/3 * v, v*(v-1)/(3*u)))

z0 = np.array([.5, -3])

def forward(delta_t, n):
    z = z0.copy()
    for i in range(n):
        z += delta_t * foo(z)
    return z

def backward(delta_t, n):
    # uses the algebraically solved recurrence derived above
    A = delta_t/3
    u, v = z0[0], z0[1]
    for i in range(n):
        v = v*u/(u - A*v + A)
        u += A*v
    return np.array([u, v])
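The error table below reuses the driver loop from the scalar test. A minimal sketch of that adaptation is shown here; the target time and the use of the Euclidean norm of the $(u, v)$ error vector are assumptions for illustration, not taken from the notebook.

# Sketch of the adapted driver loop (target_t and the norm choice are assumptions).
def exact(t):
    # closed-form solution quoted above (Lambert, p. 40)
    e = np.exp(-8/3 * t)
    return np.array([1/8 * (1 + 3*e), -3*e])

target_t = 0.5                     # assumed target time
time_steps = [2**i for i in range(1, 20)]
fw_errors, bk_errors = [], []
for steps in time_steps:
    delta_t = target_t / steps
    true_value = exact(target_t)
    fw_errors.append(np.linalg.norm(forward(delta_t, steps) - true_value))
    bk_errors.append(np.linalg.norm(backward(delta_t, steps) - true_value))
# the error/ratio table can then be printed exactly as in the scalar case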
The following table shows the errors, and the ratios of the errors between successive refinements, for both forward and backward Euler. As expected, the errors are cut roughly in half each time we halve $\Delta t$, confirming our hypothesis.
$\Delta t$ | fw_err | fw_ratio | bk_err | bk_ratio |
---|---|---|---|---|
0.25000000 | 0.46101812 | N/A | 0.29145927 | N/A |
0.12500000 | 0.19974125 | 0.43326117 | 0.15966025 | 0.54779610 |
0.06250000 | 0.09381172 | 0.46966624 | 0.08392814 | 0.52566712 |
0.03125000 | 0.04554590 | 0.48550334 | 0.04308341 | 0.51333685 |
0.01562500 | 0.02244991 | 0.49290732 | 0.02183481 | 0.50680318 |
0.00781250 | 0.01114618 | 0.49649117 | 0.01099244 | 0.50343648 |
0.00390625 | 0.00555364 | 0.49825479 | 0.00551520 | 0.50172712 |
0.00195312 | 0.00277199 | 0.49912968 | 0.00276238 | 0.50086580 |
0.00097656 | 0.00138479 | 0.49956541 | 0.00138239 | 0.50043346 |
0.00048828 | 0.00069209 | 0.49978284 | 0.00069149 | 0.50021687 |
0.00024414 | 0.00034597 | 0.49989146 | 0.00034582 | 0.50010847 |
0.00012207 | 0.00017297 | 0.49994574 | 0.00017293 | 0.50005424 |
0.00006104 | 0.00008648 | 0.49997287 | 0.00008647 | 0.50002713 |
0.00003052 | 0.00004324 | 0.49998644 | 0.00004324 | 0.50001356 |
0.00001526 | 0.00002162 | 0.49999322 | 0.00002162 | 0.50000679 |
0.00000763 | 0.00001081 | 0.49999661 | 0.00001081 | 0.50000334 |
0.00000381 | 0.00000540 | 0.49999831 | 0.00000540 | 0.50000189 |
0.00000191 | 0.00000270 | 0.49999915 | 0.00000270 | 0.50000005 |
0.00000095 | 0.00000135 | 0.49999959 | 0.00000135 | 0.50000363 |
We now turn to the one-dimensional heat equation $u_t = u_{xx}$ with Dirichlet boundary conditions. The code below includes several initial temperature distributions, among them $f(x) = x^2(1-x)$; the uncommented choice is $f(x) = \sin(\pi x)$, whose analytic solution $u(x,t) = \sin(\pi x)\,e^{-\pi^2 t}$ is available in closed form. An animation of the analytic solution is included in the accompanying notebook.
We first discretize in space, letting the step size be $h = \frac{1}{n}$. This gives the semi-discrete approximation $\vec{u}_t = D \vec{u}$ where $$ D = \frac{1}{h^2}\begin{bmatrix} -2 & 1 & & & \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \\ & & & 1 & -2 \end{bmatrix} $$ and the eigenvalues of $D$ are $\lambda_k = -\frac{4}{h^2}\sin^2\left( \frac{\pi}{n+1} \frac{k}{2} \right)$. For forward Euler to be stable we need $\left| 1+ \lambda_k \Delta t \right| < 1$ for $k=1, 2, \ldots, n$. Rearranging, that is $$ \left| 1 - \Delta t \frac{4}{h^2}\sin^2 \left( \frac{\pi}{n+1} \frac{k}{2}\right) \right| < 1. $$ If we successively halve $\Delta t$ as we did for the ODEs above, while also refining the spatial grid, we must ensure this criterion remains satisfied. If we choose $h \propto \sqrt{\Delta t}$, the ratio $\Delta t / h^2$ stays fixed, and for an appropriate proportionality constant the criterion holds for every $\Delta t$.
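As a quick numerical illustration of this criterion (a sketch, not the notebook's code; the constants $h = \sqrt{2\Delta t}$ and $h = \Delta t$ are chosen for illustration), we can evaluate $\max_k |1 + \lambda_k \Delta t|$ under the two grid couplings used below.

import numpy as np

def max_amplification(delta_t, h):
    # largest |1 + lambda_k * delta_t| over the eigenvalues of D
    n = int(round(1/h)) - 1                     # number of interior grid points
    k = np.arange(1, n + 1)
    lam = -4/h**2 * np.sin(np.pi/(n + 1) * k/2)**2
    return np.abs(1 + lam*delta_t).max()

for delta_t in [0.1 / 2**i for i in range(6)]:
    h_sqrt = np.sqrt(2*delta_t)   # h ~ sqrt(delta_t): delta_t/h^2 stays fixed at 1/2
    h_lin = delta_t               # h ~ delta_t: delta_t/h^2 grows like 1/delta_t
    print('%10.4g %10.4g %12.4g' % (delta_t,
                                    max_amplification(delta_t, h_sqrt),
                                    max_amplification(delta_t, h_lin)))

For $h \propto \sqrt{\Delta t}$ the amplification factor stays at or below one, while for $h \propto \Delta t$ it grows without bound as $\Delta t$ shrinks, which is what the instability observed below reflects.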
For the spatial discretization we expect an error of $\mathcal{O}(h^2)$. Each Euler step commits a local error of $\mathcal{O}(\Delta t^2)$, and reaching a fixed target time takes $\mathcal{O}(1/\Delta t)$ steps, so the accumulated temporal error is $\mathcal{O}(\Delta t)$. The total error is therefore $\mathcal{O}(\Delta t + h^2)$; if we choose $h^2 \propto \Delta t$ the two contributions balance and the overall error should be $\mathcal{O}(\Delta t)$.
The following code tests this hypothesis.
from heat_equation import *
# np, la (numpy.linalg), sp (scipy.sparse) and spla (scipy.sparse.linalg) are
# assumed to be provided by the star import from heat_equation.
c = 1

'''
def f(x):
    return np.sin(x*2*np.pi)
f = np.vectorize(f)
def u(x,t):
    return np.sin(x*2*np.pi)*np.exp(-(2*np.pi)**2 * t)
'''

# initial condition and analytic solution used below
def f(x):
    return np.sin(np.pi*x)
def u(x,t):
    return np.sin(np.pi*x) * np.exp(-np.pi**2 * t)

'''
def f(x):
    return x**2 * (1-x)
f = np.vectorize(f)
u = dirichlet_solution(c,f)
#u = dirichlet_solution(c,f, n=40)
'''

# Time-steppers: advance u0 by num_steps steps of size delta_t
def forward(u0, D, delta_t, num_steps):
    u = u0
    for i in range(num_steps):
        u = u + delta_t * D@u
    return u

def backward(u0, D, delta_t, num_steps):
    u = u0
    L = sp.eye(D.shape[0]) - delta_t*D
    for i in range(num_steps):
        u = spla.spsolve(L, u)
    return u

def bdf2(u0, D, delta_t, num_steps):
    u1 = u0
    L = sp.eye(D.shape[0]) - delta_t*D
    # one step of backward Euler to get the second starting value
    u2 = spla.spsolve(L, u1)
    # continue with BDF2: (3/2)u_{n+2} - 2u_{n+1} + (1/2)u_n = delta_t * D u_{n+2}
    L = 3/2*sp.eye(D.shape[0]) - delta_t*D
    for i in range(num_steps-1):
        u_3 = spla.spsolve(L, 2*u2 - .5*u1)
        u1, u2 = u2, u_3
    return u2

target_t = .1
ns = [10*2**i + 1 for i in range(0, 6)]
fw_errors = []
bk_errors = []
for n in ns:
    h = 1/(n-1)
    #delta_t = h**2 *.5   # h proportional to sqrt(delta_t)
    delta_t = h           # h proportional to delta_t
    num_steps = int(np.round(target_t/delta_t))
    print('Processing num_steps=%d \t n=%d' % (num_steps, n))
    actual_time = num_steps*delta_t
    print('Calculated time: %g' % actual_time)
    # second-difference matrix, with identity rows at the two boundary points
    D = (sp.diags([1] + [-2]*(n-2) + [1])
         + sp.diags([0] + [1]*(n-2), 1)
         + sp.diags([1]*(n-2) + [0], -1))
    D = h**-2 * D
    D[0,0] = 1
    D[n-1,n-1] = 1
    xs = np.linspace(0, 1, n)
    u0 = f(xs)
    true_value = u(xs, target_t)
    fw = forward(u0, D, delta_t, num_steps)
    #back = backward(u0, D, delta_t, num_steps)
    back = bdf2(u0, D, delta_t, num_steps)   # the "bk" columns use whichever of these is uncommented
    fw_errors.append(la.norm(fw - true_value))
    bk_errors.append(la.norm(back - true_value))

print('fw_err fw_ratio back_err back_ratio')
print('%10.5g N/A %10.5g N/A ' % (fw_errors[0], bk_errors[0]))
for fw, fw_prev, bk, bk_prev in zip(fw_errors[1:], fw_errors[:-1], bk_errors[1:], bk_errors[:-1]):
    print('%10.5g %10.5g %10.5g %10.5g' % (fw, fw/fw_prev, bk, bk/bk_prev))
For a target time of $t=0.1$ with $h \approx \sqrt{\Delta t}$, the errors are given in the table below. For the first two rows the spatial discretization has $n=2$, so only the two end points are considered; the exact solution there is $u(0,t) = u(1,t) = 0$, so the numerical solution is exact. As expected, since $\Delta t / h^2$ stays bounded, forward Euler remains stable in this situation.
$\Delta t$ | fw_err | fw_ratio | bk_err | bk_ratio |
---|---|---|---|---|
0.025 | 6.1171839e-18 | N/A | 6.1171839e-18 | N/A |
0.0125 | 6.1171839e-18 | 1 | 6.1171839e-18 | 1 |
0.00625 | 0.029786062 | 4.8692441e+15 | 0.024693141 | 4.0366843e+15 |
0.003125 | 0.027337333 | 0.91778944 | 0.024504521 | 0.99236146 |
0.0015625 | 0.022910183 | 0.83805478 | 0.021194308 | 0.86491419 |
0.00078125 | 0.019786197 | 0.86364202 | 0.018810993 | 0.88754928 |
0.000390625 | 0.016585217 | 0.83822156 | 0.016014668 | 0.85134622 |
0.0001953125 | 0.014201978 | 0.85630345 | 0.013873999 | 0.86633076 |
9.765625e-05 | 0.0118248 | 0.83261639 | 0.011630396 | 0.83828723 |
4.8828125e-05 | 0.0099909454 | 0.84491455 | 0.009876969 | 0.84923753 |
2.4414063e-05 | 0.0083896657 | 0.83972691 | 0.0083222698 | 0.84259349 |
1.2207031e-05 | 0.0070808438 | 0.84399594 | 0.0070411137 | 0.84605689 |
6.1035156e-06 | 0.0059413669 | 0.83907611 | 0.0059177772 | 0.8404604 |
For a target time of $t=0.01$ with $h \approx \Delta t$, the errors are given below. Here the ratio $\Delta t / h^2$ grows like $1/\Delta t$, so the stability criterion above is eventually violated and forward Euler does not remain stable as $\Delta t$ decreases.
$\Delta t$ | fw_err | fw_ratio | bk_err | bk_ratio |
---|---|---|---|---|
0.00250000 | 0.01186183 | N/A | 0.01011415 | N/A |
0.00125000 | 0.00879856 | 0.74175360 | 0.00739092 | 0.73075027 |
0.00062500 | 0.00645683 | 0.73385113 | 0.00549377 | 0.74331290 |
0.00031250 | 0.00463589 | 0.71798186 | 0.00397957 | 0.72437840 |
0.00015625 | 4414901.91684370 | 952331526.89668298 | 0.00284527 | 0.71496929 |
0.00007813 | inf | inf | 0.00202242 | 0.71080311 |
0.00003906 | inf | inf | 0.00143368 | 0.70889251 |
0.00001953 | nan | nan | 0.00101502 | 0.70798367 |
For a target time of $t=0.1$ with $h \approx \Delta t$ the errors are given below.
$\Delta t$ | fw_err | fw_ratio | bk_err | bk_ratio |
---|---|---|---|---|
0.02500000 | 0.03853758 | N/A | 0.01801736 | N/A |
0.01250000 | 0.59760437 | 15.50705359 | 0.01555952 | 0.86358442 |
0.00625000 | 67521636.80649941 | 112987187.79299849 | 0.01187460 | 0.76317258 |
0.00312500 | inf | inf | 0.00867667 | 0.73069146 |
0.00156250 | inf | inf | 0.00622900 | 0.71790207 |
0.00078125 | inf | inf | 0.00443671 | 0.71226805 |
0.00039063 | nan | nan | 0.00314843 | 0.70962985 |
0.00019531 | nan | nan | 0.00223020 | 0.70835412 |
The observed order of convergence is only roughly $\mathcal{O}(\sqrt{\Delta t})$: when $\Delta t$ is quartered, the error is merely halved. This is worse than the $\mathcal{O}(\Delta t)$ we expected.
The issue turned out to be the calculation of the error:
fw_errors.append(la.norm(fw - true_value))
bk_errors.append(la.norm(back - true_value))
Testing showed that both the relative $L_2$ norm and the $L_\infty$ norm recover the expected order. In theory the choice of norm should not affect the order of convergence; however, between successive refinements the dimension of the error vector doubles, and the unnormalized $L_2$ norm of a vector with $n$ entries of comparable size scales like $\sqrt{n}$ times the pointwise error, so each refinement inflates the reported error by an extra factor of roughly $\sqrt{2}$ and masks the true order. The relative error divides this factor out and is therefore unaffected, and the $L_\infty$ norm is also unaffected since the maximum entry does not depend on the number of entries.
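A quick synthetic illustration of this effect (made-up numbers, not the notebook's data): the pointwise error is halved while the number of grid points doubles, mimicking one refinement with $h \propto \Delta t$.

import numpy as np

for n in [128, 256, 512, 1024]:
    err = np.full(n, 1.0/n)   # pointwise error halves each time n doubles
    ref = np.ones(n)          # stand-in for the true solution values
    print(n,
          np.linalg.norm(err),                        # plain L2: shrinks only by 1/sqrt(2)
          np.linalg.norm(err)/np.linalg.norm(ref),    # relative L2: halves as expected
          np.abs(err).max())                          # L_inf: halves as expected

The plain $L_2$ column shrinks only by $1/\sqrt{2}$ per refinement even though the pointwise error is halved, which is exactly the $\mathcal{O}(\sqrt{\Delta t})$ behaviour seen above.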
The corrected lines read
# relative L_2 norm
#fw_errors.append(la.norm(fw - true_value)/la.norm(true_value))
#bk_errors.append(la.norm(back - true_value)/la.norm(true_value))
or
# L_\infty norm
fw_errors.append(np.max(np.abs(fw - true_value)))
bk_errors.append(np.max(np.abs(back - true_value)))
The results are now as expected. For the relative $L_2$ norm we have the following errors.
$\Delta t$ | fw_err | fw_ratio | bk_err | bk_ratio |
---|---|---|---|---|
0.1 | 1.9513734 | N/A | 1.9811852 | N/A |
0.05 | 1.9580811 | 1.0034374 | 1.9729271 | 0.99583172 |
0.025 | 0.09898413 | 0.0505516 | 0.29391721 | 0.1489752 |
0.0125 | 0.15497225 | 1.5656272 | 0.25167043 | 0.85626299 |
0.00625 | 0.021353803 | 0.13779115 | 0.079099332 | 0.31429728 |
0.003125 | 0.017661767 | 0.82710168 | 0.04709118 | 0.59534233 |
0.0015625 | 0.0051387323 | 0.29095233 | 0.020161315 | 0.42813357 |
0.00078125 | 0.0029217562 | 0.56857529 | 0.010479506 | 0.51978287 |
0.000390625 | 0.0012724167 | 0.43549724 | 0.005065084 | 0.48333231 |
0.0001953125 | 0.00072702771 | 0.57137547 | 0.0026262749 | 0.5185057 |
9.765625e-05 | 0.00031734098 | 0.4364909 | 0.0012678278 | 0.48274754 |
4.8828125e-05 | 0.00016311025 | 0.51399049 | 0.00063854489 | 0.50365271 |
2.4414063e-05 | 7.9287598e-05 | 0.48609821 | 0.00031705439 | 0.49652639 |
1.2207031e-05 | 4.0765382e-05 | 0.51414576 | 0.00015966073 | 0.50357522 |
6.1035156e-06 | 1.9818923e-05 | 0.48617041 | 7.9269691e-05 | 0.49648833 |
For the $L_\infty$ norm we have the following errors.
$\Delta t$ | fw_err | fw_ratio | bk_err | bk_ratio |
---|---|---|---|---|
0.1 | 8.9067602e-17 | N/A | 9.042832e-17 | N/A |
0.05 | 8.9373763e-17 | 1.0034374 | 9.005139e-17 | 0.99583172 |
0.025 | 0.036892161 | 4.1278514e+14 | 0.10954525 | 1.2164748e+15 |
0.0125 | 0.057759371 | 1.5656272 | 0.093799541 | 0.85626299 |
0.00625 | 0.0079587299 | 0.13779115 | 0.029480941 | 0.31429728 |
0.003125 | 0.0062604997 | 0.78662045 | 0.016692233 | 0.5662042 |
0.0015625 | 0.0019152458 | 0.30592539 | 0.0075142801 | 0.45016627 |
0.00078125 | 0.0010778774 | 0.56278801 | 0.0038660388 | 0.51449223 |
0.000390625 | 0.00047423969 | 0.43997555 | 0.0018877965 | 0.48830253 |
0.0001953125 | 0.00027096893 | 0.57137547 | 0.00097883326 | 0.5185057 |
9.765625e-05 | 0.00011827547 | 0.4364909 | 0.00047252935 | 0.48274754 |
4.8828125e-05 | 6.0755434e-05 | 0.51367738 | 0.00023784571 | 0.5033459 |
2.4414063e-05 | 2.9551109e-05 | 0.48639451 | 0.00011816866 | 0.49682904 |
1.2207031e-05 | 1.5193578e-05 | 0.51414576 | 5.9506807e-05 | 0.50357522 |
6.1035156e-06 | 7.3866678e-06 | 0.48617041 | 2.9544435e-05 | 0.49648833 |