Chapter 1: Basics

Feng Wan

B832@仲英楼 or 3133@泓理楼 // School of Physics @ Xi'an Jiaotong University

1 Differentiation

1.1 Why numerical differentiation?

The functions we encounter often have no analytic form; they may be given only as a table of values, for example when computing the gradient force from computed points on a potential-energy curve.

\(x\) 0.1 0.2 0.3 0.4 0.5
\(f(x)\) 4 4.5 6 8 8.5

This calls for numerical differentiation. It is also useful when the function \(f(x)\) is too complicated to differentiate analytically.

More importantly, numerical differentiation is the foundation of many other numerical methods.

1.1.1 Start from 1st order

In calculus, the derivative is defined as:

\[ f'(x) = \lim_{h \rightarrow 0} \frac{f(x+h)-f(x)}{h} = \lim_{h \rightarrow 0} \frac{f(x+h) - f(x-h)}{2h} \] A natural and simple approach is to approximate this limit by its value at a small but finite \(h\), i.e. by a difference quotient (finite difference).

Forward difference

\[ f'(x_0) \simeq \frac{f(x_0 + h) - f(x_0)}{h} \]

By Taylor expansion,

\[\begin{equation} \begin{aligned} f(x_0 + h) &= f(x_0) + hf'(x_0) + \frac{h^2}{2!}f''(x_0) + ... \\ &=f(x_0) + hf'(x_0) + \frac{h^2}{2!}f''(\xi),~~x_0 \leq \xi \leq x_0 + h \end{aligned} \end{equation}\]

\[ \Rightarrow f'(x_0) = \frac{f(x_0 + h) - f(x_0)}{h} - \frac{h}{2!}f''(\xi),~~x_0 \leq \xi \leq x_0 + h \]

The error is:

\[ R(x) = f'(x_0) - \frac{f(x_0 + h) - f(x_0)}{h} = - \frac{h}{2!}f''(\xi) = O(h) \]

Backward difference

\[ f'(x_0) \simeq \frac{f(x_0) - f(x_0 - h)}{h} \]

By Taylor expansion,

\[\begin{equation} \begin{aligned} f(x_0 - h) &= f(x_0) - hf'(x_0) + \frac{h^2}{2!}f''(x_0) - ... \\ &=f(x_0) - hf'(x_0) + \frac{h^2}{2!}f''(\xi),~~x_0 -h \leq \xi \leq x_0 \end{aligned} \end{equation}\]

\[ \Rightarrow f'(x_0) = \frac{f(x_0) - f(x_0 - h)}{h} + \frac{h}{2!}f''(\xi),~~x_0 - h \leq \xi \leq x_0 \]

The error is:

\[ R(x) = f'(x_0) - \frac{f(x_0) - f(x_0 - h)}{h} = \frac{h}{2!}f''(\xi) = O(h) \]

Central difference

\[ f'(x_0) \simeq \frac{f(x_0 + h) - f(x_0 - h)}{2h} \]

By Taylor expansion,

\[\begin{equation} \begin{aligned} f(x_0 + h) &= f(x_0) + hf'(x_0) + \frac{h^2}{2!}f''(x_0) + \frac{h^3}{3!}f'''(x_0) + ... \\ &=f(x_0) + hf'(x_0) + \frac{h^2}{2!}f''(x_0) + \frac{h^3}{3!}f'''(\xi),~~x_0 \leq \xi \leq x_0 + h \end{aligned} \end{equation}\]
\[\begin{equation} \begin{aligned} f(x_0 - h) &= f(x_0) - hf'(x_0) + \frac{h^2}{2!}f''(x_0) - \frac{h^3}{3!}f'''(x_0) + ... \\ &=f(x_0) - hf'(x_0) + \frac{h^2}{2!}f''(x_0) - \frac{h^3}{3!}f'''(\xi),~~x_0 -h \leq \xi \leq x_0 \end{aligned} \end{equation}\]

\[ \Rightarrow f'(x_0) = \frac{f(x_0 + h) - f(x_0 - h)}{2h} - \left[\frac{h^2}{12}f'''(\xi_1) + \frac{h^2}{12}f'''(\xi_2)\right] \simeq \frac{f(x_0 + h) - f(x_0 - h)}{2h}, \] for \(x_0 - h \leq \xi_1, \xi_2 \leq x_0 + h\)

The error is:

\[ R(x) = f'(x_0) - \frac{f(x_0 + h) - f(x_0 - h)}{2h} = - \frac{h^2}{6}f'''(\xi) = O(h^2) \]

\[f_{\pm 1} \equiv f(x_0 \pm h) = f(x_0) \pm hf'(x_0) + \frac{h^2}{2}f''(x_0) \pm \frac{h^3}{6}f'''(x_0) + O(h^4)\]

\[f_{\pm 2} \equiv f(x_0 \pm 2h) = f(x_0) \pm 2hf'(x_0) + 2h^2f''(x_0) \pm \frac{4}{3}h^3f'''(x_0) + O(h^4)\]

\[f_{+1} - f_{-1} = 2hf'(x_0) + \frac{h^3}{3}f'''(x_0) + O(h^5)\]

\[f_{+2} - f_{-2} = 4hf'(x_0) + \frac{8}{3}h^3f'''(x_0) + O(h^5)\]

\[8(f_{+1} - f_{-1}) - (f_{+2} - f_{-2}) = 12hf'(x_0) + O(h^5)\]

Five-point formula:

\[f'(x_0) = \frac{1}{12h}[8(f_{+1} - f_{-1}) - (f_{+2} - f_{-2})] + O(h^4)\]

with \(f_i \equiv f(x_0 + i h)\).

Higher accuracy comes at the cost of more function evaluations; in practice the three-point formula is usually sufficient.

%Find a suitable step size and examine round-off error: compute d(sin x)/dx = 0.540302 at x = 1 numerically
format long
clear;clc

h=0.1;       %round-off error starts to show around h = 1e-5; compare the errors at 1e-5 and 1e-6
x0=1;
%exact result
y_e=cos(x0);

%two-point forward difference
x_f=[x0,x0+h];
y_f=sin(x_f);
y_f_diff=(y_f(2)-y_f(1))/h;
delta_f=y_e-y_f_diff;

%two-point backward difference
x_b=[x0-h,x0];
y_b=sin(x_b);
y_b_diff=(y_b(2)-y_b(1))/h;
delta_b=y_e-y_b_diff;

%three-point (central) difference
x=[x0-h,x0,x0+h]; 
y=sin(x);
y_diff3=(y(3)-y(1))/(2*h);
delta3=y_e-y_diff3;

%five-point formula
x1=[x0-2*h,x0-h,x0,x0+h,x0+2*h];
y1=sin(x1);
y_diff5=(8*(y1(4)-y1(2))-(y1(5)-y1(1)))/(12*h);
delta5=y_e-y_diff5;

fprintf('Errors of the central, forward, backward, and five-point formulas:'),delta3,delta_f,delta_b,delta5





Code
from numpy import *
import numpy as np

print('results:')
print('\th\t\t central\t forward\t backward\t 5-point')
def printonce(h):
  x0 = 1
  y_e = cos(x0)
  
  # two points forward
  x_f=array([x0,x0+h])
  y_f=sin(x_f)
  y_f_diff=(y_f[1]-y_f[0])/h
  delta_f=y_e-y_f_diff
  
  # two points backward
  x_b=array([x0-h,x0])
  y_b=sin(x_b)
  y_b_diff=(y_b[1]-y_b[0])/h
  delta_b=y_e-y_b_diff
  
  # 3 points forward
  x=array([x0-h,x0,x0+h])
  y=sin(x)
  y_diff3=(y[2]-y[0])/(2*h)
  delta3=y_e-y_diff3
  
  # 5 points forward
  x1=array([x0-2*h,x0-h,x0,x0+h,x0+2*h])
  y1=sin(x1)
  y_diff5=(8*(y1[3]-y1[1])-(y1[4]-y1[0]))/(12*h)
  delta5=y_e-y_diff5

  print('%.6f\t %.6f\t %.6f\t %.6f\t %.6f\t' %(h, delta3,delta_f,delta_b,delta5))

hs = [0.5, 0.2, 0.1, 0.05, 0.02, 0.01, 0.005, 0.002, 0.001, 0.0005, 0.0002, 0.0001]
for h in hs:
  printonce(h)
results:
    h        central     forward     backward    5-point
0.500000     0.022233    0.228254    -0.183789   0.001093   
0.200000     0.003595    0.087462    -0.080272   0.000029   
0.100000     0.000900    0.042939    -0.041138   0.000002   
0.050000     0.000225    0.021257    -0.020807   0.000000   
0.020000     0.000036    0.008450    -0.008378   0.000000   
0.010000     0.000009    0.004216    -0.004198   0.000000   
0.005000     0.000002    0.002106    -0.002101   0.000000   
0.002000     0.000000    0.000842    -0.000841   0.000000   
0.001000     0.000000    0.000421    -0.000421   0.000000   
0.000500     0.000000    0.000210    -0.000210   -0.000000  
0.000200     0.000000    0.000084    -0.000084   0.000000   
0.000100     0.000000    0.000042    -0.000042   -0.000000  
#include<stdio.h>
#include<cmath>

typedef float myfloat;

void showbyh(myfloat h) {
  myfloat x0 = 1.0;
  myfloat y_e = cos(x0);
  
  // two points forward
  myfloat x_f[2] = {x0,x0+h};
  myfloat y_f[2] = {sin(x_f[0]), sin(x_f[1])};
  myfloat y_f_diff=(y_f[1]-y_f[0])/h;
  myfloat delta_f=y_e-y_f_diff;
  
  // two points backward
  myfloat x_b[2] = {x0-h,x0};
  myfloat y_b[2] ={sin(x_b[0]), sin(x_b[1])};
  myfloat y_b_diff=(y_b[1]-y_b[0])/h;
  myfloat delta_b=y_e-y_b_diff;
  
  // 3 points forward
  myfloat x[3] = {x0-h,x0,x0+h};
  myfloat y[3] ={sin(x[0]), sin(x[1]), sin(x[2])};
  myfloat y_diff3=(y[2]-y[0])/(2*h);
  myfloat delta3=y_e-y_diff3;
  
  // 5 points forward
  myfloat x1[5] = {x0-2*h,x0-h,x0,x0+h,x0+2*h};
  myfloat y1[5] = {sin(x1[0]), sin(x1[1]), sin(x1[2]), sin(x1[3]), sin(x1[4])};
  myfloat y_diff5=(8.0*(y1[3]-y1[1])-(y1[4]-y1[0]))/(12*h);
  myfloat delta5=y_e-y_diff5;
  
  printf("%8.6f\t%8.6f\t%8.6f\t%8.6f\t%8.6f\n\n", h, delta3, delta_f, delta_b, delta5);
  //std::cout << h << "\t" << delta3 << "\t" << delta_f << "\t" << delta_b << "\t" << delta5 << std::endl << std::endl;
}


int main()
{
  myfloat hs[12] = {0.5, 0.2, 0.1, 0.05, 0.02, 0.01, 0.005, 0.002, 0.001, 0.0005, 0.0002, 0.0001};
  for (int i = 0; i < 12; i ++) showbyh(hs[i]);
  return 0;
}

float results:

double results:

Note

  • Accuracy of the formulas: five-point \(>\) three-point \(>\) two-point
  • Note that as the step size \(h\) decreases, the error first decreases and then increases

Round-off error

The difference formulas subtract two very close values of \(f\). When the step size is too small, the computer's round-off error makes the computed derivative inaccurate.

Choosing the step size

Let \(D(h)\) and \(D(h/2)\) denote the difference quotient evaluated with step sizes \(h\) and \(h/2\). When \[\left|D(h)-D\left(\frac{h}{2}\right)\right| < \varepsilon\] the step size \(h/2\) is a suitable choice.
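
A minimal Python sketch of this halving strategy (assuming the three-point central formula above; the test case \(\sin x\) at \(x_0=1\) is taken from the example earlier in this section):

Code
import numpy as np

def central(f, x0, h):
    # three-point central difference quotient D(h)
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

def choose_step(f, x0, h=0.1, eps=1e-8, hmin=1e-12):
    # halve h until |D(h) - D(h/2)| < eps (or h becomes unreasonably small)
    while h > hmin:
        if abs(central(f, x0, h) - central(f, x0, h / 2)) < eps:
            return h / 2
        h /= 2
    return h

h_ok = choose_step(np.sin, 1.0)
print(h_ok, central(np.sin, 1.0, h_ok), np.cos(1.0))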

1.1.2 High order derivatives

\[f_1 \equiv f(x_0 + h) = f(x_0) + hf'(x_0) + \frac{h^2}{2}f''(x_0)+\frac{h^3}{3!}f'''(x_0)+O(h^4)\] \[f_{-1} \equiv f(x_0 - h) = f(x_0) - hf'(x_0) + \frac{h^2}{2}f''(x_0) - \frac{h^3}{3!}f'''(x_0)+O(h^4)\]

\[\Rightarrow f''(x_0) = \frac{f(x_0 + h) - 2f(x_0) + f(x_0 - h)}{h^2} + O(h^2)\] Alternatively:

\[\begin{equation} \begin{aligned} f''(x_0) &= \frac{f'(x_0 + h) - f'(x_0)}{h} = \frac{[f(x_0 + h) - f(x_0)]/h - [f(x_0) - f(x_0 - h)] / h}{h^2} \\ & = \frac{f(x_0 + h) - 2f(x_0) + f(x_0 - h)}{h^2} + O(h^2) \end{aligned} \end{equation}\]

Exercise 1 Try to derive the 3rd order of difference.

Example 1 Given the values of \(f(x)=e^{-x/5}\) at the nodes \(x_k = 1.74 + kh\) (\(h=0.02\), \(k=0,1,\dots,5\)), compute the first and second derivatives of \(f(x)\) at the interior nodes, and compare with the analytic results to determine the size of the error.

Solution 1.

Node 1.76 1.78 1.80 1.82
First derivative (numerical) -0.1407 -0.1401 -0.1395 -0.1390
First derivative (error) -3.7508e-7 -3.7359e-7 -3.7209e-7 -3.7061e-7
Second derivative (numerical) 0.0281 0.0280 0.0279 0.0278
Second derivative (error) 3.7509e-8 3.7358e-8 3.7210e-8 3.7061e-8
% filename: chap1_example_1_differential.m
%Numerical differentiation, Example 1
clear;clc
h=0.02;
x=1.74:0.02:1.84;
y=exp(-x./5);

%numerical derivatives; with many interior points a for loop could be used instead
f_1=(y(3)-y(1))/(2*h) %first derivative at 1.76, central difference
f_2=(y(4)-y(2))/(2*h) %first derivative at 1.78, central difference
f_3=(y(5)-y(3))/(2*h) %first derivative at 1.80, central difference
f_4=(y(6)-y(4))/(2*h) %first derivative at 1.82, central difference

ff_1=(y(3)+y(1)-2*y(2))/(h^2)%second derivative at 1.76
ff_2=(y(4)+y(2)-2*y(3))/(h^2)%second derivative at 1.78
ff_3=(y(5)+y(3)-2*y(4))/(h^2)%second derivative at 1.80
ff_4=(y(6)+y(4)-2*y(5))/(h^2)%second derivative at 1.82

%difference between numerical and analytic derivatives
R1_1=f_1+1/5*y(2) %first-derivative error at 1.76
R1_2=f_2+1/5*y(3) %first-derivative error at 1.78
R1_3=f_3+1/5*y(4) %first-derivative error at 1.80
R1_4=f_4+1/5*y(5) %first-derivative error at 1.82

R2_1=ff_1-1/25*y(2) %second-derivative error at 1.76
R2_2=ff_2-1/25*y(3) %second-derivative error at 1.78
R2_3=ff_3-1/25*y(4) %second-derivative error at 1.80
R2_4=ff_4-1/25*y(5) %second-derivative error at 1.82

Alternative:

% filename: chap1_example_1_differential_revised.m
%Numerical differentiation, Example 1
clear;clc
h=0.02;
x1=1.74:0.02:1.84;
y=@(x)exp(-x/5);
%numerical derivatives; a for loop handles an arbitrary number of interior points
for i=1:length(x1)-2
    f(i)=(y(x1(i+2))-y(x1(i)))/(2*h); %first derivative, central difference
    ff(i)=(y(x1(i+2))+y(x1(i))-2*y(x1(i+1)))/(h^2);%second derivative
    %difference between numerical and analytic derivatives
    R(i)=f(i)+1/5*y(x1(i+1)); %first-derivative error
    RR(i)=ff(i)-1/25*y(x1(i+1)); %second-derivative error
end
f
R
ff
RR

1.1.3 Commands

1.1.3.1 Matlab

MATLAB has no command that computes numerical derivatives directly; it only provides the finite-difference commands diff(f) and gradient(f).

  • df=diff(f); computes the two-point forward differences of a one-variable function \(f(x)\): df=f(2:n)−f(1:n−1), so the numerical derivative is df/dx;
  • gf=gradient(f); computes the three-point central differences of \(f(x)\): gf(2:n−1)=(f(3:n)−f(1:n−2))/2, so the numerical derivative is gf/dx;
  • diff: when \(f\) is a vector, df=f(2:n)−f(1:n−1); no difference is computed at the last point.
  • gradient: when \(f\) is a vector,
    • gf(2:n−1)=(f(3:n)−f(1:n−2))/2;
    • at the left end gf(1)=f(2)−f(1),
    • and at the right end gf(n)=f(n)−f(n−1);
%Numerical differentiation, Example 1
clear;clc
h=0.02;
x=1.74:h:1.84;
y=exp(-x./5);

dy=gradient(y)/h  %first derivative, central differences
ddy=diff(dy)/h  %second derivative, from differencing dy

1.1.3.2 Python

import numpy as np
arr = np.array([1, 2, 3, 4, 5])
# first-order differences
first_diff = np.diff(arr)
# second-order differences
second_diff = np.diff(arr, n=2)

dy, dx = np.gradient(z)  # for a 2-D array z: gradient along axis 0 (rows) first, then axis 1 (columns)
Code
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2.0 * np.pi, 2.0 * np.pi, 100)
y = x
xx, yy = np.meshgrid(x, y)
zz = np.sin(xx) * np.cos(yy)
dy, dx = np.gradient(zz)

plt.figure(figsize=(12, 3))
plt.subplot(131)
plt.imshow(zz)
plt.subplot(132)
plt.imshow(dx)
plt.subplot(133)
plt.imshow(dy)
plt.show()

Code
import sympy as sp
x, y = sp.symbols('x y')
z = sp.Function('z')
z = sp.sin(x) * sp.cos(y)

dx = sp.diff(z, x)
dy = sp.diff(z, y)

print(dx, ";", dy)
cos(x)*cos(y) ; -sin(x)*sin(y)

1.1.3.3 C

2 Integrals

2.1 Why numerical integration?

Newton-Leibniz formula

\[I = \int_a^b f(x) dx = F(b) - F(a)\] Requirements

  • the integrand \(f(x)\) has an analytic expression
  • the antiderivative \(F(x)\) is an elementary function

2.2 Problems:

  1. \(f(x)\) has no analytic expression, e.g. it is given only as a table:
\(x\) 0.1 0.2 0.3 0.4 0.5
\(f(x)\) 4 4.5 5 8 8.5
  2. \(f(x)\) has an analytic expression, but its antiderivative is not an elementary function, e.g. \[ \int_a^b e^{-x^2}dx ~~~~~~~~~~ \int_a^b \frac{\sin x}{x}dx \] Neither antiderivative is an elementary function.
  3. \(f(x)\) is a simple elementary function, but its antiderivative has a complicated expression, e.g.:
\[\begin{eqnarray*} f(x) &=& x^2\sqrt{2x^2 + 3} \\ F(x) &=& \frac{1}{4}x^3\sqrt{2x^2 + 3} + \frac{3}{16}x\sqrt{2x^2 + 3} - \\ & &\frac{9}{16\sqrt{2}}\ln(\sqrt{2}x+\sqrt{2x^2 + 3}) \end{eqnarray*}\]
  4. \(f(x)\) has a complicated expression, so that finding the antiderivative is difficult.

The geometric meaning of a one-dimensional definite integral is the area of a curvilinear trapezoid. From the definition of the integral, the basic procedure has four steps: partition, approximate, sum, and take the limit.

Partition the interval \([a, b]\) into \(n\) equal parts, each subinterval having width \(h = (b - a) / n\).

The integral then becomes a sum \[ \int_a^bf(x)dx = \sum_{i = 0}^{n-1} \int_{x_i}^{x_{i+1}} f(x)dx \qquad(1)\]

Rectangle rule

Approximating the integral over each subinterval, \(\int_{x_i}^{x_{i+1}} f(x)dx\), by \(f(x_i) \Delta x = f(x_i) h\), (Equation 1) becomes \[\Rightarrow~~~~~ \int_a^bf(x)dx \simeq \sum_{i = 0}^{n-1}f(x_i)h.\]

Clearly, the error of the rectangle rule is too large. Below we introduce several more practical quadrature formulas.
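
A minimal Python sketch of the left-endpoint rectangle rule; the test integral \(\int_0^1 e^x dx = e-1\) (used again in Exercise 2 below) is chosen here just to expose the \(O(h)\) behaviour:

Code
import numpy as np

def rect(f, a, b, n):
    # left-endpoint rectangle rule: sum of f(x_i) * h over i = 0..n-1
    h = (b - a) / n
    x = a + h * np.arange(n)
    return h * np.sum(f(x))

for n in (10, 100, 1000):
    approx = rect(np.exp, 0.0, 1.0, n)
    print(n, approx, approx - (np.e - 1.0))  # error shrinks roughly like O(h)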

2.3 Linear approximation: the trapezoidal rule

On each subinterval \([x_i, x_{i+1}]\), perform linear Lagrange interpolation

\[\begin{eqnarray*} \varphi(x) &=& f(x_i) \frac{x - x_{i+1}}{x_i - x_{i+1}} + f(x_{i+1}) \frac{x - x_{i}}{x_{i+1} - x_i} \\ &=& f(x_i) l_i(x) + f(x_{i + 1}) l_{i+1}(x) \end{eqnarray*}\]

\[\Rightarrow \int_{x_i}^{x_{i+1}} l_i(x) dx = \frac{h}{2}; ~~~~ \int_{x_i}^{x_{i+1}} l_{i+1}(x) dx = \frac{h}{2}\]

\[R(x) = f(x) - \varphi(x) = \frac{f''(\xi)}{2!}(x-x_0)(x-x_1) =O(h^2)\] \[ \Rightarrow \int_{x_i}^{x_{i+1}} f(x) dx = \int_{x_i}^{x_{i+1}}[\varphi(x) + O(h^2)] dx = \frac{h}{2}[f(x_i) + f(x_{i+1})] + O(h^3)\]

\[ \Rightarrow \int_a^b f(x) dx = \sum_{i=0}^{n-1} \int_{x_i}^{x_{i+1}}f(x) dx = \frac{h}{2}\sum_{i=0}^{n-1}[f(x_i) + f(x_{i+1})] + O(h^2)\]

Composite trapezoidal rule

\[f(x_0) = f(a), ~~~ f(x_n) = f(b), ~~~ x_i = a+ih\]

\[\begin{eqnarray*} \int_a^b f(x) dx &=& \frac{h}{2} [f(a) + 2\sum_{i=1}^{n-1}f(a+ih) + f(b)] + O(h^2) \\ &=& \frac{h}{2}[2\sum_{i=0}^nf(a+ih)-f(a)-f(b)]+O(h^2) \end{eqnarray*}\]
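
A minimal Python sketch of the composite trapezoidal rule above, tested on the same integral \(\int_0^1 e^x dx\) as before:

Code
import numpy as np

def trapezoid(f, a, b, n):
    # composite trapezoidal rule: h/2 * [f(a) + 2*sum(interior values) + f(b)]
    h = (b - a) / n
    fx = f(np.linspace(a, b, n + 1))
    return h * (fx[0] / 2 + fx[1:-1].sum() + fx[-1] / 2)

print(trapezoid(np.exp, 0.0, 1.0, 64) - (np.e - 1.0))  # error is O(h^2)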

2.4 Quadratic approximation: Simpson's rule

On the interval \([x_{i-1}, x_{i+1}]\), perform quadratic Lagrange interpolation

\[\begin{eqnarray*} f(x) &=& f(x_{i-1}) \frac{(x - x_{i})(x - x_{i+1})}{(x_{i-1} - x_{i})(x_{i-1} - x_{i+1})} + \\ & & f(x_{i}) \frac{(x - x_{i-1})(x - x_{i+1})}{(x_{i} - x_{i-1})(x_{i} - x_{i+1})} + \\ & & f(x_{i+1}) \frac{(x - x_{i-1})(x - x_{i})}{(x_{i+1} - x_{i-1})(x_{i+1} - x_{i})} + O(h^3) \\ &=& l_{i-1}(x)f(x_{i-1}) + l_{i}(x)f(x_{i}) + l_{i+1}(x)f(x_{i+1}) + O(h^3) \end{eqnarray*}\]

\[ \int_{x_{i-1}}^{x_{i+1}} l_{i-1}(x)dx = \frac{h}{3}; ~~ \int_{x_{i-1}}^{x_{i+1}} l_{i}(x)dx = \frac{4h}{3}; ~~ \int_{x_{i-1}}^{x_{i+1}} l_{i+1}(x)dx = \frac{h}{3}; \]

\[\Rightarrow \int_{x_{i-1}}^{x_{i+1}}f(x)dx = \frac{h}{3}[f(x_{i-1})+4f(x_i)+f(x_{i+1})]+O(h^5)\] (by symmetry the \(h^4\) error term cancels, so the local error is one order better than the interpolation remainder alone suggests).

Tip

\[\begin{eqnarray*} \int_a^bf(x)dx &=& \sum_{j=0}^{n/2-1}\int_{x_{2j}}^{x_{2j+2}}f(x)dx \\ &=& \frac{h}{3}\sum_{j=0}^{n/2-1}[f(x_{2j})+4f(x_{2j+1})+f(x_{2j+2})] + O(h^4) \end{eqnarray*}\]

Composite Simpson's rule

\[h = \frac{b-a}{2n}, ~~ x_k = a+kh, ~~ (k=0,1,\dots,2n)\]

\[\int_a^bf(x)dx = \frac{h}{3}[f(a)+4\sum_{k=0}^{n-1}f(a+(2k+1)h)+2\sum_{k=1}^{n-1}f(a+2kh)+f(b)]\]
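
A minimal Python sketch of the composite Simpson rule, again tested on \(\int_0^1 e^x dx\) (the integral of the next exercise):

Code
import numpy as np

def simpson(f, a, b, n):
    # composite Simpson rule; n (number of subintervals) must be even
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    fx = f(np.linspace(a, b, n + 1))
    return h / 3 * (fx[0] + fx[-1] + 4 * fx[1:-1:2].sum() + 2 * fx[2:-1:2].sum())

print(simpson(np.exp, 0.0, 1.0, 64) - (np.e - 1.0))  # error is O(h^4)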

Exercise 2 Compute \(\int_0^1e^xdx = 1.718282\) numerically and compare with the exact value.

Solution 2.

% filename: chap1_ex_integration.m 
%numerical integration of e^x dx
format long
clear
clc
%exact value
y_a=exp(1)-1;
%y_aa=vpa(exp(1)-1,7) %vpa controls the number of significant digits printed
%------------------------------------
n=64;
h=1/n;
x=[0:h:1];
f=exp(x);
%trapezoidal rule
y_t=h/2*(-f(1)-f(n+1)+2*sum(f));
y_tt=vpa(y_t,7)
error_1=vpa(y_a-y_t,7)  %error of the trapezoidal rule
%Simpson's rule
ff_2=0;
ff_4=0;
for k=0:(n/2-1)
    ff_1=4*f(2*k+2);
    ff_2=ff_2+ff_1; %second term of the Simpson sum
end
for k=1:(n/2-1)
    ff_3=2*f(1+2*k);
    ff_4=ff_4+ff_3; %third term of the Simpson sum
end               
y_s=h/3*(f(1)+f(n+1)+ff_2+ff_4)
error_2=vpa(y_a-y_s,7)%error of Simpson's rule
%a more compact form of the Simpson formula
y_s_1 = (4*sum(f(2:2:n))+2*sum(f(3:2:n-1))+f(1)+f(n+1))*h/3 %Simpson formula
error_3=vpa(y_a-y_s_1,7)%error of Simpson's rule



2.4.1 More advanced formulas

Using more points and higher-order interpolation gives higher-order formulas, e.g.

Simpson 3/8 rule (cubic interpolation)

\[\int_{x_0}^{x_3} f(x)dx = \frac{3h}{8}[f_0 + 3f_1 + 3f_2 + f_3]+O(h^5)\]

\(h = \frac{b-a}{3n}, ~ x_k = a+kh, ~ (k = 0, 1, \dots, 3n)\)

\[ \int_a^b f(x)dx = \frac{3h}{8}\left\{f(a) + 3\sum_{k=0}^{n-1}[f(a+(3k+1)h)+f(a+(3k+2)h)] + 2\sum_{k=1}^{n-1}f(a+3kh) + f(b)\right\} \]

Boole's rule (often cited as Bode's rule; quartic interpolation)

\[\int_{x_0}^{x_4} f(x)dx = \frac{2h}{45}[7f_0 + 32f_1 + 12f_2 + 32f_3+7f_4]+O(h^7)\]
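
A minimal composite Python sketch of Boole's rule (my own wrapper; it assumes \(n\) is a multiple of 4), again tested on \(\int_0^1 e^x dx\):

Code
import numpy as np

def boole(f, a, b, n):
    # composite Boole (Bode) rule; n must be a multiple of 4
    if n % 4:
        raise ValueError("n must be a multiple of 4")
    h = (b - a) / n
    fx = f(np.linspace(a, b, n + 1))
    total = 0.0
    for j in range(0, n, 4):  # apply the 5-point formula on each panel of 4 subintervals
        total += 7 * fx[j] + 32 * fx[j + 1] + 12 * fx[j + 2] + 32 * fx[j + 3] + 7 * fx[j + 4]
    return 2 * h / 45 * total

print(boole(np.exp, 0.0, 1.0, 64) - (np.e - 1.0))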

Compute \(\int_0^1 \frac{4}{1+x^2}dx = \pi\) using the trapezoidal rule, Simpson's rule, and the Simpson 3/8 rule:

%filename: chap1_example_2_integration.m
clear
clc
%Example 2: numerical integration
format long

n=150;         
%number of subintervals; Simpson's rule requires an even n, the Simpson 3/8 rule a multiple of 3
h=1/n;
x=0:h:1;
y=4./(1+x.^2);

m=length(x); 
%trapezoidal rule
F_1=h/2*(-y(1)-y(m)+2*sum(y))
F_1_error=F_1-pi    %error

ff_2=0;
ff_4=0;
for k=0:(n/2-1)
    ff_1=4*y(2*k+2);
    ff_2=ff_2+ff_1; %second term of the Simpson sum
end
for k=1:(n/2-1)
    ff_3=2*y(1+2*k);
    ff_4=ff_4+ff_3; %third term of the Simpson sum
end    
F_2=h/3*(y(1)+y(m)+ff_2+ff_4)    %Simpson's rule
F_2_error=F_2-pi               %error
%a more compact form of the Simpson formula
y_s_1 = (4*sum(y(2:2:n))+2*sum(y(3:2:n-1))+y(1)+y(n+1))*h/3 %Simpson formula
error_3=y_s_1-pi %error of Simpson's rule
%Simpson 3/8 rule
y_s_2 = (3*sum(y(2:3:n-1))+3*sum(y(3:3:n))+2*sum(y(4:3:n-2))+y(1)+y(n+1))*h*3/8
error_4=y_s_2-pi %error of the Simpson 3/8 rule

%Note: with the same step size the Simpson 3/8 result is slightly less accurate than Simpson's rule (its error constant is larger)

    

Interpolation of too high an order can lead to severe oscillation and hence an inaccurate interpolant of the integrand. To obtain higher accuracy, one therefore usually keeps the interpolation order low and reduces \(h\) instead.

Runge’s phenomenon

Code
import numpy as np

import matplotlib.pyplot as plt

def gety(x):
  return  1 / (1 + 25 * x ** 2)

x = np.linspace(-1, 1, 10)
xx = np.linspace(-5, 5, 500)
yy = gety(xx)
y = gety(x)
z1 = np.polyfit(x, y, 5)
z2 = np.polyfit(x, y, 10)
p1 = np.poly1d(z1)
p2 = np.poly1d(z2)
plt.plot(xx, yy, 'r-', label = 'original')
plt.plot(xx, p1(xx), 'b-v', label = 'fit - 5th order')
plt.plot(xx, p2(xx), 'k-v', label = 'fit - 10th order')
plt.legend()
plt.grid(ls = '-.')
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.xlim([-1.3, 1.3])
plt.ylim([-0.6, 2])
plt.show()

2.5 Improper integrals

  • Improper integrals fall into two classes:
    • the integration interval is finite, but the integrand has a singularity inside it
    • the integration interval is infinite
  • By a change of the integration variable, an improper integral can be turned into an ordinary one.
  1. Integrals with a singularity inside the interval: \(\int_0^1 dx (1-x^2)^{-1/2}g(x)\)
  • There is a singularity at \(x = 1\); assuming \(g\) is finite there, the integral is finite. Substituting \(t = (1 - x)^{1/2} ~~\Rightarrow ~~ 2\int_0^1 dt (2 - t^2)^{-1/2}g(1-t^2)\) (checked numerically in the sketch after this list).
  2. Integrals over an infinite interval: \(\int_1^\infty dx x^{-2} g(x)\)
  • If \(g(x)\) tends to a constant for large \(x\), the integral is finite. Substituting \(t = x^{-1} ~~ \Rightarrow ~~ \int_0^1 dt\, g(t^{-1})\)

2.6 Matlab commands

quad

>> help quad
 quad - (不推荐)以自适应 Simpson 积分法计算数值积分
    此 MATLAB 函数 使用递归自适应 Simpson 积分法逼近函数 fun 从 a 到 b 的积分:
    
    语法
      q = quad(fun,a,b)
      q = quad(fun,a,b,tol)
      q = quad(fun,a,b,tol,trace)
      [q,fcnEvals] = quad(___)

    输入参数
      fun - 被积函数
        函数句柄
      a - 积分范围(以两个参数指定)
        标量
      b - 积分范围(以两个参数指定)
        标量
      tol - 绝对误差容限
        [] 或 1e-6 (默认值) | 标量
      trace - 切换诊断信息
        非零标量

    输出参数
      q - 积分的值
        标量
      fcnEvals - 函数计算次数
        标量

dblquad

>> help dblquad
 dblquad - (不推荐)矩形区域上的二重积分的数值计算
    此 MATLAB 函数 调用 quad 函数来计算 xmin <= x <= xmax,ymin <= y <= ymax 矩
    形区域上的二重积分 fun(x,y)。输入参数 fun 是一个函数句柄,它接受向量 x,标量 y,并返
    回被积函数值的向量。

    建议不要使用 dblquad。请对代码进行相应更改后改用 'integral2'。

    语法
      q = dblquad(fun,xmin,xmax,ymin,ymax)
      q = dblquad(fun,xmin,xmax,ymin,ymax,tol)
      q = dblquad(fun,xmin,xmax,ymin,ymax,tol,method)

integral

>> help integral
 integral - 数值积分
    此 MATLAB 函数 使用全局自适应积分和默认误差容限在 xmin 至 xmax 间以数值形式为函数
    fun 求积分。

    语法
      q = integral(fun,xmin,xmax)
      q = integral(fun,xmin,xmax,Name,Value)

    输入参数
      fun - 被积函数
        函数句柄
      xmin - x 的下限
        实数 | 复数
      xmax - x 的上限
        实数 | 复数

    名称-值参数
      AbsTol - 绝对误差容限
        1e-10 (默认值) | 非负实数
      RelTol - 相对误差容限
        1e-6 (默认值) | 非负实数
      ArrayValued - 数组值函数标志
        false 或 0 (默认值) | true 或 1
      Waypoints - 积分路点
        向量

integral3

>> help integral3
 integral3 - 对三重积分进行数值计算
    此 MATLAB 函数 在区域 xmin ≤ x ≤ xmax、ymin(x) ≤ y ≤ ymax(x) 和 zmin(x,y) ≤
    z ≤ zmax(x,y) 逼近函数 z = fun(x,y,z) 的积分。

    语法
      q = integral3(fun,xmin,xmax,ymin,ymax,zmin,zmax)
      q = integral3(fun,xmin,xmax,ymin,ymax,zmin,zmax,Name,Value)

    输入参数
      fun - 被积函数
        函数句柄
      xmin - x 的下限
        实数
      xmax - x 的上限
        实数
      ymin - y 的下限
        实数 | 函数句柄
      ymax - y 的上限
        实数 | 函数句柄
      zmin - z 的下限
        实数 | 函数句柄
      zmax - z 的上限
        实数 | 函数句柄

    名称-值参数
      AbsTol - 绝对误差容限
        非负实数
      RelTol - 相对误差容限
        非负实数
      Method - 积分法
        'auto' (默认值) | 'tiled' | 'iterated'

\[ V = \int_{x = a}^{x = b} \int_{y = -\sqrt{3/2-x^2}}^{y = \sqrt{3/2-x^2}} \int_{z = x^2 + y^2}^{z = 3-x^2-y^2} dzdydx \]

z1 = @(x, y) 3 - x.^2 - y.^2;
z2 = @(x, y) x.^2 + y.^2;

y1 = @(x) sqrt(1.5 - x.^2);
y2 = @(x) -sqrt(1.5 - x.^2);

fun = @(x, y, z) ones(size(x));
%fun = @(x,y,z) y.*sin(x)+z.*cos(x)

integral3(fun, -1, 1, y2, y1, z2, z1)

2.7 Python commands

Code
#import scipy
#help(scipy.integrate)
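
A minimal sketch of the SciPy counterparts (assuming SciPy is available): integrate.quad does adaptive 1-D quadrature, and integrate.tplquad evaluates a triple integral over the same region as the MATLAB integral3 example above:

Code
import numpy as np
from scipy import integrate

# 1-D: integral of 4/(1+x^2) from 0 to 1 equals pi
val, err = integrate.quad(lambda x: 4.0 / (1.0 + x * x), 0.0, 1.0)
print(val, err)

# triple integral of 1 over the region of the integral3 example (func takes z, y, x)
vol, _ = integrate.tplquad(lambda z, y, x: 1.0,
                           -1.0, 1.0,                          # x limits
                           lambda x: -np.sqrt(1.5 - x**2),     # y lower bound
                           lambda x: np.sqrt(1.5 - x**2),      # y upper bound
                           lambda x, y: x**2 + y**2,           # z lower bound
                           lambda x, y: 3.0 - x**2 - y**2)     # z upper bound
print(vol)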

3 Roots

Abel–Ruffini theorem: a general algebraic equation of degree higher than four has no general algebraic solution in radicals.

\[f(x) = a_nx^n + a_{n-1}x^{n-1}+...+a_1x + a_0~~~(a_n\neq 0)\]

For more complicated equations, such as the nonlinear equation \[ \cos x \cosh x + 1 = 0,\] it is difficult to obtain the roots analytically. Numerical root finding is also needed in many later chapters, e.g. in the shooting method for boundary-value problems of ordinary differential equations. As a foundation, we discuss only the numerical solution of nonlinear equations in a single variable.

When to stop?

\[|x_{k+1} - x_k| < \varepsilon_1\]

3.1.1 Implementation

Bisection (binary search)

%%{ init : { "theme" : "dark", "flowchart" : { "curve" : "linear" }}}%%
flowchart LR

A["`interval [a, b] f(a)f(b)<0; 
required accuracy e`"] --> B["let x = (a+b)/2"] --> C{"|b-a|< e"} -->|"true"| D["return x"]

C -->|"false"| E{"f(a)f(x)<0"} 

E --> |"true"|F["b=x"] --> A
E --> |"false"|G["a=x"] --> A

Pros & Cons:

  • 👍 Simple and easy to use
  • 👍 Robust: it demands little of \(f(x)\); continuity is enough to guarantee convergence
  • 👎 Converges slowly, and it can only find simple roots (where the sign changes)

Exercise 3 Use the bisection method to find the real root of \(2x^3−5x−1=0\) in the interval \([1,2]\), with error tolerance \(\varepsilon \leq 10^{-2}\).

Solution 3. Let \(f(x)=2x^3-5x-1\).

\(f(1)<0, f(2)>0\): \(I_0=[1,2]\), \(x_0=(1+2)/2=1.5\)

Since \(f(x_0)f(1)>0\): \(I_1=[1.5,2]\), \(x_1=(1.5+2)/2=1.75\)

\(f(x_1)f(1.5)<0\): \(I_2=[1.5,1.75]\), \(x_2=(1.5+1.75)/2=1.625\)

...

\(I_6=[1.671875, 1.6875]\)

\(I_7=[1.671875, 1.679688]\)

\(b_7-a_7=0.7813\times10^{-2}<10^{-2}\)

\(x^*\approx x_7=1.6758\)

%filename: chap1_example_3_bisection.m
%bisection method for finding the root of the equation
clear all
close all
clc
format long
%plot the curve
x1=[1:0.0001:2];
y=@(x1) 2*x1.^3-5*x1-1;  %define an anonymous (handle) function
plot(x1,y(x1),'r',x1,0,'b')
hold on

%find the root by bisection
a=1.0;b=2.0;delta=0.01;
n=0; %number of bisection steps
if y(a)*y(b)> 0
    fprintf('The equation has no root in [%d,%d]',a,b)
else    
    while abs(b-a)>delta    %stopping criterion
        x=(a+b)/2;    %bisection
        n=n+1;
        if y(x)*y(a)<0;%
            b=x;
        else
            a=x;
        end
        plot(a,y(a),'o',b,y(b),'*')
        pause(1)
    end
    x=(a+b)/2;
    fprintf('The root in [1,2] is'),x
    fprintf('Number of bisection steps:'),n
end
hold off

    

3.2 Newton-Raphson method

Let the nonlinear function \(f(x)\) in the equation \(f(x) = 0\) be continuously differentiable. Near an approximation \(x_k\) of the root, we expand \(f(x)\) in a Taylor series:

\[f(x_k) + (x - x_k)f'(x_k) + \frac{(x - x_k)^2}{2}f''(x_k) + \frac{(x - x_k)^3}{6}f'''(x_k) + O\left((x - x_k)^4\right) = 0\]

Linear part:

\[x^* \simeq x_k - \frac{f(x_k)}{f'(x_k)}\]

Geometrically, each step follows the tangent line at \(x_k\) down to its intersection with the x-axis, which gives the iteration

\[x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}, ~~~ k = 0,1,2,...\]

Stopping condition: \(|f(x_k)| < \delta\) and \(|x_{k+1} - x_k| < \varepsilon_1\).

Taking \(k = 0\): \(x_1 \simeq x_0 - \frac{f(x_0)}{f'(x_0)}\).

Code
plt.figure(figsize=(14, 4))

def f(x):
    return 1e-3 * (x**3 - 2*x - 5)

x = np.linspace(-10, 10, 100)

ax = plt.subplot(121)
ax.plot(x, f(x), label = r'$f(x)$')
ax.plot(-3, f(-3), 'ko')
ax.text(-5, 0.2, r'$x_k, \delta$', color = 'k')
ax.plot([-10, 10], [f(-3), f(-3)], 'k-.')
ax.text(2.094, f(2.094), r'$x^*$', color = 'red')
ax.plot(2.094, f(2.094), 'rs')
ax.legend()

# 隐藏坐标轴
ax.spines['left'].set_position('zero')
ax.spines['bottom'].set_position('zero')
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')



def f(x):
    return np.tanh(x)
ax = plt.subplot(122)

ax.plot(x, f(x), label = r'$f(x)$')
ax.plot(2, f(2), 'ko')
ax.text(2, f(2), r'$x_k, \delta$', color = 'k')
ax.plot([0, 2], [f(2), f(2)], 'k-.')

ax.text(0, f(0), r'$x^*$', color = 'red')
ax.plot(0, f(0), 'rs')
ax.legend()

# 隐藏坐标轴
ax.spines['left'].set_position('zero')
ax.spines['bottom'].set_position('zero')
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')


plt.show()

Using only \(|f(x)|<\delta\) may leave a large error (first panel); using only \(|x_{k+1}-x_k| < \varepsilon_1\) may also leave a large error (second panel).

Newton iteration

%%{ init : { "theme" : "neutral", "flowchart" : { "curve" : "linear" }}}%%

graph LR

A["`init x(0), error d, e; 
k = 0`"]
B["x(k+1) = x(k) - f(k) / f'(k)"]
C{"`|f(k)| < d and 
|x(k+1) - x(k)| < e`"}
D["k = k + 1"]
E["stop"]
A --> B --> C --> |true| E 
C --> |false| D --> B 
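
A minimal Python sketch of the Newton iteration in the flowchart, applied to the function of the next exercise (with its derivative supplied analytically):

Code
import numpy as np

def newton(f, df, x0, delta=1e-10, eps=1e-10, maxiter=100):
    # iterate x <- x - f(x)/f'(x) until both stopping criteria are satisfied
    x = x0
    for _ in range(maxiter):
        x_new = x - f(x) / df(x)
        if abs(f(x_new)) < delta and abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

f  = lambda x: np.exp(x) - np.arctan(x) - 1.5
df = lambda x: np.exp(x) - 1.0 / (1.0 + x**2)
print(newton(f, df, -7.0))   # converges to about -14.1013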

Exercise 4 Use Newton's iteration to find the zero of \(f(x) = e^x -\tan^{-1}x - 1.5\), starting from \(x_0 = -7.0\); also find the root on \([−16,−7]\) with the bisection method and compare the convergence speed of the two methods. Take the error tolerance to be \(10^{−10}\).

Solution 4.

Code
import numpy as np
import matplotlib.pyplot as plt

# 定义函数及其导数
def f(x):
    return np.exp(x) - 1.5 - np.arctan(x)

def df(x):
    return np.exp(x) - 1/(x**2 + 1)


# 创建x值范围
x = np.linspace(-15, -5, 1000)

# 计算y值
y = f(x)

# 创建图形并绘制曲线
plt.figure(figsize=(4, 6))
plt.plot(x, y, label='f(x) = e^x - 1.5 - arctan(x)', color='blue')


iterations = 3  # 迭代次数
xx = np.zeros(iterations)
xx[0] = -7.0 # 初始猜测值

# 进行牛顿迭代
for i in range(iterations - 1):
    xx[i + 1] = xx[i] - f(xx[i]) / df(xx[i])
    x0 = xx[i]
    y0 = f(x0)
    k = df(x0)
    # 绘制迭代路径
    x_path = np.linspace(x0 - 4, x0 + 4, 20)
    y_path = k * x_path + y0 - k * x0
    plt.plot(x_path, y_path, color='red')
    plt.plot([x0, x0], [y0, 0], color = 'k', ls = '--')

ax = plt.gca()
# 隐藏坐标轴
ax.spines['left'].set_position(('data', -15))
ax.spines['bottom'].set_position('zero')
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')


plt.ylim([-0.08, 0.01])
plt.xlim([-16, -6])

# 添加图例
#plt.legend()
# 添加标题、坐标轴标签和图例
plt.xlabel('x')
plt.ylabel('f(x)')
# 显示图形
plt.show()

Solution: \(f(x_0) = -0.702\times 10^{-1}\), \(f'(x) = e^x - (1 + x^2)^{-1}\), and we iterate \[ x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} \] until \(|f(x_k)| \leq 10^{-10}\):

\(k\) \(x\) \(f(x)\)
0 -7.0000 -0.0701888
1 -10.6771 -0.0225666
2 -13.2792 -0.00436602
3 -14.0537 -0.00023902
4 -14.1011 -7.99585e-007
5 -14.1013 -9.00833e-012
%filename: chap1_example_4_bisection_newton.m
%利用二分法和Newton-Raphson方法求解方程的根;
clear
clc
close all
format long
%画曲线
x1=[-16:0.01:-7];
y=@(x1)exp(x1)-1.5-atan(x1);
plot(x1,y(x1),'r',x1,0,'b')

delta=10^-10; %delta为误差标准,注意此次用函数值作为误差标准
delta1=10^-5;
tic
%用二分法求根
a=-16;b=-7;
n=0; %搜索次数
x=(a+b)/2;
while abs(a-b)>delta  %判断标准
    n=n+1;
    if y(x)*y(a)<0
        b=x;
    else
        a=x;
    end
    x=(a+b)/2;     %二分法
end
toc
fprintf('二分法求得方程在[-16,-7]区间上的根为'),x
fprintf('x处函数值为'),y(x)
fprintf('搜索次数为'),n

hold on
%用Newton法求根
x0=-7.0;
k=0; %迭代次数
tic
x1=x0-y(x0)/(exp(x0)-1/(1+x0^2)); %Newton迭代公式
while abs(y(x0))>delta | abs(x1-x0)>delta1 %判断标准
    x0=x1;
    x1=x0-y(x0)/(exp(x0)-1/(1+x0^2)); %Newton迭代公式
    k=k+1;
    %f=exp(x1)-1.5-atan(x1); %输出此时的函数值
    %plot(x1,f,'*')
    %pause(1)
end
toc
hold off
fprintf('Newton法求得的方程的根为'),x1
fprintf('x0处函数值为'),y(x1)
fprintf('迭代次数为'),k

    

Note: the convergence of the Newton-Raphson method depends on the choice of \(x_0\).

3.2.0.1 Pros & cons

👍 Fast convergence, good stability, and high accuracy: one of the most effective methods for solving nonlinear equations.

👎 Each iteration requires evaluating both the function and its derivative, so the cost per step is high. When the derivative is difficult to obtain, Newton's method cannot be used. Convergence also depends on the choice of the initial value.

3.3 Secant method

The secant method offers a compromise between the efficiency of the Newton-Raphson method and the inconvenience of having to compute derivatives.

Tangent slope \(\simeq\) secant slope \(\Rightarrow f'(x_k) \simeq \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}}\) (a backward difference).

Substituting into the Newton iteration formula:

\[\Rightarrow x_{k+1} = x_k - \frac{f(x_k)(x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}\]

Any two initial values \(x_0\) and \(x_1\) can start this recursion; see the sketch below.
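
A minimal Python sketch of the recursion, tested on the equation \(xe^x - 1 = 0\) of the exercise below:

Code
import numpy as np

def secant(f, x0, x1, eps=1e-10, maxiter=100):
    # x_{k+1} = x_k - f(x_k)(x_k - x_{k-1}) / (f(x_k) - f(x_{k-1}))
    for _ in range(maxiter):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(x2 - x1) < eps:
            return x2
        x0, x1 = x1, x2
    return x1

print(secant(lambda x: x * np.exp(x) - 1.0, 0.5, 0.6))  # about 0.567143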

3.3.1 When to stop?

The same criteria as for Newton's method apply: stop when \(|f(x_k)| < \delta\) and \(|x_{k+1} - x_k| < \varepsilon_1\).

3.3.2 Implementation

Tip

\[\mathrm{Core}: ~~~x_{k+1} = x_k - \frac{f(x_k)(x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}\]

%%{ init : { "theme" : "neutral", "flowchart" : { "curve" : "linear" }}}%%

graph LR

A("`init x(0), x(1), error d, e; 
k = 0`")
C["Core"]
B{"`|f(k)| < d and 
|x(k+1) - x(k)| < e`"}
D["k = k + 1"]
E["stop"]
A --> B --> |true| E 
B --> |false| C --> D --> B 

Exercise 5 Use the secant method to find the root of \(xe^x - 1 = 0\) near \(x = 0.5\) (\(\varepsilon = 10^{-4}\)).

Solution 5. Take \(x_0 = 0.5\), \(x_1 = 0.6\):

\(k\) \(x_k\) \(x_k - x_{k-1}\)
0 0.5
1 0.6
2 0.56532 -0.03468
3 0.56709 0.00178
4 0.56714 0.00001

Therefore \(x^* \simeq 0.56714\).

%filename: chap1_example_5_bisection_newton_secant.m 
%利用二分法、Newton-Raphson方法和弦割法求解方程的根.
%通过此例题可以验证Newton法和弦割法对初始点的依赖
clear all
close all
clc

format long
%画曲线
xx=[-5:0.01:2];
y=@(x)x.*exp(x)-1;
plot(xx,y(xx),'r',xx,0,'b')

delta=10^-8; %delta为误差标准,注意此次用自变量间距作为误差标准
%用二分法求根
a=0;b=1;
n=0; %搜索次数
while abs(b-a)>delta    %判断标准
    x=(a+b)/2;     %二分法
    n=n+1;
    if y(x)*y(a)<0
        b=x;
    else
        a=x;
    end
end
x=(a+b)/2;
fprintf('二分法求得方程在[0,1]区间上的根为'),x
fprintf('x处函数值为'),y(x)
fprintf('搜索次数为'),n


%用Newton法求根
x0=-4;
k=1; %迭代次数
x1=x0-(x0*exp(x0)-1)/(exp(x0)+x0*exp(x0)); %为了用自变量判断标准,先做一次Newton迭代
while abs(x1-x0)>delta | abs(y(x1))>delta   %判断标准
    x2=x1-y(x1)/(exp(x1)+x1*exp(x1)); %Newton迭代公式
    x0=x1;
    x1=x2;
    k=k+1;
    f=x1*exp(x1)-1; %输出此时的函数值
end
fprintf('Newton法求得的方程的根为'),x1
fprintf('x1处函数值为'),y(x1)
fprintf('迭代次数为'),k

%弦割法求根
x3=5;x4=6; %两个启动点
m=1;
while abs(x3-x4)>delta | abs(y(x4))>delta %判断标准
    x5=x4-y(x4)*(x4-x3)/(y(x4)-y(x3)); %弦割法的迭代公式
    m=m+1;
    %x5-x4
    x3=x4;
    x4=x5;
end
fprintf('弦割法求得的方程的根为'),x5
fprintf('x1处函数值为'),y(x5)
fprintf('迭代次数为'),m



    

3.3.3 Comparison

Code
#!/usr/bin/env python

"""Implements a variety of root-finding methods, including providing estimates
of convergence rates.

USAGE: (from shell prompt)
    python ./findroot.py

AUTHOR:
    Jonathan Senning <jonathan.senning@gordon.edu>
    Gordon College
    Python Version: March 8, 2008
"""

import math

#------------------------------------------------------------------------------

def bisect( f, a, b, tol, maxiter ):
    """
    Usage: ( x, rate ) = bisect( f, a, b, tol, maxiter )

    This function uses bisection to solve f(x) = 0.

    This function requires that the function f(x) be already defined, and
    that its name be passed in as a string as the first argument.
    """

    def sign( x ):
        if x < 0:
            return -1
        elif x > 0:
            return 1
        else:
            return 0

    x  = a
    fa = f( a )
    fb = f( b )

    # Make sure that (according to the Intermediate Value Theorem) the
    # specified interval does contain a root.

    if sign( fa ) == sign( fb ):
        print ("Interval [%f,%f] may not contain a root" % ( a, b ))

    # Initialize for the search.  Note that we try and minimize the number
    # of evaluations of f(x).  It is initially evaluated twice before the
    # iteration begins and then once during each iteration.  These extra
    # points are needed to start off the error calculation which compares
    # estimates of solutions just computed with previous estimates.

    x0 = x + 4.0 * tol
    x1 = x + 8.0 * tol

    k = 0

    while k <= maxiter:
        x2, x1, x0 = ( x1, x0, x )
        x = ( a + b ) / 2.0
        fx = f( x )
        max_error = abs( b - a ) / 2.0
        print ("%2d %18.11e %18.11e %18.11e %18.11e" % ( k, a, b, x, max_error ))
        k = k + 1

        if max_error < tol:
            break

        if sign( fa ) == sign( fx ):
            a, fa = ( x, fx )
        else:
            b = x

    if k > maxiter:
        print ("Error: exceeded %d iterations" % maxiter)

    rate = math.log( abs((x - x0) / (x0 - x1)) ) / \
           math.log( abs((x0 - x1) / (x1 - x2)) )

    return ( x, rate )

#------------------------------------------------------------------------------

def newton( f, df, x, tol, maxiter ):
    """
    Usage: ( x, rate ) = newton( f, df, x, tol, maxiter )

    This function performs a Newton-Raphson iteration to solve f(x) = 0.

    This function requires that the functions f(x) and df(x) (the first
    derivative of f with respect to x) both be already defined, and that
    their  names be passed in as a strings as the first two arguments.
    """
    x1 = x + 8.0 * tol
    x0 = x + 4.0 * tol

    k = 0

    while k <= maxiter and abs( x - x0 ) >= tol:
        x2, x1, x0 = ( x1, x0, x )
        x = x - f( x ) / df( x )
        print ("%2d %18.11e %18.11e" % ( k, x, abs( x - x0 ) ))
        k = k + 1

        if k > maxiter:
            print ("Error: exceeded %d iterations" % maxiter)

    rate = math.log( abs((x - x0) / (x0 - x1)) ) / \
           math.log( abs((x0 - x1) / (x1 - x2)) )

    return ( x, rate )

#------------------------------------------------------------------------------

def secant( f, x, tol, maxiter ):
    """
    Usage: ( x, rate ) = secant( f, x, tol, maxiter )

    This function performs a secant iteration to solve f(x) = 0.

    This function requires that the function f(x) be already defined, and
    that its name be passed in as a string as the first argument.
    """

    # These extra points are needed to start off the error calculation which
    # compares estimates of solutions just computed with previous estimates.

    x1 = x + 8.0 * tol
    x0 = x + 4.0 * tol

    fx0 = f(x0 )

    k = 0

    while k <= maxiter and abs( x - x0 ) >= tol:
        x2, x1, x0, fx1 = ( x1, x0, x, fx0 )
        fx0 = f( x0 )
        x = x0 - fx0 * ( x0 - x1 ) / ( fx0 - fx1 )
        print ("%2d %18.11e %18.11e" % ( k, x, abs( x - x0 ) ))
        k = k + 1

    if k > maxiter:
        print ("Error: exceeded %d iterations" % maxiter)

    rate = math.log( abs((x - x0) / (x0 - x1)) ) / \
           math.log( abs((x0 - x1) / (x1 - x2)) )

    return ( x, rate )

#------------------------------------------------------------------------------

def fixedpoint( f, x, a, tol, maxiter ):
    """
    Usage: ( x, rate ) = fixedpoint( f, x, a, tol, maxiter )

    This function performs a fixed point iteration to solve f(x) = 0.  It
    constructs a function g(x) = x + a * f(x) so that p = g(p) when
    f(p) = 0.  The value of "a" is choosen to speed convergence -- it
    should make g'(p) = 0 where p is the solution of f(p) = 0.

    This function requires that the function f(x) be already defined, and
    that its name be passed in as a string as the first argument.
    """

    # These extra points are needed to start off the error calculation which
    # compares estimates of solutions just computed with previous estimates.

    x1 = x + 8.0 * tol
    x0 = x + 4.0 * tol

    k = 0

    while k <= maxiter and abs( x - x0 ) >= tol:
        x2, x1, x0 = ( x1, x0, x )
        x = x + a * f( x )
        print ("%2d %18.11e %18.11e" % ( k, x, abs( x - x0 ) ))
        k = k + 1

    if k > maxiter:
        print ("Error: exceeded %d iterations" % maxiter)

    rate = math.log( abs((x - x0) / (x0 - x1)) ) / \
           math.log( abs((x0 - x1) / (x1 - x2)) )

    return ( x, rate )

#------------------------------------------------------------------------------

def steffensen( f, x, a, tol, maxiter ):
    """
    Usage: ( x, rate ) = steffensen( f, x, a, tol, maxiter )

    This function performs a steffensen's iteration to solve f(x) = 0.
    It constructs a function g(x) = x + a * f(x) so that p = g(p) when
    f(p) = 0.  The value of "a" is choosen to speed convergence -- it
    should make g'(p) = 0 where p is the solution of f(p) = 0.

    This function requires that the function f(x) be already defined, and
    that its name be passed in as a string as the first argument.
    """

    # These extra points are needed to start off the error calculation which
    # compares estimates of solutions just computed with previous estimates.

    x1 = x + 8.0 * tol
    x0 = x + 4.0 * tol

    k = 0

    while k <= maxiter and abs( x - x0 ) >= tol:
        x2, x1, x0 = ( x1, x0, x )
        p1 = x  + a * f( x )
        p2 = p1 + a * f( p1 )
        x = x - ( ( p1 - x )**2 ) / ( p2 - 2 * p1 + x )
        print ("%2d %18.11e %18.11e" % ( k, x, abs( x - x0 ) ))
        k = k + 1

    if k > maxiter:
        print ("Error: exceeded %d iterations" % maxiter)

    rate = math.log( abs((x - x0) / (x0 - x1)) ) / \
           math.log( abs((x0 - x1) / (x1 - x2)) )

    return ( x, rate )

#------------------------------------------------------------------------------
#------------------------------------------------------------------------------
#------------------------------------------------------------------------------

if __name__ == "__main__":
    """
    J. R. Senning <jonathan.senning@gordon.edu>
    Gordon College
    Python Version: March 8, 2008

    This program uses five different root-finding algorithms to solve
    f(x) = 0.
    """

    # Maximum number of iterations

    maxiter = 100

    # Tolerance for all algorithms.  This value determines how accurate
    # the final estimate of the root will be.

    tol = 1e-8

    # ---------- Define f(x), df(x) and the interval to search -------------

    def  f(x): return 1 - x * math.exp( x )
    def df(x): return -( 1 + x ) * math.exp( x )

    a, b = ( 0.0, 2.0 )
    first_guess = ( a + b ) / 2.0

    print ("************************************************************")
    print ("Finding root of f(x) = 1 - x * exp(x) on [%f,%f]" % ( a, b ))
    print ("************************************************************")

    # ======================================================================
    # ---------- Bisection Method ------------------------------------------
    # ======================================================================

    print ("\n-------- Bisection Method ---------------------------\n")
    x, rate = bisect( f, a, b, tol, maxiter )
    print ("root = ", x)
    print ("Estimated Convergence Rate = %5.2f" % rate)

    # ======================================================================
    # ---------- Newton's method -------------------------------------------
    # ======================================================================

    print ("\n-------- Newton's Method ----------------------------\n")
    x, rate = newton( f, df, first_guess, tol, maxiter )
    print ("root = ", x)
    print ("Estimated Convergence Rate = %5.2f" % rate)

    # ======================================================================
    # ---------- Secant method ---------------------------------------------
    # ======================================================================

    print ("\n-------- Secant Method ------------------------------\n")
    x, rate = secant( f, first_guess, tol, maxiter )
    print ("root = ", x)
    print ("Estimated Convergence Rate = %5.2f" % rate)

    # ======================================================================
    # ---------- Fixed point method ----------------------------------------
    # ======================================================================

    # The third parameter to this function should ideally be -1/df(r) where 
    # r is the exact root and df(r) is the value of the first derivitive of
    # f evaluated at r.  We can approximate df(f) with the centered
    # difference formula
    #               ( f( x + tol ) - f( x - tol ) )
    #       df(r) = -------------------------------
    #                          2 * tol
    #
    # so that -1/df(r) = 2 * tol / ( f(x-tol) - f(x+tol) )
    #
    # For the "exact" root, we use the root returned by the last method...
    # This is, of course, a little bogus since if we already have a good
    # estimate of the root we don't need another one.  It illustrates,
    # however, that a good choice of this parameter can really speed up
    # convergence to the root.

    dfr = 2 * tol / ( f( x - tol ) - f( x + tol ) )

    print ("\n-------- Fixed-Point Method -------------------------\n")
    x,  rate = fixedpoint( f, first_guess, dfr, tol, maxiter )
    print ("root = ", x)
    print ("Estimated Convergence Rate = %5.2f" % rate)

    # ======================================================================
    # ---------- Steffensen's method ---------------------------------------
    # ======================================================================

    #
    # The value of dfr is computed above for the fixedpoint iteration.
    #

    print ("\n-------- Steffensen's Method ------------------------\n")
    x, rate = steffensen( f, first_guess, dfr, tol, maxiter )
    print ("root = ", x)
    print ("Estimated Convergence Rate = %5.2f" % rate)
************************************************************
Finding root of f(x) = 1 - x * exp(x) on [0.000000,2.000000]
************************************************************

-------- Bisection Method ---------------------------

 0  0.00000000000e+00  2.00000000000e+00  1.00000000000e+00  1.00000000000e+00
 1  0.00000000000e+00  1.00000000000e+00  5.00000000000e-01  5.00000000000e-01
 2  5.00000000000e-01  1.00000000000e+00  7.50000000000e-01  2.50000000000e-01
 3  5.00000000000e-01  7.50000000000e-01  6.25000000000e-01  1.25000000000e-01
 4  5.00000000000e-01  6.25000000000e-01  5.62500000000e-01  6.25000000000e-02
 5  5.62500000000e-01  6.25000000000e-01  5.93750000000e-01  3.12500000000e-02
 6  5.62500000000e-01  5.93750000000e-01  5.78125000000e-01  1.56250000000e-02
 7  5.62500000000e-01  5.78125000000e-01  5.70312500000e-01  7.81250000000e-03
 8  5.62500000000e-01  5.70312500000e-01  5.66406250000e-01  3.90625000000e-03
 9  5.66406250000e-01  5.70312500000e-01  5.68359375000e-01  1.95312500000e-03
10  5.66406250000e-01  5.68359375000e-01  5.67382812500e-01  9.76562500000e-04
11  5.66406250000e-01  5.67382812500e-01  5.66894531250e-01  4.88281250000e-04
12  5.66894531250e-01  5.67382812500e-01  5.67138671875e-01  2.44140625000e-04
13  5.67138671875e-01  5.67382812500e-01  5.67260742188e-01  1.22070312500e-04
14  5.67138671875e-01  5.67260742188e-01  5.67199707031e-01  6.10351562500e-05
15  5.67138671875e-01  5.67199707031e-01  5.67169189453e-01  3.05175781250e-05
16  5.67138671875e-01  5.67169189453e-01  5.67153930664e-01  1.52587890625e-05
17  5.67138671875e-01  5.67153930664e-01  5.67146301270e-01  7.62939453125e-06
18  5.67138671875e-01  5.67146301270e-01  5.67142486572e-01  3.81469726562e-06
19  5.67142486572e-01  5.67146301270e-01  5.67144393921e-01  1.90734863281e-06
20  5.67142486572e-01  5.67144393921e-01  5.67143440247e-01  9.53674316406e-07
21  5.67142486572e-01  5.67143440247e-01  5.67142963409e-01  4.76837158203e-07
22  5.67142963409e-01  5.67143440247e-01  5.67143201828e-01  2.38418579102e-07
23  5.67143201828e-01  5.67143440247e-01  5.67143321037e-01  1.19209289551e-07
24  5.67143201828e-01  5.67143321037e-01  5.67143261433e-01  5.96046447754e-08
25  5.67143261433e-01  5.67143321037e-01  5.67143291235e-01  2.98023223877e-08
26  5.67143261433e-01  5.67143291235e-01  5.67143276334e-01  1.49011611938e-08
27  5.67143276334e-01  5.67143291235e-01  5.67143283784e-01  7.45058059692e-09
root =  0.5671432837843895
Estimated Convergence Rate =  1.00

-------- Newton's Method ----------------------------

 0  6.83939720586e-01  3.16060279414e-01
 1  5.77454477154e-01  1.06485243431e-01
 2  5.67229737730e-01  1.02247394243e-02
 3  5.67143296530e-01  8.64411998212e-05
 4  5.67143290410e-01  6.12051198612e-09
root =  0.567143290409784
Estimated Convergence Rate =  2.00

-------- Secant Method ------------------------------

 0  6.83939730311e-01  3.16060269689e-01
 1  6.01537177763e-01  8.24025525482e-02
 2  5.70263765024e-01  3.12734127389e-02
 3  5.67230031394e-01  3.03373363031e-03
 4  5.67143511852e-01  8.65195418288e-05
 5  5.67143290426e-01  2.21426644798e-07
 6  5.67143290410e-01  1.57320823035e-11
root =  0.5671432904097838
Estimated Convergence Rate =  1.60

-------- Fixed-Point Method -------------------------

 0  3.78160239580e-01  6.21839760420e-01
 1  5.40303702823e-01  1.62143463243e-01
 2  5.66560547515e-01  2.62568446913e-02
 3  5.67143012343e-01  5.82464828031e-04
 4  5.67143290410e-01  2.78067183279e-07
 5  5.67143290410e-01  6.38378239159e-14
root =  0.5671432904097838
Estimated Convergence Rate =  2.00

-------- Steffensen's Method ------------------------

 0  5.06769180809e-01  4.93230819191e-01
 1  5.67290144851e-01  6.05209640421e-02
 2  5.67143290408e-01  1.46854443692e-04
 3  5.67143290410e-01  2.12452277992e-12
root =  0.5671432904097838
Estimated Convergence Rate =  3.00

Pros and Cons

  • Bisection is the most robust but the least efficient.
  • Newton's method is the most efficient, but it requires an analytic function whose derivative is easy to compute.
  • The secant method is a compromise between the two: it is quite efficient and, unlike Newton's method, it does not require the derivative of \(f\). If the initial guesses are reasonably close to the desired root, it converges almost as fast as Newton's method.

If the function behaves badly near the desired root (e.g. there is an inflection point near \(x_0\)), the automatic Newton and secant iterations may fail to converge or may converge to a wrong result. A safe strategy is to first locate the root roughly with bisection and then pin it down precisely with one of the two faster methods.

3.4 Multiple roots

If the equation to be solved has several real roots, for example \[2x^3 -5x - 1= 0\] has three real roots, how do we find all of them numerically?

Code
x = np.linspace(-3, 3, 100)
ax = plt.axes()
ax.plot(x, 2.0 * x**3.0 - 5.0 * x - 1.0)
ax.set_xlabel(r"$x$")
ax.set_ylabel(r"$y$")
shiftax(ax)   # shiftax: plotting helper defined elsewhere in these notes
plt.show()

Stepwise search

\(f(x) = 0\)\([a, b]\)上有实根,选定步长\(h\),从\(a\)开始依次扫描计算\(x_k = a + hk, ~~ (k = 0, 1, 2, ...)\)处的函数值\(f(x_k)\);若\(f(x_k) = 0\) ,则\(x_k\)即为方程的一个实根.

  • If at two adjacent points \(f(x_k)f(x_{k+1}) < 0\) (opposite signs), the small interval contains at least one real root;

  • we may take \(x_k\), \(x_{k+1}\), or \(\frac{x_k + x_{k+1}}{2}\) as an approximation of the root \(x^*\).

  • As long as the step is small enough, an approximate root of any desired accuracy can in principle be obtained, but the smaller the step, the larger the computational cost. The method is therefore usually used only to locate the rough position of the roots.

  • 👍 Hence, for problems with multiple roots, we can use the stepwise search to bracket every root and then refine each one with bisection, Newton-Raphson, or the secant method; see the sketch below.
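
A minimal Python sketch of this scan-then-bisect strategy for the cubic \(2x^3 - 5x - 1 = 0\) (the scan step and tolerance here are arbitrary choices):

Code
import numpy as np

def bisect(f, a, b, eps=1e-6):
    # refine a bracketing interval [a, b] with f(a)*f(b) < 0
    while abs(b - a) > eps:
        x = (a + b) / 2
        if f(a) * f(x) < 0:
            b = x
        else:
            a = x
    return (a + b) / 2

def all_roots(f, a, b, h=0.1, eps=1e-6):
    # scan [a, b] with step h; refine every sign-change interval by bisection
    roots = []
    x = a
    while x < b:
        if f(x) * f(x + h) < 0:
            roots.append(bisect(f, x, x + h, eps))
        x += h
    return roots

f = lambda x: 2 * x**3 - 5 * x - 1
print(all_roots(f, -3.0, 3.0))   # three real roots, near -1.47, -0.20, and 1.67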

Exercise 6 Find all real roots of the equation

\[2x^3 - 5x - 1 = 0\]

Code
x = np.linspace(-3, 3, 100)
ax = plt.axes()
ax.plot(x, 2.0 * x**3.0 - 5.0 * x - 1.0)
ax.set_xlabel(r"$x$")
ax.set_ylabel(r"$y$")
shiftax(ax)   # shiftax: plotting helper defined elsewhere in these notes
plt.show()

Solution 6.

%filename: chap1_example_6_several.m
%利用逐步搜索法结合二分法求解多根问题;
function Program1
%clc;clear;
close all
clear all
clc

x1=[-3:0.001:3];
f=@(x) 2*x.^3-5*x-1;
plot(x1,f(x1),'r',x1,0,'b')
hold on

a=-100;             %左边界
b=100;              %右边界
delta=10^-5;        %误差标准
h=0.1;                %逐步搜索法的步长,仅用于确定有根区间
Nx=0;               %根的个数未知
n=0;
while a<b
    while f(a)*f(a+h)>0   %确定一个有根区间
        n=n+1;
        a=a+h;
        if a>b
           break;
        end
    end
    %
    if a>b
        break;
    end
    Nx=Nx+1;                    %根的个数加1
    x(Nx)=root(f,a,a+h,delta);  %利用二分法求得此区间上的根
    a=x(Nx)+h;                  %进行下一个有根区间的搜索
end
for i=1:Nx
    fprintf('方程第%d个根:%.6f\n',i,x(i));
end
plot(x,0,'o')
hold off
n

function r=root(f,a,b,delta) %利用二分法求根的子函数
while(abs(b-a)>delta)
    r=(a+b)/2;
    if f(a)*f(r)>0
        a=r;
    else
        b=r;
    end
end
r=(a+b)/2;

3.5 Matlab commands for root-finding

3.5.1 fzero

fzero: finds a zero of a single-variable function, using the zeroin algorithm (a combination of bisection, the secant method, and other techniques).

x=fzero(function handle, initial guess or search interval)

f = @(x)sin(x); fzero(f,[3,4])

3.5.2 roots

The command roots(c) finds all zeros of a polynomial, where c is the vector of coefficients of the powers of x.

3.5.3 fsolve

The fsolve command solves systems of nonlinear equations in several variables:

  • x=fsolve(fun, x0)
  • [x, fval]=fsolve(fun, x0)

where x is the zero of the system, x0 is the initial guess, fun is the system of nonlinear equations to be solved, and fval is the function value at the zero.

Exercise 7 \[ \begin{align} &2\cos x + \sqrt{y} - \ln z = 7 \\ &2^x + 2y - 8z = -1 \\ &x + y - \cosh z = 0 \end{align} \]

Solution 7.

First write a function file, e.g. fun.m. Treat \(x, y, z\) in the system as the three components of a vector X; the components of the returned vector \(Y\) then represent

\[\begin{align} &Y(1) = 2\cos x + \sqrt{y} - \ln z - 7 \\ &Y(2) = 2^x + 2y - 8z + 1 \\ &Y(3) = x + y - \cosh z \end{align}\]

Then write a script that supplies an initial guess for the zero and calls the function above through fsolve:

%filename: chap1_example_9_Matlab.m
clear all
clc
x0=[0.5+5i,0.5+5i,0.5+5i];
[x1,y1]=fsolve('fun',x0)
%alternatively, x1=fsolve('fun',x0)
%the system of equations is defined in the file fun.m
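
For comparison, a minimal Python sketch of the same workflow with scipy.optimize.fsolve, using a simple illustrative two-equation system of my own choosing (not the system of Exercise 7):

Code
import numpy as np
from scipy.optimize import fsolve

# Illustrative system: x^2 + y^2 = 1 and y = x^3
def fun(v):
    x, y = v
    return [x**2 + y**2 - 1.0, y - x**3]

sol = fsolve(fun, [0.8, 0.5])   # initial guess is arbitrary
print(sol, fun(sol))            # root near (0.826, 0.564)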




4 Seminar 1: semiclassical quantization of molecular vibrations

function semiclass
clear
clc
close all
format long

tic
EE=zeros(1,11);
%for gamma=20:10:200;   %对于特定分子的gamma值,如H2分子的gamma值为21.7,O2分子的gamma值为150
gamma=21.7;
n=5;        %振动能级数目;
e0= -0.999;   %简单搜索法启动点,LJ势函数(约化后)的最小值为-1
h1=0.0001;
for k=1:n
    E(k)=qiugen(@(e)vh(e, k, gamma), e0, h1)
    %qiugen函数在下面子程序中定义,利用步长0.0001可以得到H2的6个束缚态,如果步长为0.001,则第六个束缚态无法求出,程序一直运行。     
    e0=E(k)+0.01*abs(E(k)); %改变初始猜测值(启动点),用于求下一个振动能级    
    h1=0.01*abs(E(k))
    [temp, jie(k,1), jie(k,2)] = vh(E(k),k, gamma);  
    %获得r_in与r_out用于画图  
end
E2=E*4.747 %4.747eV为实验观测的位势深度 
toc

figure;subplot(2,1,1);
for k=1:n
    fplot(@(x)E2(k), [jie(k,1),jie(k,2)]);   %画振动能级
    hold all
end
fplot(@(x)4*(x.^(-12)-x.^(-6))*4.747, [1 2]);     %画势能函数曲线
hold off


subplot(2,1,2)
for k=1:n
    z=@(x)gamma*sqrt(abs(E(k)-4*(x.^(-12)-x.^(-6)))); 
    %约化后的动量p,画相空间轨迹
    fplot(@(x)[-z(x), z(x)], [jie(k,1) jie(k,2)]);
    hold on
end

%axis([1 2 -100 100]);
function [f, r1,r2]=vh(e, n, gamma)

ve=@(x) e-4*(x.^(-12)-x.^(-6));      
%被积函数的平方,见教材(1.22)式
r1=qiugen(ve, 0.9, 0.01)   
%求r_in和r_out,0.9为简单搜索法启动点,0.001为步长
r2 = qiugen(ve, 1.125, 0.01);
%1.125启动点在势能面最低点处的自变量值2^(1/6)=1.1225附近
vee=@(x)(e-4*(x.^(-12)-x.^(-6))).^(1/2); %被积函数,见(1.22)式
f=(n-0.5)*pi-gamma*jifen(vee, r1, r2);   
%由于物理实际中n从0开始取值,此处从1开始取值,故为n-0.5

function jf=jifen(f, d, u, n) 
%f为被积函数,d,u分别为积分上下限,n为积分区间数
if nargin < 4     %nargin是用来判断输入变量个数的函数 
    n = 10000;     %积分小区间数目
end
h = (u-d)/n;      %积分步长
t = d:h:u;   
ft = f(t);      
jf = (4*sum(ft(2:2:n))+2*sum(ft(3:2:n-1))+ft(1)+ft(n+1))*h/3; 
%复化simpson积分公式


function gen=qiugen(f, x0, h, delta) 
%求根函数,求解x_in和x_out两个点
if nargin < 4        %nargin是用来判断输入变量个数的函数 
    delta = 10^-8; %误差标准
end
while h > delta          
    %简单搜索法求根,可改成二分法,newton-raphson法等
    if f(x0+h)*f(x0)>0
        x0=x0+h;
    else
        h=h/2;
    end
end
%  while h > delta          %简单搜索结合二分法
%     if f(x0+h)*f(x0)>0
%          x0=x0+h;
%     else
%          a=x0;
%          b=x0+h;
%          while abs(a-b)>delta
%             x0=(a+b)/2;     
%             if f(x0)*f(a)<0
%                 b=x0;
%             else
%                 a=x0;
%             end
%          end
%          x0=(a+b)/2;    
%          h=b-a;
%     end 
%  end
gen=x0;
%画约化后的LJ函数
close all
clear all
clc
x=0.9:0.01:3;
v0=1;
a=1;
y=4*v0*(a^12./(x.^12)-a^6./(x.^6));
x1=0.8:0.001:3;
hold all
plot(x,y,'b',x1,0,'r',1,0,'ko') %势能零点在x=1处
clear all
close all
clc
r=0:0.01:6;
v0=4.747;
beta=1;
r_min=0.74166;
v=v0*((1-exp(-beta*(r-r_min))).^2-1);
plot(r,v)

5 Homework

  1. Numerically evaluate the definite integral \(\int_0^1 t^{-2/3}(1-t)^{-1/3}dt\), whose exact value is \(2\pi/\sqrt{3}\). Requirements: compute the integral with both the trapezoidal rule and Simpson's rule, and examine how different values of \(h\) affect the accuracy of the result. Hints: split the integration region into two parts and make a different change of variable in each part to handle the singularities.
  2. Use the bisection method, Newton's iteration, the secant method, and MATLAB's built-in commands to find the root of \[f(x) = (x-1)[\sin(x-1)+3x] - x^3 + 1 = 0\] near 0.95.
  3. For the project "semiclassical quantization of vibrations", complete the following:
    1. Read and understand the program.
    2. (P12, Problem 1.11) Run the program to obtain the energy spectrum, plot the lowest level \(\varepsilon_1\) versus \(\gamma\) (for \(\gamma\) from 20 to 200), and explain the result.
    3. (P12, Problem 1.7) Modify the program to compute the spectrum with a harmonic-oscillator potential and compare with the analytic result.
    4. (P13, Problem 1.12) Modify the program to compute the spectrum with the Morse potential, and compare with and explain the differences from the Lennard-Jones energies.

Programs should carry appropriate comments and be accompanied by a Word document explaining the results in some detail.