Numerical Analysis II
Numerical Differentiation
(Interpolation)
Polynomial Interpolation
\(V(x) = \sum_{j=0}^n C_j \phi_j(x)\)
Given data pairs \((x_i, y_i)\), the interpolation function \(V(x)\) satisfies \(V(x_i) = y_i , \forall_i\)
(\(x_i\) are called abscissae)
Piece-wise Polynomial Interpolation
Cut the interval \([a, b]\) into \(r\) pieces
\(a = t_0 < t_1 < t_2 < ... < t_r = b\)
\(V(x)\)
\(= s_0(x)\) if \(t_0 \leq x \lt t_1\)
\(= s_1(x)\) if \(t_1 \leq x \lt t_2\)
...
\(= s_{r-1}(x)\) if \(t_{r-1} \leq x \leq t_r\)
More resilient to error (such as Runge's phenomenon)
Higher-order polynomials tend to be oscillatory
A small local change in polynomial interpolation -> a large global error
\(C^n\) denotes that \(s\) has \(n\) continuous derivatives; the \((n+1)\)th derivative of \(s_i\) may be discontinuous at the knots
Linear
(\(C^0\))
\(s_i(x) = y_i + f[x_i, x_{i+1}](x-x_i) = y_i + \frac{y_{i+1}-y_i}{x_{i+1} - x_i}(x-x_i)\)
Maxima and minima are given by the data
Error only depends on spacing and \(f''(x)\)
Poor approximation when \(f''(x)\) is large
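A minimal sketch of the piece-wise linear formula above (the function name and argument layout are illustrative; the knots `xs` are assumed sorted ascending):

```python
# Piece-wise linear interpolation: s_i(x) = y_i + f[x_i, x_{i+1}](x - x_i).
from bisect import bisect_right

def linear_interp(xs, ys, x):
    # Find the piece i with xs[i] <= x (clamped so x = xs[-1] uses the last piece).
    i = max(0, min(bisect_right(xs, x) - 1, len(xs) - 2))
    slope = (ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])  # f[x_i, x_{i+1}]
    return ys[i] + slope * (x - xs[i])
```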
Piece-wise Cubic Hermite Interpolating Polynomial
(PCHIP) (\(C^1\))
\(2n\) coefficients can be found:
\(s_i'(x_i) = y_i'\)
\(s_i'(x_{i+1}) = y'_{i+1}\)
Accurate
100% local
Requires knowing \(f'(x)\)
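Given the derivative data \(y'_i\), each PCHIP piece is a cubic Hermite segment. A sketch using the standard Hermite basis in the normalized variable \(t\) (names are illustrative):

```python
def hermite_piece(x0, x1, y0, y1, dy0, dy1, x):
    """Evaluate the cubic Hermite piece on [x0, x1] that matches the
    values y0, y1 and derivatives dy0, dy1 at the endpoints."""
    h = x1 - x0
    t = (x - x0) / h  # map to [0, 1]
    # Standard cubic Hermite basis functions in t.
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*y0 + h10*h*dy0 + h01*y1 + h11*h*dy1
```

For example, with \(y_0=0, y_1=1, y'_0=0, y'_1=2\) on \([0,1]\) (data from \(f(x)=x^2\)) the piece reproduces \(x^2\) exactly.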
Formula
Cubic spline
\(C^2\)
\(2n-2\) coefficients can be found:
\(s'_i(x_{i+1}) = s'_{i+1}(x_{i+1})\)
\(s''_{i}(x_{i+1}) = s''_{i+1}(x_{i+1})\)
2 Boundary Conditions
Natural
\(s''_0(x_0) = 0\)
\(s''_{n-1}(x_n) = 0\)
Large curvature at the edge can lead to large error
Clamped
\(s'_0(x_0) = f'(x_0)\)
\(s'_{n-1}(x_n) = f'(x_n)\)
Need to know \(f'(x)\)
Not-a-knot
\(s'''_0(x_1) = s'''_1(x_1)\)
\(s'''_{n-2}(x_{n-1}) = s'''_{n-1}(x_{n-1})\)
Formula
There are simpler formulas to find \(a_i, b_i, c_i, d_i\)
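One concrete form of those simpler formulas, for the natural boundary condition: solve a tridiagonal system for the second derivatives \(M_i = s''(x_i)\) (Thomas algorithm), then evaluate each piece from the \(M_i\). A sketch with illustrative names:

```python
def natural_cubic_spline(xs, ys):
    """Natural cubic spline (M_0 = M_n = 0) through (xs, ys); returns s(x)."""
    n = len(xs) - 1
    h = [xs[i+1] - xs[i] for i in range(n)]
    # Tridiagonal system for the second derivatives M_0..M_n.
    a = [0.0]*(n+1); b = [0.0]*(n+1); c = [0.0]*(n+1); d = [0.0]*(n+1)
    b[0] = b[n] = 1.0  # natural BCs: M_0 = M_n = 0
    for i in range(1, n):
        a[i] = h[i-1]
        b[i] = 2.0 * (h[i-1] + h[i])
        c[i] = h[i]
        d[i] = 6.0 * ((ys[i+1]-ys[i])/h[i] - (ys[i]-ys[i-1])/h[i-1])
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n+1):
        w = a[i] / b[i-1]
        b[i] -= w * c[i-1]
        d[i] -= w * d[i-1]
    M = [0.0]*(n+1)
    M[n] = d[n] / b[n]
    for i in range(n-1, -1, -1):
        M[i] = (d[i] - c[i]*M[i+1]) / b[i]
    def s(x):
        i = max(0, min(n-1, next((j for j in range(n) if x <= xs[j+1]), n-1)))
        hi = h[i]
        return (M[i]*(xs[i+1]-x)**3/(6*hi) + M[i+1]*(x-xs[i])**3/(6*hi)
                + (ys[i]/hi - M[i]*hi/6)*(xs[i+1]-x)
                + (ys[i+1]/hi - M[i+1]*hi/6)*(x-xs[i]))
    return s
```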
Basis Function \(\phi_j(x)\)
Monomial
\(\phi_n(x) = x^n\)
\(\frac{2}{3}n^3\) flops to solve the linear system
Large n -> Vandermonde matrix -> Ill-conditioned
Not adaptive to new data
Lagrange
\(\phi_n(x) = L_j(x) = \frac{\Pi_{i=0,i \neq j}^n(x-x_i)}{\Pi_{i=0,i \neq j}^n(x_j - x_i)}\)
\(L_j(x_i) = 1\) if \(i = j\), \(0\) if \(i \neq j\)
So, \(y_j = C_j\), then
\(P_n(x) = \sum_{j=0}^n y_j L_j(x)\)
No need to solve the matrix
Evaluating the polynomial is more expensive
Not adaptive to new data
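A direct evaluation of \(P_n(x) = \sum_j y_j L_j(x)\): no matrix is solved, but each evaluation costs \(O(n^2)\) operations, which is why evaluation is the expensive part (a sketch; the function name is illustrative):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate P_n(x) = sum_j y_j L_j(x) from the Lagrange basis."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        Lj = 1.0  # L_j(x) = prod_{i != j} (x - x_i) / (x_j - x_i)
        for i, xi in enumerate(xs):
            if i != j:
                Lj *= (x - xi) / (xj - xi)
        total += yj * Lj
    return total
```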
Newton's
\(\phi_n(x) = \Pi_{i=0}^{n-1}(x-x_i)\)
Relatively efficient polynomial
Desired in error analysis
Adaptive to new data
\(P_{n+1}(x) = P_n(x) + f[x_0, x_1, ..., x_n, x_{n+1}] \Pi_{i=0}^n(x-x_i)\)
Divided Difference (getting \(C_j\))
The top entry of each result column is \(C_j\)
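The divided-difference table and the nested (Horner-like) evaluation of the Newton form can be sketched as (names are illustrative):

```python
def divided_differences(xs, ys):
    """Return the Newton coefficients C_j = f[x_0, ..., x_j]: the top
    entry of each column of the divided-difference table."""
    n = len(xs)
    col = list(ys)              # column of order-0 differences
    coeffs = [col[0]]
    for k in range(1, n):
        col = [(col[i+1] - col[i]) / (xs[i+k] - xs[i]) for i in range(n - k)]
        coeffs.append(col[0])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton form in nested fashion."""
    result = coeffs[-1]
    for c, xi in zip(reversed(coeffs[:-1]), reversed(xs[:len(coeffs)-1])):
        result = result * (x - xi) + c
    return result
```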
Applications
Estimating / Predicting values of the function at other points
Replace the function with a simpler approximation
Produce smooth curve through discrete data
Error Analysis
Result of M.V.T.
\(f[x_0, x_1]\) is a slope, so by the M.V.T. \(\exists \xi \in [x_0, x_1]\) s.t. \(f'(\xi) = f[x_0, x_1]\)
Generalization: \(\frac{f^{(k)}(\xi)}{k!} = f[z_0, z_1, ..., z_k]\)
\(z_0, z_1, ..., z_k, \xi \in [a, b]\)
Error Bounds
Since we do not know the value of \(\xi\), we only can find the bound
\(f(x) - P_n(x) = f[x_0, x_1, ..., x_n, x]\Pi_{i=0}^n(x-x_i) = \frac{f^{(n+1)}(\xi)}{(n+1)!}\Pi_{i=0}^n(x-x_i)\)
So, \(Max|f(x) - P_n(x)| \leq \frac{1}{(n+1)!} Max|f^{(n+1)}(t)| \, Max(\Psi_n(s))\) where \(x, s, t \in [a, b]\) and \(\Psi_n(s) = \Pi_{i=0}^n|s-x_i|\)
Error at \(x^*\)
\(|f(x) - P_n(x)| \leq \frac{1}{(n+1)!} M \Psi_n(x^*)\)
To decrease the error, \(M\) should be small and \(x^*\) should be close to one of the \(x_i\); increasing \(n\) helps only when \(f^{(n+1)}\) grows slowly.
Runge's Phenomenon
The error tends to be large at the ends of the interval
Usually we assume \(f^{(n)}\) grows slowly, but for some functions this is not the case.
Chebyshev Points
The \(x_i\) are clustered toward the edges: \(x_i = cos(\frac{(2i+1)\pi}{2(n+1)})\) on \([-1, 1]\)
Chebyshev Min-Max Property
Let \(\tilde{T}_n(x) = \Pi_{k=1}^n(x - x_k)\)
then \({Max}_{-1 \leq x \leq 1} |\tilde{T}_n(x)| = \frac{1}{2^{n-1}}\)
so \({Max}_{-1 \leq x \leq 1} \Psi_n(x) = {Max}_{-1 \leq x \leq 1} |\tilde{T}_{n+1}(x)| = \frac{1}{2^n}\)
so the \(\frac{1}{2^n}\) factor shrinks faster than \(f^{(n+1)}\) grows for virtually any function
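A quick numerical check of the min-max property: `max_psi` below is a crude grid estimate of \(\max_{[-1,1]} \Psi_n\), and the Chebyshev roots beat equispaced points (function names are illustrative):

```python
import math

def chebyshev_points(n):
    """The n+1 roots of T_{n+1} on [-1, 1], clustered toward the endpoints."""
    return [math.cos((2*i + 1) * math.pi / (2*(n + 1))) for i in range(n + 1)]

def max_psi(nodes, samples=2001):
    """Crude grid estimate of max over [-1, 1] of |prod_i (x - x_i)|."""
    best = 0.0
    for k in range(samples):
        x = -1.0 + 2.0 * k / (samples - 1)
        p = 1.0
        for xi in nodes:
            p *= (x - xi)
        best = max(best, abs(p))
    return best
```

For \(n = 10\), the Chebyshev nodes give roughly \(\frac{1}{2^{10}} \approx 0.00098\), while equispaced nodes give a much larger value near the interval ends.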
Hermite Interpolation
\(f\) and \(m\) of its derivatives agree at \(x_i\)
A unique polynomial of degree \((n+1)(m+1)-1\) (degree \(2n+1\) for \(m=1\))
Data points: \((x_0, y_0), ..., (x_n, y_n)\) and \((x_0, y'_0), ..., (x_n, y'_n)\)
Basis Function \(h_{2n+1}(x)\)
m = 1
Monomial
\(h_j(x) = x^j, j = 0, 1, ..., 2n+1\)
Lagrange
\(h_{2n+1}(x) = \sum_{i=0}^ny_iH_i(x) + \sum_{i=0}^ny'_i\hat{H_i}(x)\)
\(H_i(x) = [1-2(x-x_i)L_i'(x_i)]L_i^2(x)\)
\(\hat{H_i}(x) = (x-x_i)L_i^2(x)\)
Newton
Let \(z_{2i} = z_{2i+1} = x_i\), \(\phi_j = \Pi_{k=0}^{j-1}(x-z_k)\)
Divided Difference
Follows the exact same format as the basic one, but the first divided differences \(f[z_{2i}, z_{2i+1}]\) (which would be \(\frac{0}{0}\)) are replaced with \(y_i'\)
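The doubled-node divided-difference construction (for \(m = 1\)) can be sketched as follows; names are illustrative:

```python
def hermite_newton(xs, ys, dys):
    """Newton-form Hermite interpolation: each node is doubled,
    z_{2i} = z_{2i+1} = x_i, and the first divided differences at
    repeated nodes are replaced by the derivatives y'_i."""
    z, q = [], []
    for x, y in zip(xs, ys):
        z += [x, x]
        q += [y, y]
    n = len(z)
    table = [list(q)]  # column of order-0 differences
    for k in range(1, n):
        prev = table[-1]
        col = []
        for i in range(n - k):
            if z[i + k] == z[i]:          # repeated node: use the derivative
                col.append(dys[i // 2])
            else:
                col.append((prev[i + 1] - prev[i]) / (z[i + k] - z[i]))
        table.append(col)
    coeffs = [table[k][0] for k in range(n)]
    def p(x):
        result = coeffs[-1]
        for c, zi in zip(reversed(coeffs[:-1]), reversed(z[:n - 1])):
            result = result * (x - zi) + c
        return result
    return p
```

With data from \(f(x) = x^3\) (values and derivatives at \(x = 0, 1\)) the degree-3 Hermite interpolant reproduces \(x^3\) exactly.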
Richardson's Extrapolation
Fourier Series
of \([-\pi, \pi]\)
\(\{\phi_k\}_{k=0}^{2l}\) trigonometric basis functions:
\(\phi_0=1\)
\(\phi_{2k-1}=sin(kx)\)
\(\phi_{2k}=cos(kx)\)
we have the Fourier Series:
\(f(x) = \frac{a_0}{2} + \sum_{k=1}^\infty [a_k cos(kx) + b_k sin(kx)]\)
and the Fourier Transform:
the coefficients \(a_0, a_k,\) and \(b_k\)
Good approximation even for discontinuous functions
Basis are orthogonal on \([-\pi, \pi]\)
Provide a way to represent a function in terms of frequency rather than spatial or temporal components.
Continuous
degree \(l\) (cut-off point)
The best approximation is given:
\(v(x) = \frac{a_0}{2} + \sum_{k=1}^{l-1} [a_k cos(kx) + b_k sin(kx)] + a_lcos(lx)\)
\(a_k = \frac{1}{\pi} \int_{-\pi}^{\pi}f(x)cos(kx)dx, k = 0,1, ..., l\)
\(b_k = \frac{1}{\pi} \int_{-\pi}^{\pi}f(x)sin(kx)dx, k = 1, 2, ..., l-1\)
The \(a_l cos(lx)\) term is optional, but gives better agreement with discrete problems
Orthogonality \(\int_{-\pi}^{\pi} \phi_i(x) \phi_j(x) dx = 0, i \neq j \)
Discrete
degree \(l\), where \(x_i = \frac{2\pi i}{m}\) and \(2l < m\)
\(p(x)= \frac{a_0}{2} + \sum_{k=1}^{l-1} [a_k cos(kx) + b_k sin(kx)] + \frac{a_l}{2}cos(lx)\)
\(a_k = \frac{2}{m} \sum_{i=0}^{m-1} y_i cos(kx_i), k = 0, 1, ..., l\)
\(b_k = \frac{2}{m} \sum_{i=0}^{m-1} y_i sin(kx_i), k = 1, 2, ..., l-1\)
Inverse DFT
\(y_i = p(x_i) = \frac{a_0}{2} + \sum_{k=1}^{l-1} [a_k cos(kx_i) + b_k sin(kx_i)] + \frac{a_l}{2}cos(lx_i)\)
One can take the transform of a given function or data in the space of x, manipulate the coefficients meaningfully in frequency space, and then transform back to the original space
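The coefficient and inverse-DFT formulas above, as a direct (non-FFT) sketch with illustrative names:

```python
import math

def dft_coeffs(ys, l):
    """Discrete Fourier coefficients a_k, b_k for data y_i at x_i = 2*pi*i/m
    (requires 2*l < m), following the sums above."""
    m = len(ys)
    xs = [2 * math.pi * i / m for i in range(m)]
    a = [(2/m) * sum(y * math.cos(k * x) for x, y in zip(xs, ys))
         for k in range(l + 1)]
    b = [0.0] + [(2/m) * sum(y * math.sin(k * x) for x, y in zip(xs, ys))
                 for k in range(1, l)]
    return a, b

def idft_eval(a, b, l, x):
    """Inverse DFT: evaluate p(x) from the coefficients."""
    val = a[0] / 2 + a[l] / 2 * math.cos(l * x)
    for k in range(1, l):
        val += a[k] * math.cos(k * x) + b[k] * math.sin(k * x)
    return val
```

For example, with \(y_i = cos(x_i)\) and \(m = 8, l = 2\), the transform recovers \(a_1 = 1\) and all other coefficients 0, so \(p(x) = cos(x)\).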
Function Norm [a, b]
1-Norm
\(||g||_1 = \int_a^b|g(x)| dx\)
2-Norm
\(||g||_2 = (\int_a^b g(x)^2 dx)^\frac{1}{2}\)
Inf-Norm
\(||g||_\infty = {max}_{a \leq x \leq b} |g(x)|\)
Best Least-Square Approximation
Minimizer
Find \(v(x)\) closest to \(f(x)\) in L2-norm
\({min} \frac{1}{2} || f(x) - v(x) ||_2^2\)
\(= {min} \frac{1}{2} \int_a^b (f(x) - v(x))^2 dx\)
\(= {min}_{\overrightarrow{c}} \frac{1}{2} \int_a^b (f(x) - \sum_{j=0}^nc_j\phi_j(x))^2 dx\)
\(\frac{\partial}{\partial c_k} \psi(\overrightarrow{c})\) (by the Leibniz integral rule)
\(=\int_a^b(f(x) - \sum_{j=0}^nc_j\phi_j(x))(-\phi_k(x)) dx\)
\(=\int_a^b[\sum_{j=0}^nc_j\phi_j(x)\phi_k(x) - f(x)\phi_k(x)] dx\)
And we want \(\frac{\partial}{\partial c_k} \psi(\overrightarrow{c}) = 0\) for \(k = 0, 1, ..., n\)
=>
\(\sum_{j=0}^nc_j \int_a^b\phi_j(x)\phi_k(x) dx = \int_a^b f(x)\phi_k(x) dx\)
\(\sum_{j=0}^n \lt \phi_j, \phi_k \gt c_j = \lt f, \phi_k \gt \)
Basis
(\(\phi_j\))
Monomial
\(\phi_j(x) = x^j, j \in {0, 1, 2, ..., n}\)
\(\tilde{B}\) becomes poorly conditioned as \(n\) becomes large.
Not adaptive as \(n\) increases (have to recompute \(\tilde{B}\))
Orthogonal
satisfies \(\lt \phi_j, \phi_k \gt = 0\)
No need to solve the system
Numerically stable
Adaptive to change of n
Legendre Polynomial
We may derive an orthogonal basis by applying the Gram-Schmidt process to the monomial basis. This produces the Legendre polynomials on \([-1, 1]\)
Scaling/Shifting
\(t = \frac{b-a}{2}x + \frac{a+b}{2}\)
\(x = \frac{2}{b-a} [t - \frac{a+b}{2}]\)
\(\phi_j(x) = \phi_j(x(t))\)
=>
\(\phi_j(x) = \tilde{\phi_j}(t)\)
Let \(x = \frac{2}{b-a} [t - \frac{a+b}{2}] = \frac{2t-a-b}{b-a}\)
\(dx = \frac{2}{b-a} dt\)
\(\frac{b-a}{2} dx = dt\)
\(d_j = \lt \tilde{\phi_j}, \tilde{\phi_j} \gt \)
\(= \int_a^b \tilde{\phi_j}(t)^2 dt\)
\(= \frac{b-a}{2} \int_{-1}^1 \phi_j^2(x) dx\)
\(= \frac{b-a}{2} \frac{2}{2j+1}\)
\(= \frac{b-a}{2j+1}\)
ex) \(f(x)=cos(\frac{x}{2}) \in [-\pi, \pi]\)
\(t = \pi x, x = \frac{t}{\pi}\)
\(\phi_0 = 1\)
\(\phi_1 = \frac{x}{\pi}\)
\(\phi_2 = \frac{1}{2}(3\frac{x^2}{\pi^2} - 1)\)
...
\(\tilde{b}_0 = \int_{-\pi}^{\pi} 1*cos(\frac{x}{2}) dx = 4\)
\(\tilde{b}_1 = \int_{-\pi}^{\pi} \frac{x}{\pi}*cos(\frac{x}{2}) dx = 0\)
\(\tilde{b}_2 = \frac{3}{2\pi^2}\int_{-\pi}^{\pi}x^2cos(\frac{x}{2}) dx - \frac{1}{2}\int_{-\pi}^{\pi}cos(\frac{x}{2}) dx\)
...
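This example can be checked numerically: the sketch below computes the best-approximation coefficients \(c_j = \tilde{b}_j / d_j\) with \(d_j = \frac{b-a}{2j+1}\), approximating the inner-product integrals by a composite Simpson rule (all names are illustrative):

```python
import math

# Best L2 approximation of f(x) = cos(x/2) on [-pi, pi] in the
# Legendre basis shifted via x -> x/pi.

def simpson(g, a, b, n=2000):
    """Composite Simpson rule (n even) for the inner-product integrals."""
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2*i - 1) * h) for i in range(1, n//2 + 1))
    s += 2 * sum(g(a + 2*i*h) for i in range(1, n//2))
    return s * h / 3

a, b = -math.pi, math.pi
f = lambda x: math.cos(x / 2)
phis = [lambda x: 1.0,                                  # phi_0
        lambda x: x / math.pi,                          # phi_1
        lambda x: 0.5 * (3 * (x/math.pi)**2 - 1)]       # phi_2
# c_j = <f, phi_j> / d_j,  d_j = (b - a)/(2j + 1)
cs = [simpson(lambda x, p=p: f(x) * p(x), a, b) / ((b - a) / (2*j + 1))
      for j, p in enumerate(phis)]
```

Consistent with the hand computation above, \(\tilde{b}_0 = 4\) gives \(c_0 = \frac{4}{2\pi} = \frac{2}{\pi}\), and \(c_1 = 0\) by odd symmetry.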
Numerical Integration
(Quadrature)
General Form:
\(I_f = \int_a^b f(x) dx = \sum_{i=0}^n w_i f(x_i)\)
\(w_i\) is weight
\(x_i\) is abscissae
Basic Quadratures
Range \([a, b]\)
Midpoint
(Constant)
\(x_0 = \frac{a+b}{2} \)
\(I_{mid} = \int_a^b f(\frac{a+b}{2}) dx = (b-a)f(\frac{a+b}{2})\)
Simpson's
(Quadratic)
\(x_0 = a, x_1 = \frac{a+b}{2}, x_2 = b,\) and \(L_n(x)\) (Lagrange)
or \(x_0 = a, x_1 = a+h, x_2 = a+2h, h=\frac{b-a}{2}\)
\(f(x) = f(a)L_0(x) + f(\frac{a+b}{2})L_1(x) + f(b)L_2(x)\)
\(I_f = \int_a^b f(x) dx\)
\(\approx f(a)\int_a^b \frac{(x-a-h)(x-b)}{2h^2} dx + f(\frac{a+b}{2})\int_a^b \frac{(x-a)(x-b)}{-h^2} dx + f(b)\int_a^b \frac{(x-a)(x-a-h)}{2h^2} dx\)
\(I_{simp} = \frac{h}{3}[f(a) + 4f(\frac{a+b}{2}) + f(b)]\)
Trapezoid
(Linear)
\(x_0 = a, x_1 = b\), and \(L_n(x)\) (Lagrange)
\(f(x) = f(a)L_0(x) + f(b)L_1(x)\)
\(I_f = \int_a^b f(x) dx \approx f(a) \int_a^b \frac{x-b}{a-b}dx + f(b)\int_a^b \frac{x-a}{b-a} dx\)
\(I_{trap} = \frac{b-a}{2} [f(a) + f(b)]\)
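The three basic rules, as a direct transcription of the formulas above (function names are illustrative):

```python
def midpoint_rule(f, a, b):
    """I_mid = (b - a) f((a + b)/2)"""
    return (b - a) * f((a + b) / 2)

def trapezoid_rule(f, a, b):
    """I_trap = (b - a)/2 [f(a) + f(b)]"""
    return (b - a) / 2 * (f(a) + f(b))

def simpson_rule(f, a, b):
    """I_simp = h/3 [f(a) + 4 f((a+b)/2) + f(b)],  h = (b - a)/2"""
    h = (b - a) / 2
    return h / 3 * (f(a) + 4 * f((a + b) / 2) + f(b))
```

For \(f(x) = x^2\) on \([0, 1]\) (exact value \(\frac{1}{3}\)): midpoint gives \(0.25\) (underestimate, \(f'' > 0\)), trapezoid gives \(0.5\) (overestimate), and Simpson's is exact, matching the error analysis below.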
Error Analysis
\(h = b-a\)
Relationship to interpolation
\(\int [f(x) - P_n(x)] dx = \int f[x_0, x_1, ..., x_n, x] \Pi_{i=0}^n(x-x_i) dx\)
So, \(I_f - \int_a^b P_n(x) dx = \int_a^b f[x_0, x_1, ..., x_n, x] \Pi_{i=0}^n(x-x_i) dx\)
Trapezoid
\(E_{trap}(f) = \int_a^b f[a,b,x] (x-a)(x-b) dx\)
\(= f[a,b,\xi] \int_a^b (x-a)(x-b) dx\)
\(=f[a,b,\xi] \frac{-h^3}{6}\)
\(=\frac{f''(\eta)}{2} \frac{-h^3}{6}, \eta \in (a, b)\)
\(O(h^3)\)
if \(f''(x) > 0\), overestimated
M.V.T for integrals
\(\int_a^b f(x) dx = f(\xi)(b-a), \xi \in (a, b)\)
Weighted M.V.T
\(\int_a^b f(x)g(x) dx = f(\xi) \int_a^b g(x) dx\)
if \(g(x)\) doesn't change sign
Midpoint
\(E_{mid} = \frac{f''(\xi)}{24}(b-a)^3\)
\(O(h^3)\)
Use Taylor approximation for f(x)
if \(f''(x) > 0\), underestimated
Simpson's
\(E_{simp}(f) = \frac{-f^{(4)}(\eta)}{2880}(b-a)^5, \eta \in (a, b)\)
\(O(h^5)\)
Composite Quadratures
Range \([a, b]\)
Romberg's Method/Richardson's Extrapolation
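Romberg's method can be sketched as repeated Richardson extrapolation of composite trapezoid estimates with halved spacing (a minimal illustration; `romberg` is an assumed name):

```python
def romberg(f, a, b, levels=5):
    """R[k][0] is the composite trapezoid estimate with h = (b-a)/2^k;
    each Richardson step R[k][j] cancels the next power of h^2 in the error."""
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = h / 2 * (f(a) + f(b))
    for k in range(1, levels):
        h /= 2
        # Halve the spacing: reuse the previous sum, add only the new midpoints.
        new = sum(f(a + (2*i - 1) * h) for i in range(1, 2**(k - 1) + 1))
        R[k][0] = R[k - 1][0] / 2 + h * new
        for j in range(1, k + 1):
            R[k][j] = R[k][j-1] + (R[k][j-1] - R[k-1][j-1]) / (4**j - 1)
    return R[levels - 1][levels - 1]
```

For a smooth integrand such as \(\int_0^1 e^x dx = e - 1\), a handful of levels already gives near machine-precision accuracy.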
Inner Product <., .>
\(\lt f, g \gt = \int_a^b f(x)g(x) dx\)
\(\lt g, g \gt = \int_a^b g(x)^2 dx = ||g||_2^2\)
Cubic Polynomial
\(s_i(x) = a_i + b_i(x-x_i) + c_i(x-x_i)^2 + d_i(x-x_i)^3\)
this produces \(4n\) unknown coefficients
First \(2n\) coefficients can be found:
\(s_i(x_i) = y_i\)
\(s_i(x_{i+1}) = y_{i+1}\)