Differential equations 2 lectures

 

1 Second-order linear boundary value problems

We define a differential operator \(\mathcal{L}\) by

\[\mathcal{L}y (x)=P_2 (x) y'' (x) + P_1 (x) y' (x) + P_0 (x) y (x)\]

where \('\) denotes \(\frac{\mathrm{d}}{\mathrm{d}x}\), and \(P_2 (x) \neq 0\).

We wish to solve

\[\mathcal{L}y (x)=f (x) \tag{N}\]

for a given forcing function \(f\) on \(a < x < b\), with boundary conditions

\[B_1 (y)=0, \quad B_2 (y)=0 \tag{BC}\]

where \(B_1, B_2\) are boundary operators (linear combinations of \(y\) and \(y'\) evaluated at the boundaries):

\[\begin{array}{l}B_1 y=α_{11}y (a) + α_{12}y' (a) + β_{11}y (b) + β_{12}y' (b)\\ B_2 y=α_{21}y (a) + α_{22}y' (a) + β_{21}y (b) + β_{22}y' (b)\end{array}\]

Remark 1. If \(f \neq 0\), the problem is inhomogeneous.

The associated homogeneous problem is

\[\mathcal{L}y (x)=0 \tag{H}\]

We'll also sometimes have inhomogeneous boundary conditions \(B_1 y=γ_1, B_2 y=γ_2\).

\(\mathcal{L}\) is linear, in the sense that \(\mathcal{L}(α_1 y_1 + α_2 y_2)=α_1 \mathcal{L}y_1 + α_2\mathcal{L}y_2\).

A general solution to (N) is \(y (x)=y_{\text{P}}(x) + y_{\text{C}}(x)\) where \(y_{\text{P}}\) (particular integral) is any solution of (N) and \(y_{\text{C}}\) (complementary function) is the general solution of (H).

If the boundary conditions depend only on \(y (a), y' (a)\), this is an initial value problem (IVP). An IVP has a unique solution (by Picard's theorem), at least locally near \(x=a\), provided \(f\) is reasonably well-behaved.


1 Methods for solving the homogeneous problem

1.1 Constant coefficients

Try \(y (x) = e^{Mx}⇒M\) satisfies \(P_2 M^2 + P_1 M + P_0 = 0\) (if \(M\) is a repeated root, the second solution is \(y = xe^{Mx}\))

1.2 Cauchy-Euler equation

\[αx^2 \frac{\mathrm{d}^2 y}{\mathrm{d} x^2} + \beta x \frac{\mathrm{d}y}{\mathrm{d} x} + γy = 0\]

Try \(y (x) = x^m⇒m\) satisfies \(αm (m - 1) + \beta m + γ= 0\) (if \(m\) is a repeated root, the second solution is \(y (x) = x^m \ln x\))

1.3 Reduction of order

If we can find one solution \(y_1 (x)\) we can find another by writing it as \(y_2 (x) = y_1 (x) v (x)\)

Substituting into the equation \(ℒy_2 = 0\) and simplifying using the fact that \(y_1\) is a solution gives

\[P_2 y_1 v^{\prime \prime} + (2 P_2 y_1' + P_1 y_1) v' = 0\]

which is a separable first-order ODE for \(v'\). One further integration then gives \(v\) and thus the second solution \(y_2 (x) = v (x) y_1 (x)\).

1.4 Variation of parameters

To solve \(ℒy = f\) for a general function \(f\), suppose \(y (x) = c_1 y_1 (x) + c_2 y_2 (x)\) with linearly independent \(y_1, y_2\) is the general solution of the homogeneous problem.

We seek a solution as \(y (x) = c_1 (x) y_1 (x) + c_2 (x) y_2 (x)\) and suppose that

\[c_1' y_1 + c_2' y_2 = 0 \tag{1}\]

Then \(y' = \cancel{c_1' y_1} + c_1 y_1' + \cancel{c_2' y_2} + c_2 y_2'\)

\(y'' = c_1' y_1' + c_1 y_1'' + c_2' y_2' + c_2 y_2''\)

So

\[ℒy = P_2 (c_1' y_1' + c_2' y_2') + c_1 (\cancel{P_2 y_1'' + P_1 y_1' + P_0 y_1}) + c_2 (\cancel{P_2 y_2'' + P_1 y_2' + P_0 y_2}) = f\]

So

\[c_1' y_1' + c_2' y_2' = \frac{f}{P_2} \tag{2}\]

(1)&(2) together are

\[\left( \begin{array}{ll} y_1 & y_2\\ y_1' & y_2' \end{array} \right) \left( \begin{array}{c} c_1'\\ c_2' \end{array} \right) = \left( \begin{array}{c} 0\\ f / P_2 \end{array} \right)\]

Invert to find

\[\left( \begin{array}{c} c_1'\\ c_2' \end{array} \right) = \frac{1}{W} \left( \begin{array}{cc} y_2' & - y_2\\ - y_1' & y_1 \end{array} \right) \left( \begin{array}{c} 0\\ f / P_2 \end{array} \right) = \frac{f}{P_2 W} \left( \begin{array}{c} - y_2\\ y_1 \end{array} \right)\]

Integrating

\[c_1 (x) = - \int^x \frac{f (ξ) y_2 (ξ)}{P_2 (ξ) W (ξ)} \,\mathrm{d} ξ, \quad c_2 (x) = \int^x \frac{f (ξ) y_1 (ξ)}{P_2 (ξ) W (ξ)} \,\mathrm{d} ξ \tag{3}\]

Then \(y (x) = c_1 (x) y_1 (x) + c_2 (x) y_2 (x)\) is a particular solution to (N).

Example. \(y'' + y = \tan x\)

First note that two homogeneous solutions for \(y'' + y = 0\) are \(y_1 (x) = \cos x, y_2 (x) = \sin (x)\)

Wronskian \(W (x) = y_1 y_2' - y_2 y_1' = \cos^2 x + \sin^2 x = 1 \neq 0\)

From (3),

\begin{align*} c_1 (x) & = - \int \tan x \sin x \,\mathrm{d} x = \sin x - \log (\sec x + \tan x)\\ c_2 (x) & = \int \tan x \cos x \,\mathrm{d} x = - \cos x \end{align*}

Thus a particular integral is

\[y (x) = c_1 (x) y_1 (x) + c_2 (x) y_2 (x) = - \cos (x) \log (\sec x + \tan x)\]
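As a quick sanity check, this particular integral can be verified symbolically. A minimal sketch, assuming sympy is available (not part of the lectures):

```python
# Verify that y_P = -cos(x) log(sec x + tan x) satisfies y'' + y = tan x.
import sympy as sp

x = sp.symbols('x')
yP = -sp.cos(x) * sp.log(sp.sec(x) + sp.tan(x))
residual = sp.diff(yP, x, 2) + yP - sp.tan(x)
print(sp.simplify(residual))  # should print 0
```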

2 Incorporating boundary conditions

We can try to incorporate homogeneous boundary conditions directly into this method, by choosing \(y_1 (x)\) and \(y_2 (x)\) to satisfy one each of the boundary conditions.

\(ℒy = f (x)\) on \(a<x<b, \quad y (a) = 0, y (b) = 0\)

Let \(y_1 (x)\) satisfy \(ℒy_1 = 0\) with \(y_1 (a) = 0\) and \(y_2 (x)\) satisfy \(ℒy_2 = 0\) with \(y_2 (b) = 0\)

Then for \(y (x) = c_1 (x) y_1 (x) + c_2 (x) y_2 (x)\) to satisfy the boundary conditions, we need \(c_2 (a) = 0\) and \(c_1 (b) = 0\); this can be achieved by choosing the limits of integration in (3):

\[c_1 (x) = \int_x^b \frac{f (ξ) y_2 (ξ)}{P_2 (ξ) W (ξ)} \text{d} ξ, \quad c_2 (x) = \int^x_a \frac{f (ξ) y_1 (ξ)}{P_2 (ξ) W (ξ)} \text{d} ξ\]

The solution is then

\[y (x) = \int_x^b \frac{f (ξ) y_2 (ξ) y_1 (x)}{P_2 (ξ) W (ξ)} \text{d} ξ + \int^x_a \frac{f (ξ) y_1 (ξ) y_2 (x)}{P_2 (ξ) W (ξ)} \text{d} ξ\]

Write as a single integral

\[y (x) = \int_a^b g (x, ξ) f (ξ) \mathrm{d} ξ\]

where \(g (x, ξ) = \begin{cases}\frac{y_1 (ξ) y_2 (x)}{P_2 (ξ) W (ξ)} & a < ξ < x < b\\ \frac{y_1 (x) y_2 (ξ)}{P_2 (ξ) W (ξ)} & a < x < ξ < b\end{cases}\) is called the Green's function.

Example. Find a formula for the solution to

\[y'' + y = f (x), \quad 0<x<\fracπ2, \quad y (0) = 0, \; y \left( \fracπ2\right) = 0\]

As linearly independent solutions of \(y'' + y = 0\), take

\[y_1 (x) = \sin x, \quad y_2 (x) = \cos x\]

to satisfy the boundary conditions.

The formula is \(y (x) = \int_0^{\fracπ{2}} g (x, ξ) f (ξ) \mathrm{d} ξ\) [\(W = - \sin^2 x - \cos^2 x = - 1\)]

where \(g (x, ξ) = \left\{ \begin{array}{ll} - \sin ξ \cos x & ξ < x\\ - \sin x \cos ξ & x < ξ \end{array} \right.\)

It is not always possible to find \(y_1, y_2\) to satisfy each of the boundary conditions:

Example (nonexistence/nonuniqueness of solutions). If we try the same example but replace \(y \left( \fracπ2\right) = 0\) with \(y' \left( \fracπ2\right) = 0\), we run into a problem.

\(y_1 (x) = \sin x\) automatically satisfies both boundary conditions.

The method just described fails.

Variation of parameters still gives a particular integral \(y_{\text{P}} (x) = c_1 (x) \sin x + c_2 (x) \cos x\) from (3)

\[c_1 (x) = - \int_x^{\fracπ{2}} \cos ξ\, f (ξ) \,\mathrm{d} ξ, \quad c_2 (x) = - \int^x_0 \sin ξ\, f (ξ) \,\mathrm{d} ξ\]

(Choosing to make \(c_2 (0) = c_1 \left( \fracπ2\right) = 0\))

The general solution is \(y (x) = c_1 (x) \sin x + c_2 (x) \cos x + A \sin x + B \cos x\)

Boundary conditions: \(y (0) = 0⇒0 = B\), and \(y' \left( \fracπ2\right) = 0⇒0 = - c_2 \left( \fracπ2\right) - B\). So we need \(c_2 \left( \fracπ2\right) = 0\), i.e. we need

\[\int_0^{\fracπ{2}} \sin ξ f (ξ) \mathrm{d} ξ = 0\]

This is a solvability condition.


2 The Fredholm Alternative

We have seen that \(ℒy = f\) & (BC) may or may not have a unique solution.

2.2 A linear algebra analogy

Consider solving the problems

\[A\boldsymbol{x}=\boldsymbol{0} \tag{H}\]

and

\[A\boldsymbol{x}=\boldsymbol{b} \tag{N}\]

where \(A\) is an \(n \times n\) matrix.

  • If \(A\) is invertible, then (H) has only the trivial solution \(\boldsymbol{x}=\boldsymbol{0}\), and (N) has unique solution \(\boldsymbol{x}= A^{- 1} \boldsymbol{b}\).

  • If (H) has non-trivial solution \(\boldsymbol{x}=\boldsymbol{x}_1 \neq 0\), then \(A\) is not invertible (singular, \(\det A = 0\))

    A solution for (N) will exist if and only if \(\boldsymbol{b} \in \operatorname{im}A\) (column space of \(A\))

    If it does exist, the solution is not unique since we can add a multiple of \(\boldsymbol{x}_1\)

  • To decide whether \(\boldsymbol{b}\) is in the image of \(A\), consider the adjoint transformation \(A^*\) (the transpose matrix), which satisfies the inner product relation

    \[⟨A\boldsymbol{x}, \boldsymbol{w}⟩ = ⟨\boldsymbol{x}, A^*\boldsymbol{w}⟩ \text{ for all } \boldsymbol{x}, \boldsymbol{w}\]

    Solvability condition: If \(\boldsymbol{b}= A\boldsymbol{x}\), then \(⟨\boldsymbol{b}, \boldsymbol{w}⟩ = ⟨\boldsymbol{x}, A^*\boldsymbol{w}⟩ = 0\) for any \(\boldsymbol{w}\) satisfying \(A^*\boldsymbol{w}=\boldsymbol{0}\).

    This is a necessary and sufficient condition for \(\boldsymbol{b}\) to be in the image of \(A\) [\(\operatorname{im} A = (\ker A^*)^{\bot}\)].

Theorem (Fredholm alternative for \(\mathbb{R}^n\)). One of the following must be true:

  1. The homogeneous equation \(A\boldsymbol{x}= 0\) has only the trivial solution \(\boldsymbol{x}=\boldsymbol{0}\). Then \(A\boldsymbol{x}=\boldsymbol{b}\) has a unique solution.

  2. \(A\boldsymbol{x}=\boldsymbol{0}\) has a nontrivial solution, and so does \(A^*\boldsymbol{w}=\boldsymbol{0}\). In this case,

    either (a) \(⟨\boldsymbol{b}, \boldsymbol{w}⟩ = 0\) for all \(\boldsymbol{w}\) satisfying \(A^*\boldsymbol{w}=\boldsymbol{0}\). Then \(A\boldsymbol{x}=\boldsymbol{b}\) has a solution, but it is not unique.

    or (b) \(⟨\boldsymbol{b}, \boldsymbol{w}⟩ \neq 0\) for some \(\boldsymbol{w}\) satisfying \(A^*\boldsymbol{w}=\boldsymbol{0}\). Then \(A\boldsymbol{x}=\boldsymbol{b}\) does not have a solution.
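A small numerical illustration of the theorem, assuming numpy (the matrix and vectors are arbitrary test data):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])     # singular: second row is twice the first
w = np.array([2.0, -1.0])      # spans ker(A^T), since A.T @ w = 0

b1 = A @ np.array([1.0, 0.0])  # in im(A) by construction
b2 = np.array([1.0, 0.0])      # not in im(A)
# Case 2(a): <b1, w> = 0, so Ax = b1 is solvable (non-uniquely);
# case 2(b): <b2, w> != 0, so Ax = b2 has no solution.
print(w @ b1, w @ b2)          # 0.0, 2.0
```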

To apply the ideas to differential equations, we must define the adjoint operator.

2.3 Adjoint operator and boundary conditions

The inner product of two functions \(u (x), v (x)\) defined on \((a, b)\) is

\[⟨u, v⟩ = \int_a^b u (x) \overline{v (x)} \,\mathrm{d} x\]

where the bar denotes the complex conjugate.

Given a differential operator \(ℒ\) we define the adjoint \(ℒ^*\) by the relation

\[⟨ℒy, w⟩ = ⟨y, ℒ^*w⟩ \text{ for all } y, w\text{ that satisfy suitable boundary conditions}\]

For example, if \(ℒy = y''\) for \(a<x<b\),

\begin{align*}⟨ℒy, w⟩&=\int_a^b y'' w \mathrm{d} x\\&=[y' w]_a^b - \int_a^b y' w' \mathrm{d} x\\&=\underbrace{[y' w - yw']_a^b}_☹ + \underbrace{\int_a^b yw'' \mathrm{d} x}_{⟨y, ℒ^*w⟩}\end{align*}

We read off that \(ℒ^*w = w''\).

So this operator is self-adjoint.

The adjoint boundary conditions (BC)* are the conditions that \(w\) must satisfy so that the boundary terms ☹ vanish for any \(y\) that satisfies the homogeneous boundary conditions (BC).

Example. If we had \(y (a) = y (b) = 0\), then ☹\( = y' (b) w (b) -\cancel{y (b) w' (b)} - y' (a) w (a) + \cancel{y (a) w' (a)}\)

So we read off that we need \(w (a) = 0, w (b) = 0\).

In this case, boundary conditions are self-adjoint.

Alternatively, if \(y' (a) = 0, 3 y (a) - y (b) = 0\),

\begin{align*}☹&=y' (b) w (b) - y (b) w' (b) - y' (a) w (a) + y (a) w' (a)\\&=y' (b) w (b) - y (a) (3 w' (b) - w' (a))\end{align*}

So in this case we need \(w (b) = 0, 3 w' (b) - w' (a) = 0\).

These boundary conditions are not self-adjoint

If operator \(ℒ\) and boundary conditions (BC) are both self-adjoint, the problem is fully self-adjoint. If \(ℒ\) is self-adjoint but not (BC), it is formally self-adjoint.

For a general operator \(ℒy = P_2 y'' + P_1 y' + P_0 y\) and for general homogeneous boundary conditions \(B_1 y = 0, B_2 y = 0\) where \(B_1, B_2\) are linear combinations of \(y (a), y' (a), y (b), y' (b)\) we follow the same procedure

\begin{align*}⟨ℒy, w⟩&=\int_a^b (P_2 y'' + P_1 y' + P_0 y) w \,\mathrm{d} x\\&=\underbrace{[y' P_2 w - y (P_2 w)' + yP_1 w]_a^b}_☹+\underbrace{\int_a^by ((P_2 w)'' - (P_1 w)' + P_0 w) \,\mathrm{d} x}_{⟨y, ℒ^* w⟩}\end{align*}

We read off that

\[ℒ^* w = (P_2 w)'' - (P_1 w)' + P_0 w\]

The boundary terms–a linear combination of \(y (a), y' (a), y (b), y' (b)\)–can be written as

\[☹ = (B_1 y) (K_1^*w) + (B_2 y) (K_2^*w) + (K_1 y) (B_1^*w) + (K_2 y) (B_2^*w)\]

for some boundary operators \(K_1, K_2, K_1^*, K_2^*, B_1^*, B_2^*\). So we read off the homogeneous adjoint boundary conditions

\[B_1^*w = 0, \quad B_2^*w = 0.\]

Fredholm Alternative (ODE version) (FAT)

Theorem. Consider the inhomogeneous problem \(ℒy = f (x)\) with homogeneous boundary conditions \(B_1 y = 0, B_2 y = 0\).

Recall we can define an adjoint operator \(ℒ^*\) and homogeneous adjoint boundary conditions \(B_1^* w = 0, B_2^* w = 0\).

One of the following is true:

  • \(ℒy = 0\) with (BC) has only the trivial solution \(y = 0\). Then \(ℒy = f\) with (BC) has a unique solution.

  • \(ℒy = 0\) with (BC) has a non-trivial solution. Then so does \(ℒ^* w = 0\) with (BC)*.

    • \(⟨f, w⟩ = 0\) for all \(w\) satisfying \(ℒ^* w = 0\) with (BC)*. Then \(ℒy = f\) with (BC) has a non-unique solution.

    • \(⟨f, w⟩ \neq 0\) for some \(w\) satisfying \(ℒ^* w = 0\) with (BC)*. Then \(ℒy = f\) with (BC) has no solution.

Example. \(ℒy = y'' + y = f (x)\) for \(0<x<\fracπ2\) with \(y (0)=0, y' \left( \fracπ2 \right) = 0\)

In lecture 2 we tried to solve with variation of parameters.

Conditions on \(f\) for this to have a solution?

First find the adjoint problem.

\[⟨ℒy, w⟩ = \int_0^{\fracπ2} (y'' + y) w \mathrm{d} x = [y' w - yw']_0^{\fracπ2} + \int_0^{\fracπ2} y (w'' + w) \mathrm{d} x ≕ ⟨y, ℒ^* w⟩\]

\(ℒ^* w = w'' + w\), and the adjoint boundary conditions are \(w (0) = 0, w' \left( \fracπ2 \right) = 0\)

The adjoint problem \(ℒ^* w = 0\) has a non-trivial solution \(w (x) = \sin x\), so there will be a solution provided

\[⟨f, \sin x⟩ = \int_0^{\fracπ2} f (x) \sin x \mathrm{d} x = 0\]

which we found last week by constructing a solution and making it satisfy boundary conditions.

1 Inhomogeneous boundary conditions and FAT

The solvability condition arises from the fact that if there is to be a solution \(y (x)\) such that \(ℒy = f\) then \(⟨f, w⟩ = ⟨ℒy, w⟩ = ⟨y, ℒ^* w⟩ = 0\) for any \(w\) satisfying \(ℒ^* w = 0\)

The step \(⟨ℒy, w⟩ = ⟨y, ℒ^* w⟩\) is not true if \(y\) satisfies inhomogeneous boundary conditions (because of the boundary terms when integrating by parts).

Eg. if we have \(B_1 y = γ_1, B_2 y = γ_2\), then

\[⟨ℒy, w⟩ = ⟨y, ℒ^* w⟩ + \underbrace{(B_1 y)}_{γ_1} (K_1^* w) + \underbrace{(B_2 y)}_{γ_2} (K_2^* w) + (K_1 y) (B_1^* w) + (K_2 y) (B_2^* w)\]

from last lecture. If \(w\) satisfies \(B_1^* w = B_2^* w = 0\), then we have

\[⟨f, w⟩ = ⟨ℒy, w⟩ = ⟨y, ℒ^* w⟩ + γ_1 (K_1^* w) + γ_2 (K_2^* w)\]

So, for \(w\) also satisfying \(ℒ^* w = 0\), \(⟨f, w⟩ = γ_1 (K_1^* w) + γ_2 (K_2^* w)\).

Solvability condition for inhomogeneous problem

Example. \(ℒy = y'' + y = f (x), 0<x<\fracπ2\) with \(y (0) = 1, y' \left( \fracπ2 \right) = 0\). What condition must \(f\) satisfy?

As earlier, to find adjoint problem \(w'' + w = 0, w (0) = 0, w' \left( \fracπ2 \right) = 0\) which has solution \(w (x) = \sin x\).

For solvability condition, consider

\begin{align*} ⟨f, w⟩ &=\int_0^{\fracπ2} (y'' + y) w \mathrm{d} x\\ &=\int_0^{\fracπ2} y (w'' + w) \mathrm{d} x \underbrace{+ y' \left( \fracπ2 \right) w \left( \fracπ2 \right) - y \left( \fracπ2 \right) w' \left( \fracπ2 \right) - y' (0) w (0)}_{= 0} + y (0) w' (0)\\ \int_0^{\fracπ2} f (x) \sin x \mathrm{d} x &=1 \end{align*}

Example. \(ℒy = y'' + y' = f (x), 0<x<1\) with \(y' (0) = 0, y' (1) = α\)

For what values of \(α\) does this have a solution for a given \(f\)?

Find the adjoint problem, using the homogeneous versions of the boundary conditions, \(y' (0) = 0, y' (1) = 0\):

\begin{align*} ⟨ℒy, w⟩ &=\int_0^1 (y'' + y') w \mathrm{d} x\\ &=[y' w + yw]_0^1 + \int_0^1 - y' w' - yw' \mathrm{d} x\\ &=[y' w + yw - yw']_0^1 + \int_0^1 y \underbrace{(w'' - w')}_{ℒ^* w} \mathrm{d} x \end{align*}

\(y (1) (w (1) - w' (1)) - y (0) (w (0) - w' (0)) = 0\) for all \(y\)

\(ℒ^* w = w'' - w' = 0\) with \(w' (0) = w (0), w' (1) = w (1)\)

This has a non-trivial solution \(w (x) = \mathrm{e}^x\)

So we must consider, for \(w (x) = \mathrm{e}^x\),

\begin{align*} ⟨f, w⟩ &=\int_0^1 (y'' + y') w \,\mathrm{d} x\\ &=[y' w + yw - yw']_0^1 + \int_0^1 y (w'' - w') \,\mathrm{d} x\\ &=y' (1) w (1) - y' (0) w (0)\\ \int_0^1 f (x) \mathrm{e}^x \,\mathrm{d} x &=\alpha \mathrm{e}\\ ⇒\quad \alpha &=\int_0^1 f (x) \mathrm{e}^{x - 1} \,\mathrm{d} x \end{align*}

Note. We could have solved this directly by writing \(v = y'\), so \(v' + v = f (x)\), with \(v (0) = 0\).

Using an integrating factor \(\mathrm{e}^x\), this has solution

\[v (x) = \int_0^x f (ξ)\, \mathrm{e}^{ξ - x} \,\mathrm{d} ξ \tag{1}\]

This gives \(y' (1) = v (1) = \int_0^1 f (ξ)\, \mathrm{e}^{ξ - 1} \,\mathrm{d} ξ\), which recovers the solvability condition.

Integrating (1) and swapping the order of integration gives \(y (x) = C + \int_0^x \left( \int_0^s f (ξ)\, \mathrm{e}^{ξ - s} \,\mathrm{d} ξ \right) \mathrm{d} s = \int_0^x f (ξ) \underbrace{(1 - \mathrm{e}^{ξ-x})}_\text{Green's function}\,\mathrm{d} ξ + C\)
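As a numerical sanity check of the solvability condition, here is a sketch assuming scipy/numpy, with the test forcing \(f≡1\) (so the prediction is \(α=\int_0^1 e^{x-1}\,\mathrm{d}x=1-e^{-1}\)):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Solve y'' + y' = 1 with y(0) = 0, y'(0) = 0 and read off alpha = y'(1).
# (Since v = y' satisfies a first-order IVP, y'(1) does not depend on y(0).)
sol = solve_ivp(lambda x, Y: [Y[1], 1.0 - Y[1]], (0.0, 1.0), [0.0, 0.0],
                rtol=1e-10, atol=1e-12)
print(sol.y[1, -1], 1 - np.exp(-1))  # the two values should agree
```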


Green's function

Consider \(ℒy = P_2 y'' + P_1 y' + P_0 y = f (x)\) on \(a<x <b\), with homogeneous boundary conditions \(y (a) = 0, y (b) = 0\).

Using variation of parameters we found the solution
\(y (x) = \int_a^b g (x, ξ ) f ( ξ ) \mathrm{d} ξ \)
where

\[g (x, ξ ) =\begin{cases}\frac{y_1 ( ξ ) y_2 (x)}{P_2 ( ξ ) W ( ξ )} & a< ξ <x<b,\\ \frac{y_2 ( ξ ) y_1 (x)}{P_2 ( ξ ) W ( ξ )} & a<x< ξ <b. \end{cases}\tag{1}\]

Here \(y_1\) and \(y_2\) are linearly independent solutions of the homogeneous ODE \(ℒy = 0\) satisfying one boundary condition each, i.e. \(y_1 (a) = 0, y_2 (b) = 0\).

The function \(g (x, ξ )\) is called a Green's function. The operation of multiplying by \(g\) and integrating can be thought of as the inverse of the operator \(ℒ\).

Delta function

The delta function \(δ(x)\) has the properties that \(δ(x) = 0\) for all \(x \neq 0\) but \(\int_{- ∞}^{∞} δ(x) \mathrm{d} x = 1\).

This is not strictly a function, but rather a distribution, and is defined by its action on smooth test functions \(ϕ(x)\):

\[\int_{- ∞}^{∞} δ(x - ξ ) ϕ( ξ ) \mathrm{d} ξ = ϕ(x)\]

This is also called the sifting property.

\(δ(x)\) can be thought of as the derivative of the Heaviside function \(H (x) =\begin{cases}0 & x<0\\ 1 & x⩾0\end{cases}\)

Aside: \(δ(x)\) can be thought of as the limit of an approximating sequence \(δ_n (x)\) as \(n→∞\).

Eg. \(δ_n (x) =\begin{cases}0 & | x | > \frac{1}{n}\\ \frac{n}{2} & | x |⩽\frac{1}{n}\end{cases}\) or \(δ_n (x) = \frac{n}{\sqrtπ} \mathrm{e}^{- n^2 x^2}\)

Delta-function construction of the Green's function

If we are to write the solution of \(ℒy = f\) as \(y (x) = \int_a^b g (x, ξ ) f ( ξ ) \mathrm{d} ξ \), then we need \(ℒy = \int_a^b ℒ_x g (x, ξ ) f ( ξ ) \mathrm{d} ξ = f (x)\). So we need

\[ℒ_x g (x, ξ ) = δ(x - ξ ) \tag{*}\]

For \(y (x)\) to satisfy the boundary conditions \(B_1 y = 0\) and \(B_2 y = 0\), we need \(g (x, ξ )\) to satisfy the same conditions (as a function of \(x\)):

\[B_1 g = 0, \quad B_2 g = 0\]

The Green's function can be thought of as the solution for a point force at \(x = ξ \).

To actually solve (*), note that \(δ(x - ξ ) = 0\) except at \(x = ξ \), so we can use the solution of the homogeneous equation
\(ℒ_x g = 0\)
on each interval \(a<x< ξ \) and \( ξ <x<b\).

We must also ensure that \(g\) is continuous at \(x = ξ \) [else \(g_x\) would contain a delta function, and \(g_{x x}\) would have a worse singularity that couldn't balance anything in (*)]. But \(g_x\) will be discontinuous at \(x = ξ \). Integrating (*) over \([ ξ - ε, ξ + ε]\):

\[\underbrace{\int_{ ξ - ε}^{ ξ + ε} P_2 g_{x x} \,\mathrm{d} x}_{\xrightarrow[ε→0]{} P_2 (ξ) [g_x]_{ ξ -}^{ ξ +}} + \underbrace{\int_{ ξ - ε}^{ ξ + ε} P_1 g_x + P_0 g \,\mathrm{d} x}_{\xrightarrow[ε→0]{} 0} = \underbrace{\int_{ ξ - ε}^{ ξ + ε} δ(x - ξ ) \,\mathrm{d} x}_1\]

So we need

\[[g_x]_{ ξ -}^{ ξ +} = \frac{1}{P_2 (ξ)} \quad\text{and}\quad [g]_{ ξ -}^{ ξ +} = 0\]

One can check that (1) satisfies these conditions. But why not find \(g\) by solving them directly?

Example. \(ℒy = - y''\) on \(0<x<1\), \(y (0) = y (1) = 0\)

The Green's function satisfies \(- g_{x x} = δ(x - ξ )\) with \(g (0, ξ ) = 0, g (1, ξ ) = 0\).

So \(g (x, ξ ) =\begin{cases}Ax & x< ξ \\ B (1 - x) & x > ξ \end{cases}\)(\(A, B\) may depend on \( ξ \))

Continuity of \(g\) at \( ξ ⇒A ξ = B (1 - ξ )\) and we need \(- [g_x]_{ ξ -}^{ ξ +} = 1⇒B + A = 1\)

Hence \(B = ξ , A = 1 - ξ \). So \(g (x, ξ ) =\begin{cases}x (1 - ξ ) & x< ξ \\ ξ (1 - x) & x > ξ \end{cases}\)

Then \(ℒy = f\) has solution \(y (x) = \int_0^1 g (x, ξ ) f ( ξ ) \mathrm{d} ξ \)
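A minimal numerical check, assuming numpy, with the test forcing \(f≡1\) (for which the exact solution of \(-y''=1\), \(y(0)=y(1)=0\) is \(y=x(1-x)/2\)):

```python
import numpy as np

def g(x, xi):
    # Green's function found above
    return np.where(x < xi, x * (1 - xi), xi * (1 - x))

xi = np.linspace(0.0, 1.0, 2001)   # quadrature nodes
h = xi[1] - xi[0]
for x in (0.25, 0.5, 0.9):
    integrand = g(x, xi) * 1.0     # f(ξ) = 1
    y = h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))  # trapezium rule
    print(y, x * (1 - x) / 2)      # agree to quadrature accuracy
```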

Example. Find the Green's function for

\[y'' + y = f (x) \quad 0<x<\fracπ2 \quad y (0) = 0 \quad y \left( \fracπ2 \right) = 0\]

\(g (x, ξ )\) satisfies \(g_{x x} + g = δ(x - ξ )\) with \(g (0, ξ ) = 0\) and \(g \left( \fracπ2, ξ \right) = 0\). So \(g (x, ξ ) =\begin{cases}A \sin x & 0<x< ξ \\ B \cos x & ξ <x<\fracπ2\end{cases}\)

Continuity at \( ξ ⇒A \sin ξ = B \cos ξ \)

and we need \([g_x]_{ ξ -}^{ ξ +} = 1⇒- B \sin ξ - A \cos ξ = 1⇒B = - \sin ξ , A = - \cos ξ \)

So \(g (x, ξ ) =\begin{cases}- \cos ξ \sin x & 0<x < ξ <\fracπ2\\ - \sin ξ \cos x & 0< ξ <x< \fracπ2\end{cases}\)

which is the same as what we found using variation of parameters last week.


For the problem \(ℒy=f, B_1 y=0, B_2 y=0\) we find the Green's function \(g (x, ξ)\) such that \(y (x)=\int_a^b g (x, ξ) f (ξ) \mathrm{d} ξ\), by solving \(ℒ_x g (x, ξ)=δ(x-ξ), B_1 g=0, B_2 g=0\)

Remark. If the homogeneous problem \(ℒy=0, B_1 y=B_2 y=0\) has a non-trivial solution, then no Green's function exists—something goes wrong in finding \(g\). (It is possible to construct a modified Green's function in such cases)

Remark. If we have inhomogeneous boundary conditions on \(y\) (\(B_1 y=γ_1, B_2 y=γ_2\)), we still use homogeneous conditions to find \(g (x, ξ)\), and then add on multiples of the solutions to the homogeneous equation (\(ℒy=0\))—a complementary function—to satisfy the inhomogeneous conditions on \(y\).

Remark. These ideas generalise to higher order equations. If \(ℒy=\sum_{k=0}^n P_k (x) y^{(k)} (x)\) with \(P_n≠0\) with \(n\) boundary conditions [combinations of \(y (a),…,y^{(n-1)} (a), y (b),…,y^{(n-1)} (b)\)] we define the Green's function to satisfy \(ℒ_x g (x, ξ)=δ(x-ξ)\) with the same homogeneous boundary conditions as \(y\). In this case \(g,…,g^{(n-2)}\) will be continuous and \([g^{(n-1)}]_{ξ-}^{ξ+}=\frac{1}{P_n (ξ)}\)

Remark. We can also define an adjoint Green's function \(G (x, ξ)\), which satisfies \(ℒ^*_x G (x, ξ)=δ(x-ξ), B_1^*G=0, B_2^*(G)=0\).

It turns out that \(G (x, ξ)=g (ξ, x)\). To see this, consider

\[\int_a^b G (x, ξ) f (x) \mathrm{d} x=⟨f, G⟩=⟨ℒy, G⟩=⟨y, ℒ^*G⟩=\int_a^b y (x) δ(x-ξ) \mathrm{d} x=y (ξ)\]

Relabelling \(x\) and \(ξ\): \(y (x)=\int_a^b G (ξ, x) f (ξ) \,\mathrm{d} ξ\)

Comparing with earlier expression \(y (x)=\int_a^b g (x, ξ) f (ξ) \mathrm{d} ξ\), we get \(G (ξ, x)=g (x, ξ)\)

This shows that if the problem is fully self-adjoint, then \(g (x, ξ)\) is symmetric in \(x, ξ\).

4 Eigenfunction expansions

Another method for solving linear differential equations \(ℒy=f\) is to seek the solution as a superposition of basis functions \(y (x)=\sum_{n=0}^∞c_n y_n (x)\), where \(\{ y_n (x) \}\) is a set of linearly independent functions that span the space of ‘reasonable’ functions amongst which we seek the solution.

It turns out to be convenient to take the set to be the eigenfunctions of \(ℒ\).

Eigenvalues and eigenfunctions for a differential equation

Given an operator \(ℒ\) and boundary operators \(B_1, B_2\), the problem \(ℒy=λy\) together with homogeneous boundary conditions \(B_1 y=B_2 y=0\), will have non-trivial solutions for certain values of \(λ\). These values are the eigenvalues \(λ_n\), the corresponding solutions are the eigenfunctions \(y_n (x)\).

For the problems we consider, the eigenvalues are real and form a discrete countable set, with each pair \((y_n (x), λ_n)\) satisfying \(ℒy_n=λ_n y_n\) and \(B_1 y_n=B_2 y_n=0\).

Example. If \(ℒy=y'' + y\), \(0<x<1\), \(y (0)=0=y (1)\), the eigenvalues and eigenfunctions satisfy \(y'' + y=λy\), which has solutions \(y_n (x)=\sin nπx\) with \(λ_n=1-n^2π^2\). (Sometimes we define the eigenvalues/eigenfunctions to satisfy \(ℒy=λr (x) y\), where \(r (x)\) is a weighting function.)

Adjoint eigenvalues and eigenfunctions

The adjoint operator \(ℒ^*\) has the same eigenvalues, and eigenfunctions \(w_n\) satisfying \(ℒ^*w_n (x)=λ_n w_n (x)\) with \(B_1^*w_n=B_2^*w_n=0\).

Proposition. Eigenfunctions and adjoint eigenfunctions corresponding to different eigenvalues are orthogonal, i.e. \(⟨y_j (x), w_k (x)⟩=0\) if \(λ_j≠λ_k\).

Proof. \(⟨ℒy_j, w_k⟩=⟨y_j, ℒ^*w_k⟩⇒⟨λ_j y_j, w_k⟩=⟨y_j, λ_k w_k⟩⇒(λ_j-λ_k) ⟨y_j, w_k⟩=0⇒\text{if } λ_j≠λ_k \text{ then } ⟨y_j, w_k⟩=0\). \(\Box\)


For a problem \(ℒy=f\) on \(a<x<b\) with \(B_1 y=0,B_2 y=0\), we define eigenvalues \(λ_k\) and eigenfunctions \(y_k (x)\) satisfying \(ℒy_k=λ_k y_k,B_1 y_k=B_2 y_k=0\).

We also define adjoint eigenfunctions \(w_k (x)\) which satisfy \(ℒ^*w_k=λ_k w_k,B_1^*w_k=B_2^*w_k=0\).

The eigenfunctions satisfy the orthogonality condition \(⟨y_j,w_k⟩=0\) if \(j \neq k\)

We seek a solution to \(ℒy=f\), by writing \(y (x)=\sum_{n=1}^∞c_n y_n (x)\).

Taking inner product with \(w_k\) shows

\[⟨y,w_k⟩=\sum_{n=1}^∞c_n ⟨y_n,w_k⟩=c_k ⟨y_k,w_k⟩\]

Also taking inner product of \(ℒy=f\) with \(w_k\)

\[⟨f,w_k⟩=⟨ℒy,w_k⟩=⟨y,ℒ^*w_k⟩=⟨y,λ_k w_k⟩=λ_k ⟨y,w_k⟩=λ_k c_k ⟨y_k,w_k⟩⇒\boxed{c_k=\frac{⟨f,w_k⟩}{λ_k ⟨y_k,w_k⟩}}\]

Note. See later for what to do if the problem has inhomogeneous boundary conditions.

Note. The formula for \(c_k\) doesn't work if \(λ_k=0\). In that case we must have \(⟨f,w_k⟩=0\) to avoid inconsistency. If this does hold, then the corresponding \(c_k\) is arbitrary. This is a manifestation of the Fredholm alternative. (an eigenfunction corresponding to \(λ_k=0\) is a solution of the homogeneous problem)

Sturm-Liouville problems

A particularly nice form of operator is \(ℒy=(- py')'+qy\), for given functions \(p (x)>0\) and \(q (x)\).

Such an operator is self-adjoint. (See problem sheet 1.)

If the boundary conditions take the separated form

\begin{align*}B_1 y &=α_1 y (a)+α_2 y' (a)\\B_2 y &=β_1 y (b)+β_2 y' (b) \end{align*}

then the whole problem is self-adjoint.

In fact, any 2nd-order operator \(ℒ\) can be converted to this form by multiplying by an integrating factor: if \(ℒy=f (x)\), we can write this as \(\hat{ℒ} y=r (x) ℒy=r (x) f (x)\), where \(\hat{ℒ}\) is of Sturm-Liouville form. The function \(r (x)\) is called a ‘weighting function’.

For a problem in this form, we should modify the definition of the eigenvalues and eigenfunctions to \(\hat{ℒ} y_k=λ_k r (x) y_k\). As a consequence, the orthogonality condition becomes \(⟨y_j,ry_k⟩=0\), i.e. \(\int_a^b y_j (x) r (x) y_k (x) \,\mathrm{d} x=0\)

(sometimes a new inner product \(⟨y_j,y_k⟩_r\) is defined to mean this weighted integral)

If we seek the solution to \(\hat{ℒ} y=rf\), as \(y (x)=\sum c_n y_n (x)\), then taking inner product with \(y_k\)

\[⟨rf,y_k⟩=⟨\hatℒy,y_k⟩=⟨y,\hatℒy_k⟩=⟨y,λ_k ry_k⟩=λ_k ⟨y,ry_k⟩=λ_k c_k ⟨y_k,ry_k⟩⇒\boxed{c_k=\frac{⟨f,ry_k⟩}{λ_k ⟨y_k,ry_k⟩}}\]

To find the solution of a non-self-adjoint problem \(ℒy=f (x)\) either

  1. find adjoint eigenfunctions and use \(c_k=\frac{⟨f,w_k⟩}{λ_k ⟨y_k,w_k⟩}\)
  2. convert to Sturm-Liouville form and use \(c_k=\frac{⟨f,ry_k⟩}{λ_k ⟨y_k,ry_k⟩}\)

Regular Sturm-Liouville problem

The problem \(ℒy=(- py')'+qy=λ ry\) is regular if \(p,r>0\) and \(q≥0\) for \(a≤x≤b\). In this case it can be shown that the eigenvalues are real and form a discrete countable set, which can be ordered as \(λ_1<λ_2<⋯<λ_k<⋯\) with \(\lim_{k→∞} λ_k=∞\).

If \(α_1 α_2≤0,β_1 β_2≥0\) in the separated boundary conditions then all the eigenvalues are positive, which we can see by considering the Rayleigh quotient.

\[⟨y_k,\hatℒy_k⟩=⟨y_k,λ_k ry_k⟩=λ_k ⟨y_k,ry_k⟩\]

\[\int_a^b y_k ((- py_k')'+qy_k) \,\mathrm{d} x=[- py_k' y_k]_a^b+\int_a^b py_k' y_k'+qy_k y_k \,\mathrm{d} x=[- py_k' y_k]_a^b+\int_a^b p (y_k')^2+q (y_k)^2 \,\mathrm{d} x\]

\[⇒\; λ_k=\frac{[- py_k' y_k]_a^b+\int_a^b p (y_k')^2+q (y_k)^2 \,\mathrm{d} x}{\int_a^b r (y_k)^2 \,\mathrm{d} x}>0\]

Singular Sturm-Liouville problems

Consider the Sturm-Liouville problem \(ℒy=(- py')'+qy\) where \(p(x), q(x)\) are functions on \(a⩽x⩽b\). The problem is singular if \(p(x)>0\) for \(x∈(a, b)\) but \(p(a)\) or \(p(b)\) (or both) is zero, i.e. the coefficient of the highest derivative vanishes there.

In this case there is no need for a boundary condition where \(p(x)=0\), except to say that \(y(x)\) is bounded.

Singular points where \(p(x)=0\) often define a natural domain for an operator, e.g. Legendre's equation \(ℒy=(1-x^2) y''-2 xy'+l(l+1) y=0\) has \(p(x)=x^2-1\), which vanishes at \(x=±1\) [it arises when solving Laplace's equation in spherical coordinates, where \(x=\cosθ⇒x∈[-1, 1]\), and the singular points \(±1\) correspond to the singular points of the coordinate system, the north and south poles]. This has natural domain \([-1, 1]\).

Eigenfunction expansions and inhomogeneous boundary conditions

Consider solving \(ℒy=f\) on \(a<x<b\), with \(B_1 y=γ_1, B_2 y=γ_2\)

  • still have eigenfunctions \(y_k(x)\) satisfying \(ℒy_k=λ_k y_k\) with \(B_1 y_k=B_2 y_k=0\)

    and \(w_k\) satisfy \(ℒ^*w_k=λ_k w_k, B_1^*w_k=B_2^*w_k=0\)

  • If there is no zero eigenvalue (case 1 of Fredholm alternative), we can

    (i) use eigenfunctions to solve the homogeneous-BC problem and add a complementary function to satisfy the inhomogeneous BCs

    (ii) modify the eigenfunction coefficients to incorporate the inhomogeneous BCs directly.

    (iii) find some function \(v(x)\) that satisfies \(B_1 v=γ_1, B_2 v=γ_2\). Then write \(y(x)=v(x)+\tilde{y}(x)\), so \(\tilde{y}(x)\) satisfies \(ℒ \tilde{y} =ℒy -ℒv=f(x) -ℒv(x)\) with \(B_1 \tilde{y}=B_2 \tilde{y}=0\).

  • If there is a zero eigenvalue (case 2 of the Fredholm alternative), we should use (ii) or (iii)

The modification of the eigenfunction expansion is as follows: if \(y(x)=\sum_{n=1}^∞c_n y_n(x)\), we need

\(⟨y, w_k⟩=c_k ⟨y_k, w_k⟩\) and \(⟨f, w_k⟩=⟨ℒy, w_k⟩=⟨y, ℒ^*w_k⟩+\text{boundary terms}=λ_k ⟨y, w_k⟩+\text{boundary terms}\)

\[\boxed{c_k=\frac{⟨f, w_k⟩-\text{boundary terms}}{λ_k ⟨y_k, w_k⟩}}\]

Example. \(y''=f(x)\) on \(0<x<1\) with \(y(0)=α, y(1)=β\).

Eigenvalues/eigenfunction satisfy \(y''=λy\) with \(y(0)=0, y(1)=0\)

\[⇒\quad \boxed{λ_n=- n^2π^2, \quad y_n(x)=\sin(nπ x), \quad n∈ℤ^+}\]

This problem is self-adjoint, so \(w_n(x)=y_n(x)\)

Using (i) or (iii), let \(v(x)=α +(β-α) x\), and \(y(x)=v(x)+\tilde{y}(x)\)

So \(\tilde{y}\) satisfies \(\tilde{y}''=f(x)\) with \(\tilde{y}(0)=\tilde{y}(1)=0\)

This has solution \(\tilde{y}(x)=\sum_{k=1}^∞c_k y_k(x)\) with \(c_k=\frac{⟨f, y_k⟩}{λ_k ⟨y_k, y_k⟩}\)

\(⇒y(x)=α +(β-α) x+\sum_{k=1}^∞\frac{⟨f, y_k⟩}{λ_k ⟨y_k, y_k⟩} \sin kπ x\)

Using (ii), \(y(x)=\sum_{k=1}^∞c_k y_k(x)\) and we find \(c_k\) from

\[⟨f, y_k⟩=⟨y'', y_k⟩=\int_0^1 y'' y_k \mathrm{d} x=\underbrace{[y' y_k-yy_k']_0^1}_{-βkπ(-1)^k+α kπ}+\underbrace{\int_0^1 y \underbrace{y''_k}_{λ_k y_k}\mathrm{d} x}_{λ_k c_k ⟨y_k, y_k⟩}\]

\(⇒\boxed{c_k=\frac{1}{λ_k ⟨y_k, y_k⟩}(⟨f, y_k⟩+βkπ(-1)^k-α kπ)}\) This gives the same answer [the extra terms are the Fourier sine series of \(α +(β-α) x\)].
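A numerical sketch of this, assuming numpy (the values \(f≡1\), \(α=1\), \(β=2\) are test data; the exact solution is then \(y=x^2/2+(β-α-\frac12)x+α\)):

```python
import numpy as np

alpha, beta = 1.0, 2.0
x = np.linspace(0.1, 0.9, 5)                   # interior points only
exact = x**2 / 2 + (beta - alpha - 0.5) * x + alpha

y = np.zeros_like(x)
for k in range(1, 5001):
    lam = -(k * np.pi)**2
    f_yk = (1 - (-1)**k) / (k * np.pi)         # <f, y_k> for f = 1
    c = (f_yk + beta*k*np.pi*(-1)**k - alpha*k*np.pi) / (lam * 0.5)  # <y_k,y_k> = 1/2
    y += c * np.sin(k * np.pi * x)
# Convergence is slow (the boundary terms decay like 1/k), but the error shrinks
# as more terms are added and vanishes in the interior as N -> infinity.
print(np.max(np.abs(y - exact)))
```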

Singular points of ODEs

Consider the equation \(P_2(x) y''(x)+P_1(x) y'(x)+P_0(x) y(x)=0\)

  • A point \(x_0\) is an ordinary point if \(\frac{P_1(x)}{P_2(x)}\) and \(\frac{P_0(x)}{P_2(x)}\) are analytic in some neighborhood of \(x_0\)

  • A point \(x_0\) is a singular point if \(\frac{P_1(x)}{P_2(x)}\) or \(\frac{P_0(x)}{P_2(x)}\) is not analytic there.

  • It is a regular singular point if \((x-x_0)\frac{P_1(x)}{P_2(x)},(x-x_0)^2 \frac{P_0(x)}{P_2(x)}\) are analytic near \(x_0\) i.e. the singularities are not too bad.

To illustrate these, consider \(x^m y'(x)-λy(x)=0\)

If \(m=0\), \(x=0\) is an ordinary point: \(y(x)=Ae^{λx}\), which is analytic near \(x=0\)

If \(m=1\), \(x=0\) is a regular singular point (\(\fracλ{x}\) is not analytic but \(x⋅\fracλ{x}\) is analytic): \(y(x)=Ax^λ\)

Depending on \(λ\), this may be well-behaved (\(λ\) is non-negative integer), or it may be singular (a pole or branch point)

If \(m=2\), \(x=0\) is an irregular singular point (\(x⋅\fracλ{x^2}\) is not analytic); separating variables gives \(y(x)=Ae^{- \fracλ{x}}\), which has an essential singularity at \(x=0\).


Singular points of ODEs (cont.)

The point \(x=∞\) is classified by writing \(x=\frac{1}{t}\) and \(y(x)=u(t)\), converting to an ODE for \(u\), and classifying the behaviour of \(x=∞\) according to the behaviour at \(t=0\).

Example (Cauchy-Euler equation). \(x^2 y''+axy'+by=0\)

\(x=0\) is a regular singular point, since \(\frac{ax}{x^2}, \frac{b}{x^2}\) are not analytic, but \(x⋅\frac{ax}{x^2}\) and \(x^2⋅\frac{b}{x^2}\) are analytic.

To classify \(x=∞\), write \(x=\frac{1}{t}, y(x)=u(t)\), so \(y'(x)=-\frac{1}{x^2} \dot{u}\), \(y''(x)=\frac{2}{x^3} \dot{u}+\frac{1}{x^4} \ddot{u}\)

Plugging in, \(t^2 \ddot{u}+2 t \dot{u}-at \dot{u}+bu=0\)

\(\frac{2 t-at}{t^2}\) is not analytic at \(t=0\), neither is \(\frac{b}{t^2}\), so \(t=0\) is singular, but \(t⋅\frac{2 t-at}{t^2}\) and \(t^2⋅\frac{b}{t^2}\) are analytic. So \(t=0\), and hence \(x=∞\), is a regular singular point.

5 Series solutions

We seek solutions to linear homogeneous equations in the form of a power series around a point \(x_0\).

  • If \(x_0\) is an ordinary point, the solution will be analytic, and we write \(y(x)=\sum_{k=0}^∞a_k(x-x_0)^k\)

    We substitute into the ODE and compare coefficients of each power of \(x-x_0\) to find the constants \(a_k\)

    Example. \((1+x^2) y'+2 xy=0\)

    Find a series solution, around \(x=0\).

    Substitute in \(y(x)=\sum_{k=0}^∞a_k x^k\), \(y'(x)=\sum_{k=1}^∞ka_k x^{k-1}\):

    \[\sum_{k=1}^∞ka_k x^{k-1}+\sum_{k=0}^∞ka_k x^{k+1}+\sum_{k=0}^∞2 a_k x^{k+1}=0\]

    In the first term, decrease \(k\) by 2:

    \[a_1+\sum_{k=0}^∞(k+2) a_{k+2} x^{k+1}+\sum_{k=0}^∞ka_k x^{k+1}+\sum_{k=0}^∞2 a_k x^{k+1}=0\]

    Set coefficient of every power of \(x\) to be 0:

    \[a_1=0, \quad (k+2) a_{k+2}=-(k+2) a_k \;⇒\; a_{k+2}=-a_k \text{ for } k \geqslant 0\]

    Hence \(a_k=0\) for odd \(k\), and for even \(k\) we have \(a_{2 n}=(-1)^n a_0\)

    The solution is \(y(x)=\sum_{n=0}^∞(-1)^n a_0 x^{2 n}=\frac{a_0}{1+x^2}\), which indeed solves the original equation [note the equation is exact: \(((1+x^2) y)'=0\)].

    The series converges for \({|x|}<1\) but the closed form solution \(\frac{a_0}{1+x^2}\) exists for all \(x\).

  • If \(x_0\) is a regular singular point, we try a solution of the form \(y(x)=(x-x_0)^α\sum_{k=0}^∞a_k(x-x_0)^k\), where \(α\) is to be determined (and is not always an integer) and \(a_0 \neq 0\)

    This method is known as the Frobenius method

    Example (Bessel equation). \(4 x^2 y''+4 xy'+(4 x^2-1) y=0\)

    \(x=0\) is a regular singular point.

    We write \(y(x)=x^α\sum_{k=0}^∞a_k x^k\) with \(a_0 \neq 0\)

    Substitute into the equation

    \[\sum_{k=0}^∞4(α+k)(α+k-1) a_k x^{α+k}+\sum_{k=0}^∞4(α+k) a_k x^{α+k} {\color{red}{+ \sum_{k=0}^∞4 a_k x^{α+k+2}}}-\sum_{k=0}^∞a_k x^{α+k}=0\]

    In the red term, increase \(k\) by 2:

    \[\underbrace{[4 α(α-1)+4 α-1]}_{F(α)} a_0 x^{α}+[4(α+1) α+4(α+1)-1] a_1 x^{α+1}+\sum_{k=2}^∞\{ [4(α+k)(α+k-1)+4(α+k)-1] a_k+4 a_{k-2} \} x^{α+k}=0\]

    The coefficient of the lowest power of \(x\) determines \(α\). It is called the indicial equation. \(F(α)=0\). In this case \(F(α)=4 α^2-1 \Rightarrow α=\pm \frac{1}{2}\)

    • For \(α=\frac{1}{2}\), the other terms become

      \[8 a_1 x^{\frac{3}{2}}+\sum_{k=2}^∞\left\{ \left[ 4 \left( k+\frac{1}{2} \right)^2-1 \right] a_k+4 a_{k-2} \right\} x^{\frac{1}{2}+k}=0\]

      So we need \(a_1=0\) and \(4 k(k+1) a_k=-4 a_{k-2} \Rightarrow a_k=-\frac{1}{k(k+1)} a_{k-2}\) for \(k \geqslant 2\)

      This recurrence relation shows that \(a_k=0\) for odd \(k\)

      For \(k=2 n\), \(a_{2 n}=\frac{(-1)^n}{(2 n+1) !} a_0\)

      So the solution is \(y_1 (x) = a_0 x^{\frac{1}{2}} \sum_{n=0}^∞\frac{(-1)^n}{(2 n+1) !} x^{2 n} = a_0 \frac{\sin x}{x^{\frac{1}{2}}}\)

    • For \(α=-\frac{1}{2}\) we try the same procedure. The other terms become

      \[0 a_1 x^{\frac{1}{2}}+\sum_{k=2}^∞\left\{ \left[ 4 \left( k-\frac{1}{2} \right)^2-1 \right] a_k+4 a_{k-2} \right\} x^{-\frac{1}{2}+k}=0\]

      So \(a_1\) is undetermined, and \(4 k(k-1) a_k=-4 a_{k-2}\)

      So \(a_k=-\frac{1}{k(k-1)} a_{k-2}\) for \(k \geqslant 2\)

      \[a_{2 n+1}=\frac{(-1)^n}{(2 n+1) !} a_1, a_{2 n}=\frac{(-1)^n}{(2 n) !} a_0\]

      So

      \[y(x)=a_0 x^{-\frac{1}{2}} \sum_{n=0}^∞\frac{(-1)^n}{(2 n) !} x^{2 n}+a_1 x^{-\frac{1}{2}} \sum_{n=0}^∞\frac{(-1)^n}{(2 n+1) !} x^{2 n+1}=a_0 \frac{\cos x}{x^{\frac{1}{2}}}+a_1 \frac{\sin x}{x^{\frac{1}{2}}}\]
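Both closed forms can be checked symbolically; a quick sketch assuming sympy:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
for y in (sp.cos(x) / sp.sqrt(x), sp.sin(x) / sp.sqrt(x)):
    residual = 4*x**2*sp.diff(y, x, 2) + 4*x*sp.diff(y, x) + (4*x**2 - 1)*y
    print(sp.simplify(residual))   # 0 for both solutions
```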

Frobenius method (general case)

Consider \(y''+\frac{p (x)}{x-x_0} y'+\frac{q (x)}{(x-x_0)^2} y=0\)

For a regular singular point \(p (x)\) and \(q (x)\) are analytic in a neighborhood of \(x_0\), so we can write \(p (x)=\sum_{j=0}^∞p_j (x-x_0)^j, q (x)=\sum_{j=0}^∞q_j (x-x_0)^j\)

Seek solutions as \(y (x)=\sum_{k=0}^∞a_k (x-x_0)^{α+k}\)

Inserting into the differential equation and sorting into powers of \((x-x_0)\):

\[\underbrace{[α (α-1)+α p_0+q_0]}_{F (α)} a_0 (x-x_0)^{α-2}+\sum_{k=1}^∞ ☺_k\, (x-x_0)^{α+k-2}=0\]

The lowest power of \((x-x_0)\) determines the indicial equation \(F (α)=0\). This has roots \(α_1, α_2\), which we order so that \(\operatorname{Re} (α_1) \geqslant \operatorname{Re} (α_2)\).

  • One solution \(y_1 (x)\) will always come from taking \(α=α_1\). Looking at the coefficient of \((x-x_0)^{α+k-2}\), we need

    \[☺_k=F (α+k)\, a_k+\sum_{j=1}^k [(α+k-j) p_j+q_j]\, a_{k-j}=0\]

    This gives a recurrence relation to determine \(a_k\) in terms of the previous \(a_i\).

    \[a_k =-\frac{1}{F (α+k)} \sum_{j=1}^k [(α+k-j) p_j+q_j] a_{k-j}\]

    So we have solution \(y_1 (x)=\sum_{k=0}^∞a_k (x-x_0)^{α_1+k}\)

  • If \(α_1-α_2 \not\in ℤ\), we can do the same thing with \(α=α_2\) to give \(y_2 (x)=\sum_{k=0}^∞b_k (x-x_0)^{α_2+k}\) where

    \[b_k =-\frac{1}{F (α_2+k)} \sum_{j=1}^k [(α_2+k-j) p_j+q_j]\, b_{k-j}\]
  • If \(α_1=α_2\), there is only one solution of the form we guessed. A second solution has form

    \[y_2 (x)=y_1 (x) \log (x-x_0)+\sum_{k=0}^∞b_k (x-x_0)^{α_2+k}\tag{*}\]
  • If \(α_1-α_2=N∈ℤ\), then we might be able to find a second solution in the original form, or we might need a second solution of the form (*)

    To see this, note that with \(α=α_2\), the coefficient of \((x-x_0)^{α_2+N-2}\) requires

    \[☺_N=\underbrace{F (α_2+N)}_{F (α_1)=0} a_N+\underbrace{\sum_{j=1}^N [(α_2+N-j) p_j+q_j]\, a_{N-j}}_{☹}=0\]

    If \(☹≠0\), we have an inconsistency, so we must have the second type of solution in (*).

    If \(☹=0\), there is no inconsistency, and \(a_N\) is undetermined (we can take \(a_N=0\) wlog)

Example. \(x (x-1) y''+3 xy'+y=0\)

\(x=0\) is a regular singular point, so seek a solution as \(y (x)=\sum_{k=0}^∞a_k x^{α+k}, a_0≠0\)

\[\sum_{k=0}^∞(α+k) (α+k-1) a_k x^{α+k}-\sum_{k=0}^∞(α+k) (α+k-1) a_k x^{α+k-1}+\sum_{k=0}^∞3 (α+k) a_k x^{α+k}+\sum_{k=0}^∞a_k x^{α+k}=0\]

Pull out the first term \(- α (α-1) a_0 x^{α-1}\)

\[\underbrace{- α (α-1)}_{\substack{\text{indicial equation}\\ F (α)=α (α-1)=0\\⇒α=0, 1}} a_0 x^{α-1}+\sum_{k=0}^∞\underbrace{\{ [(α+k) (α+k-1)+3 (α+k)+1] a_k-(α+k+1) (α+k) a_{k+1} \}}_{(α+k+1)^2 a_k-(α+k+1) (α+k) a_{k+1}} x^{α+k}\]

With \(α=1\), we have recurrence relation

\[(α+k) a_{k+1}=(α+k+1) a_k \quad \text{for } k \geqslant 0⇒a_{k+1}=\frac{k+2}{k+1} a_k⇒a_k=(k+1) a_0\]

So \(y_1 (x)=a_0 x \sum_{k=0}^∞(k+1) x^k=a_0 \frac{x}{(1-x)^2}\)

With \(α=0\), the recurrence relation \(ka_{k+1}=(k+1) a_k\) for \(k \geqslant 0\)

This immediately goes wrong for \(k=0\). So we don't have a solution for \(α=0\). Instead the solution must be of the form \(y_2 (x)=y_1 (x) \log x+\sum_{k=0}^∞b_k x^k\)

Substitute into the equation, compare powers of \(x\) and set all coefficients to be 0, giving \(b_k\)

Example. \(\sin^2 xy''-\sin x \cos xy'+y=0\)

Find the form of solutions near \(x=0\), which is a regular singular point.

Try \(y (x)=\sum_{k=0}^∞a_k x^{α+k}, a_0≠0\)

Noting that \(\sin^2 x=x^2+⋯, \sin x \cos x=x+⋯\)

So substituting into the equation, we find

\[α (α-1) a_0 x^α-α a_0 x^α+a_0 x^α+\text{higher powers of } x=0\]

\(⇒\)indicial equation \(α (α-1)-α+1=0\) ie. \((α-1)^2=0⇒α=1\) repeated root

So one solution will have form \(y_1 (x)=x \sum_{k=0}^∞a_k x^k\)

and the other will have form \(y_2 (x)=y_1 (x) \log x+x \sum_{k=0}^∞b_k x^k\)

Near \(x=0\) these behave as \(y_1 (x)≍a_0 x, y_2 (x)≍a_0 x \log x\)


1 Special Functions

1.1 Bessel's equation

Arises from separating variables in Laplace's equation in cylindrical coordinates

\[x^2 y''+xy'+(x^2-n^2) y=0\]

Bessel's equation of order \(n\) (\(n\) often an integer).

This has linearly independent solutions

\[J_n (x), Y_n (x)\]

We can seek power series expansions of these around \(x=0\) which is a regular singular point.

Writing \(y (x)=\sum_{k=0}^∞a_k x^{k+α}\) and substituting into the equation gives the indicial equation \(F (α)=α^2-n^2=0⇒α=±n\)

For \(α=n\), we find \(a_1=0\) and \(a_k=-\frac{1}{k (k+2 n)} a_{k-2}\) for \(k≥2\) (problem sheet), and hence

\[y_1 (x)=J_n (x)=\left(\frac{x}{2}\right)^n \sum_{k=0}^∞\frac{(-1)^k}{k! (k+n) !} \left(\frac{x}{2}\right)^{2 k}\]

These are Bessel functions of the first kind.
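The truncated series can be compared directly with a library implementation; a sketch assuming scipy is available:

```python
import numpy as np
from math import factorial
from scipy.special import jv

def J_series(n, x, K=30):
    # Partial sum of the series for J_n found above.
    return sum((-1)**k / (factorial(k) * factorial(k + n)) * (x/2)**(2*k + n)
               for k in range(K))

x = 3.7   # arbitrary test point
for n in range(4):
    print(n, J_series(n, x), jv(n, x))   # agree to machine precision
```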

For \(α=-n\), we find \(a_1=0\), and \(k (k-2 n) a_k=-a_{k-2}\) for \(k≥2\). This results in an inconsistency for \(k=2 n\). So instead the second solution has form

\[y_2 (x)=y_1 (x) \log x+\sum_{k=0}^∞b_k x^{-n+k}\]

With a certain choice of normalisation, this gives Bessel functions of the second kind

\[Y_n (x)=\frac2π\log \left(\frac{x}{2}\right) J_n (x)-\frac1π\left(\frac{2}{x}\right)^n \sum_{k=0}^{n-1} \frac{(n-k-1) !}{k!} \left(\frac{x^2}{4}\right)^k-\frac1π\left(\frac{x}{2}\right)^n \sum_{k=0}^∞\frac{[ψ(k+1)+ψ(n+k+1)]}{k! (n+k) !} \left(-\frac{x^2}{4}\right)^k\]

\(J_n (x)\) and \(Y_n (x)\) are oscillatory functions, which have an infinite discrete set of zeros, denoted \(j_{n, m}\) and \(y_{n, m}\)

As \(x \rightarrow 0\), \(J_0 (x) \rightarrow 1, J_n (x) \rightarrow 0, Y_n (x) \rightarrow \infty\), for \(n≥1\)

The Bessel functions satisfy interesting recurrence relations

\[J_{n+1} (x)=\frac{2 n}{x} J_n (x)-J_{n-1} (x)\]

and

\[J_{n+1} (x)=-2 J_n' (x)+J_{n-1} (x)\]

1.2 Application

Consider vibrations of a circular drum skin \(w (r, θ, t)\)

Assume the drum has radius \(a\)

The wave equation is \(\frac{1}{c^2} \frac{∂^2 w}{∂ t^2}=\nabla^2 w\), where \(c=\sqrt{\frac{T}{\rho}}\) is the wave speed (\(T\) is the tension). We have \(w=0\) at \(r=a\). We seek ‘normal modes’, which take the form \(w (r, θ, t)=u (r, θ) \cos (ω t)\); the \(ω\) are the frequencies that we hope to determine. Writing \(λ=\frac{ω^2}{c^2}\), \(u (r, θ)\) satisfies

\[\nabla^2 u=\frac{1}{r} \frac{∂}{∂ r} \left(r \frac{∂ u}{∂ r}\right)+\frac{1}{r^2} \frac{∂^2 u}{∂ θ^2}=-λ u, 0<r<a\]

with \(u=0\) at \(r=a\). Seeking a separable solution, we find [Fourier series]

\[u (r, θ)=U_0 (r)+\sum_{n=1}^∞\left( U_n (r) \cos nθ+V_n (r) \sin nθ \right)\]

where \(U_n (r), V_n (r)\) satisfy

\[\tag{*}\frac{1}{r} \frac{\mathrm{d}}{\mathrm{d} r} \left(r \frac{\mathrm{d} U_n}{\mathrm{d} r}\right)+\left(λ-\frac{n^2}{r^2}\right) U_n=0, 0<r<a, U_n (a)=0\]

If we write \(x=λ^{1 / 2} r\), and \(U_n (r)=y (x)\), then

\[x^2 y''+xy'+(x^2-n^2) y=0, 0<x<λ^{1 / 2} a, y (λ^{1 / 2} a)=0\]

i.e. Bessel's equation of order \(n\), with solution

\[y (x)=C_1 J_n (x)+C_2 Y_n (x)\]

\(y\) bounded\(⇒C_2=0\)

\(y (λ^{1 / 2} a)=0⇒J_n (λ^{1 / 2} a)=0⇒λ=\frac{j_{n, m}^2}{a^2}\)

So we have found eigenvalues and eigenfunctions for (*)

\[λ=\frac{j_{n, m}^2}{a^2}, U_n (r)=J_n (λ^{1 / 2} r)=J_n \left(\frac{j_{n, m}}{a} r\right)\]

This corresponds to the natural frequencies of the drum skin

\[ω=\frac{cj_{n, m}}{a}\]

The lowest frequency is \(\frac{cj_{0, 1}}{a}\).
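The zeros \(j_{n,m}\) are available in scipy, so the first few dimensionless frequencies \(ωa/c=j_{n,m}\) can be listed directly (a sketch, assuming scipy):

```python
from scipy.special import jn_zeros

for n in range(3):
    print(n, jn_zeros(n, 3))   # j_{n,1}, j_{n,2}, j_{n,3}
# The fundamental frequency is c*j_{0,1}/a, with j_{0,1} ≈ 2.405.
```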

Hitting a drum will in general excite all frequencies and the vibrations will be a superposition of the normal modes, ie. an eigenfunction expansion

Notice that (*) can be written in Sturm-Liouville form by multiplying by \(r\) and rearranging:

\[-\frac{\mathrm{d}}{\mathrm{d} r} \left(r \frac{\mathrm{d} U_n}{\mathrm{d} r}\right)+\frac{n^2}r U_n=λ rU_n\]

The presence of the \(r\) on the RHS indicates that the eigenfunctions \(J_n \left(\frac{j_{n, m} r}{a}\right)\) satisfy orthogonality conditions

\[\int_0^a J_n \left(\frac{j_{n, m} r}{a}\right) J_n \left(\frac{j_{n, l} r}{a}\right) r \mathrm{d} r=\begin{cases} 0 & \text{for } m \neq l\\ \frac{a^2}{2} J_n' (j_{n, m})^2 & \text{for } m=l \end{cases}\]

Legendre equation

Suppose we solve Laplace's equation \(\nabla^2 u=0\) in spherical polar coordinates

\[\nabla^2 u=\frac{1}{r} \frac{∂^2}{∂r^2} \left(ru\right)+\frac{1}{r^2 \sinθ} \frac{∂}{∂θ} \left(\sinθ \frac{∂u}{∂θ}\right)+\frac{1}{r^2 \sin^2θ} \frac{∂^2 u}{∂ϕ^2}=0\]

We can seek separable solution \(u (r,θ, ϕ)=R (r) \Theta (θ) \Phi (ϕ) \Rightarrow\)

\[\frac{(r^2 R')'}{R}=-\frac{(\sinθ\, \Theta')'}{\sinθ\, \Theta}-\frac{1}{\sin^2θ} \frac{\Phi''}{\Phi}=λ \text{, constant}\]

(since the LHS is independent of \(θ, ϕ\) and the RHS is independent of \(r\))

So \(r^2 R''+2 rR'-λ R=0\), and

\[-\frac{\Phi''}{\Phi}=λ \sin^2θ+\frac{\sinθ}{\Theta}\left(\sinθ\, \Theta'\right)'=\mu \text{, constant}\]

Since we need \(\Phi (ϕ)\) to be \(2 \pi\)-periodic, we in fact need \(\mu=m^2\) for a non-negative integer \(m\). [\(\Phi (ϕ) =A\cos mϕ+B \sin m ϕ\)]

For \(\Theta (θ)\) we need

\[\frac{1}{\sinθ} \frac{\mathrm{d}}{\mathrm{d}θ} \left(\sinθ \frac{\mathrm{d} \Theta}{\mathrm{d}θ}\right)+\left(λ-\frac{m^2}{\sin^2θ}\right) \Theta=0 \quad 0<θ<\pi\]

Let \(x=\cosθ, \Theta (θ)=y (x)\), so \(\frac{\mathrm{d}}{\mathrm{d}θ}=-\sinθ \frac{\mathrm{d}}{\mathrm{d} x} \Rightarrow\)

\(\displaystyle \frac{\mathrm{d}}{\mathrm{d} x} \left((1-x^2) \frac{\mathrm{d} y}{\mathrm{d} x}\right)+\left(λ-\frac{m^2}{1-x^2}\right) y=0 \quad -1<x<1\)(*)

This is called the associated Legendre equation.

This equation has regular singular points at \(x=\pm 1\). The solutions are in general unbounded there; bounded solutions occur only when \(λ=l (l+1)\) for some non-negative integer \(l\). The corresponding solutions are denoted \(P_l^m (x)\) and are called the associated Legendre functions.

If we write (*) as

\[ℒy=-((1-x^2) y')'+\frac{m^2}{1-x^2} y=λy \text{, which has Sturm-Liouville form}\]

Eigenvalues \(λ_l=l (l+1)\), and eigenfunctions \(P_l^m (x)\), so \(P_l^m (x)\) satisfy orthogonality conditions

\[\int_{-1}^1 P_l^m (x) P_k^m (x) \mathrm{d} x=0 \text{ if } k≠l\]

(Q5 on problem sheet 3)

Alternatively, we can think of \(m\) as the eigenvalue, and write (*)

\[\tilde{ℒ} y=-((1-x^2) y')'-l (l+1) y=-\frac{m^2}{1-x^2} y\]

Eigenfunctions \(P_l^m (x)\) satisfy another orthogonality relation

\[\int_{-1}^1 P_l^m (x) P_l^n (x) \frac{1}{1-x^2} \mathrm{d} x=0 \text{ for } n≠m\]

So separable solutions of Laplace's equation are

\[u (r,θ, ϕ)=\left(Cr^l+\frac{D}{r^{l+1}}\right) P_l^m (\cosθ) (A \cos m ϕ+B \sin m ϕ)\]

If \(m=0\), we have the Legendre equation

\[(1-x^2) y''-2 xy'+l (l+1) y=0\]

The bounded solutions are the Legendre functions \(P_l (x)\), which are polynomials of degree \(l\).

To see this, seek solutions as a Frobenius series about \(x=-1\). Write \(x=-1+X\), so the equation becomes

\[(2 X-X^2) y_{X X}+(2-2 X) y_X+l (l+1) y=0\]

Then substitute \(y=\sum_{k=0}^{\infty} a_k X^{\alpha+k}\) and compare coefficients of each power of \(X\). The indicial equation is

\[2 \alpha^2=0 \Rightarrow \alpha=0 \text{, repeated}\]

and the recurrence relation is

\[a_{k+1}=\frac{k (k+1)-l (l+1)}{2 (k+1)^2} a_k \text{ for } k \geqslant 0\]

So we see that \(a_{l+1}=0\), and hence \(a_k=0\) for all \(k > l\). So

\[y (x)=\sum_{k=0}^l a_k (x+1)^k\]

is a polynomial of degree \(l\).

In fact \(P_l (x)\) can be expressed using Rodrigues' formula

\[P_l (x)=\frac{1}{2^l l!} \frac{\mathrm{d}^l}{\mathrm{d} x^l} [(x^2 -1)^l]\]

and it can also be shown that

\[P_l^m (x)=(-1)^m (1-x^2)^{\frac{m}{2}} \left(\frac{\mathrm{d}}{\mathrm{d} x}\right)^m P_l (x)\]
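Rodrigues' formula is easy to check against a library's Legendre polynomials; a sketch assuming sympy:

```python
import sympy as sp

x = sp.symbols('x')
for l in range(5):
    rodrigues = sp.diff((x**2 - 1)**l, x, l) / (2**l * sp.factorial(l))
    print(sp.expand(rodrigues - sp.legendre(l, x)))   # 0 for each l
```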

Hermite polynomials

\[y''-2 xy'+λ y=0\]

This has polynomial solutions if \(λ=2 n\) for integer \(n\).

These solutions are the Hermite polynomials \(H_n (x)\), which satisfy the orthogonality relation

\[\int_{-\infty}^{\infty} H_m (x) H_n (x) \mathrm{e}^{-x^2} \mathrm{d} x=0 \text{ for } m≠n\]

[Since, in Sturm-Liouville form \((\mathrm{e}^{-x^2} y')'=-λ\mathrm{e}^{-x^2} y\)]

Chebyshev polynomials

\[(1-x^2) y''-xy'+λ y=0\]

is like Legendre's equation except that \(-2 x\) is replaced by \(-x\).

This has polynomial solutions (the Chebyshev polynomials) when \(λ=n^2\); they satisfy

\[\int_{-1}^1 y_m (x) y_n (x) \frac{1}{(1-x^2)^{\frac{1}{2}}} \,\mathrm{d} x=0 \text{ for } m≠n\]

Definition. (“Big O” notation) We write \(f (x)=O (g (x))\) as \(x →x_0\) if there exists constant \(A>0\) such that \(| f (x) |<A | g (x) |\) for \(x\) sufficiently close to \(x_0\). We say ‘\(f\) is of order \(g\)’ to capture the idea that \(f (x)\) and \(g (x)\) are “roughly the same size” in the limit as \(x→x_0\).

Example. \(\sin (x)=O (x)\) as \(x→0\)

\(x^2+\frac{2}{x}-e^{-x}\) is \(O (x^2)\) as \(x→∞\), it is \(O \left(\frac{1}{x}\right)\) as \(x→0\), it is \(O (e^{-x})\) as \(x→-∞\)

Definition. (“Twiddles” notation) We write \(f (x)∼g (x)\) if \(\frac{f (x)}{g (x)}→1\) as \(x→x_0\)

This notation could be read as ‘\(f\) is asymptotic to \(g\)’ as \(x→x_0\), and captures the idea of two functions being approximately equal in some limit.

Example. \(x^2+\frac{2}{x}-e^{-x}∼\frac{2}{x}\) as \(x→0\)

Definition. (“Little o” notation) We write \(f (x)=o (g (x))\) as \(x→x_0\) if \(\lim_{x→x_0} \frac{f (x)}{g (x)}=0\).

We also write \(f (x) \ll g (x)\).

Example. \(\sin (x^2)=o (x)\) as \(x→0\), \(\frac{1}{x^2}+e^{-x}=o \left(\frac{1}{x}\right)\) as \(x →∞\)

Asymptotic series and asymptotic expansions

We are most interested in what happens when a parameter is small; we typically use \(ε\) as the parameter, and consider approximations of a function \(f (ε)\) as \(ε →0\).

Example. \(\sin (ε^{1/2})≈ε^{1/2}-\frac{1}{6} ε^{3/2}+⋯\)

Definition. A set of functions \(\{\phi_k (ε)\}_{k=0, 1, 2,…}\) is an asymptotic sequence as \(ε→0\) if \(\phi_{k+1} (ε)=o (\phi_k (ε))\) as \(ε →0\), i.e. each term in the sequence is of smaller magnitude than the previous term.

Example. \(\{1,ε,ε^2,…\}, \{1,ε^{1/2}, ε,ε^{3/2},…\}, \left\{\log \left(\frac{1}{ε}\right), \log \left(\log \left(\frac{1}{ε}\right)\right), \log \left(\log \left(\log \left(\frac{1}{ε}\right)\right)\right),…\right\}\)

Definition. A function \(f (ε)\) has an asymptotic expansion of the form \(f (ε)=\sum_{k=0}^{∞} a_k \phi_k (ε)\) as \(ε→0\), if

  • the gauge functions \(\{\phi_k (ε) \}\) form an asymptotic sequence

  • \(f (ε)-\sum_{k=0}^N a_k \phi_k (ε)=o (\phi_N (ε))\) as \(ε→0\) (i.e. the partial sums are better and better approximations the more terms we include.)

Example. \(\sin (ε^{1/2}) =ε^{1/2}-\frac{1}{6}ε^{3/2}+⋯\)

In general, if \(f (ε)\) is smooth and has a Taylor series, the Taylor series provides an asymptotic expansion.

Remark. The definition of an asymptotic expansion differs from that of a convergent expansion. For a series \(f (ε)=\sum_{k=0}^{∞} a_k \phi_k (ε)\) to converge to \(f (ε)\), we need the partial sums \(\sum_{k=0}^N a_k \phi_k (ε)\) to tend towards \(f (ε)\) as \(N→∞\) (for fixed \(ε\)).

For the series to be asymptotic, we need partial sums to tend towards \(f (ε)\) as \(ε→0\) (for fixed \(N\)).

It is quite often the case that an asymptotic expansion is divergent:

Example. \(f (ε)=\int_0^{ε} e^{-\frac{1}{s}} \,\mathrm{d} s=\int_0^{ε} s^2 \frac{e^{-\frac{1}{s}}}{s^2} \,\mathrm{d} s=\left[ s^2 e^{-\frac{1}{s}} \right]_0^{ε}-\int_0^{ε} 2 se^{-\frac{1}{s}} \,\mathrm{d} s=⋯ \text{(integrating by parts repeatedly)}=e^{-\frac{1}{ε}} [ε^2-2ε^3+⋯]∼e^{-\frac{1}{ε}} \sum_{k=0}^{∞} (-1)^k (k+1)!\, ε^{k+2}\)

This series is divergent, but it is an asymptotic expansion.

Given a choice of gauge functions \(\phi_k (ε)\), the coefficients \(a_k\) are unique, but the choice of \(\phi_k\) is not.

Example. \(\tanε= \frac{\sin ε}{\cosε}=\frac{ε-\frac{1}{6} ε^3+⋯}{1-\frac{1}{2} ε^2+⋯}=\left(ε-\frac{1}{6}ε^3+⋯\right) \left(1+\frac{1}{2}ε^2+⋯\right)=ε+\frac{1}{3} ε^3+⋯∼\sinε+ \frac{1}{2} (\sin ε)^3+⋯\)

We usually use powers of \(ε\), sometimes exponentials or logs.

The function defines the expansion, but the expansion does not uniquely define the function.

Example. \(\frac{1}{1 -ε}∼1+ε+ε^2+⋯\)

\(\frac{1}{1 -ε}+e^{-\frac{1}{ε}}∼1+ε+ ε^2+⋯\), since \(e^{-\frac{1}{ε}}=o (ε^k)\) for any \(k\) (\(e^{-\frac{1}{ε}}\) is ‘exponentially small’)

Asymptotic approximation of solutions to an algebraic equation

Suppose we wish to find solutions of an equation \(F (x,ε)=0\)

Example. \(x^2+εx-1=0\)

We can solve exactly in this case \(x=\frac{1}{2} \left(-ε±\sqrt{4+ε^2}\right)\) which we can binomially expand to give

\[x_+=1-\frac{1}{2}ε+ \frac{1}{8} ε^2+O (ε^3), \quad x_-=-1-\frac{1}{2}ε-\frac{1}{8} ε^2+O (ε^3)\]

but could we find these approximations without being able to solve exactly?

Suppose we can expand the solution as \(x∼x_0+x_1ε+ x_2 ε^2+⋯\)

Substitute into the equation\(⇒(x_0+εx_1+ε^2 x_2+⋯)^2+ε (x_0+εx_1+…)-1=0\)

\(⇒x_0^2-1+ε(2 x_0 x_1+x_0)+ε^2 (2 x_0 x_2+x_1^2+x_1)+⋯=0\)

\(O (ε^0)\): \(x_0^2=1 ⇒x_0=±1\)

\(O (ε^1)\): \(2 x_0 x_1+x_0=0 ⇒x_1=-\frac{1}{2}\)

\(O (ε^2)\): \(2 x_0 x_2+x_1^2+x_1=0 ⇒x_2=±\frac{1}{8}\)

\(⇒x=±1-\frac{1}{2}ε±\frac{1}{8} ε^2+⋯\) as expected.
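A quick numerical confirmation, assuming numpy (\(ε=0.01\) is an arbitrary test value):

```python
import numpy as np

eps = 1e-2
exact = sorted(np.roots([1.0, eps, -1.0]))   # roots of x^2 + εx - 1
x_minus = -1 - eps/2 - eps**2/8
x_plus = 1 - eps/2 + eps**2/8
print(exact, [x_minus, x_plus])              # differ by O(ε^3)
```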


1 Asymptotic applications to algebraic equations

Example. \(ε x^2+x-1=0\). Find approximations to the solutions as \(ε→0\).

This is an example of a singular perturbation problem.

Setting \(ε=0\) reduces the order of the problem and hence the number of solutions.

Seeking a solution as a regular asymptotic expansion \(x∼x_0+ε x_1+ε^2 x_2+⋯\)

\(⇒ε(x_0+ε x_1+⋯)^2+(x_0+ε x_1+⋯)-1=0\)

\(O(ε^0):x_0-1=0⇒x_0=1\)

\(O(ε^1):x_0^2+x_1=0⇒x_1=-1\)

\(O(ε^2):2 x_0 x_1+x_2=0⇒x_2=2\)

\(⇒x∼1-ε+2 ε^2+O(ε^3)\)

But this only finds one solution.

From a sketch of the graph we know the second root is at a large negative value of \(x\).

We must rescale \(x\) so that the \(ε x^2\) term is comparable to one of the other terms in a dominant balance.

• If \(x=O\left(\frac{1}{ε^{1/2}}\right)\), then \(ε x^2\) and 1 balance, but the \(x\) term is then larger, so this is not a dominant balance.

• If \(x=O\left(\frac{1}{ε}\right)\), then \(ε x^2\) and \(x\) are both \(O\left(\frac{1}{ε}\right)\) while 1 is smaller, so the first two terms are in dominant balance.

So we rescale \(x=\frac{1}{ε}X\), and rewrite equation in terms of \(X\).

\(⇒\frac{1}{ε}X^2+\frac{1}{ε}X-1=0⇒X^2+X-ε=0\) (This is now a regular perturbation problem.)

Now seek an asymptotic expansion for \(X∼X_0+ε X_1+ε^2 X_2+⋯\)

\(⇒(X_0+ε X_1+ε^2 X_2+⋯)^2+X_0+ε X_1+⋯-ε=0\)

\(O(ε^0):X_0^2+X_0=0⇒X_0=-1\) (the other root, \(X_0=0\), corresponds to the solution we have already found: on this rescaled axis the \(O(1)\) root sits close to 0, so we are not interested in it here)

\(O(ε^1):2 X_0 X_1+X_1-1=0⇒X_1=-1\)

\(O(ε^2):X_1^2+2 X_0 X_2+X_2=0⇒X_2=1\)

\(⇒X∼-1-ε+ε^2+O(ε^3)⇒x=-\frac{1}{ε}-1+ε+O(ε^2)\)

We can confirm this expansion by solving the equation exactly, \(x=\frac{1}{2 ε}\left(-1±\sqrt{1+4 ε}\right)\), and expanding.
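Numerically (numpy assumed, again with \(ε=0.01\)):

```python
import numpy as np

eps = 1e-2
exact = sorted(np.roots([eps, 1.0, -1.0]))   # roots of εx^2 + x - 1
singular = -1/eps - 1 + eps                  # large root from the rescaling
regular = 1 - eps + 2*eps**2                 # O(1) root found first
print(exact, [singular, regular])
```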

Example. Find an approximation for the solutions to \(xe^{-x}=ε\) when \(ε\ll 1\).

This is also a singular perturbation problem (when \(ε=0\) there is one solution, \(x=0\); when \(ε≈0\) there is one solution near 0 and one solution that is quite large).

For the solution near \(x=0\), note that \(e^{-x}≈1\), so we have \(x≈ε\). So write \(x=ε X\); then \(Xe^{-ε X}=1\), i.e. \(X=e^{ε X}\). Then look for an asymptotic expansion\[X∼X_0+ε X_1+⋯⇒X_0+ε X_1+⋯=1+ε(X_0+εX_1+⋯)+\frac{1}{2}ε^2{(X_0+⋯)^2}+⋯\]

\(O(ε^0):X_0=1\)

\(O(ε^1):X_1=X_0⇒X_1=1\)

\(O(ε^2):X_2=X_1+\frac{1}{2}X^2_0⇒X_2=\frac{3}{2}\)

\(⇒x=ε\left(1+ε+\frac{3}{2}ε^2+⋯\right)=ε+ε^2+\frac{3}{2}ε^3+⋯\)

For a solution at large \(x\), take log of the equation \(x-\log x=-\log ε=\log\frac{1}{ε}\)

We need \(x\gg 1\) in which case \(\log x\ll x\), so the dominant balance is when \(x=O\left(\log\frac{1}{ε}\right)\)

We rescale \(x=\left(\log\frac{1}{ε}\right)X⇒\left(\log\frac{1}{ε}\right)X-\left(\log\log\frac{1}{ε}+\log X\right)=\log\frac{1}{ε}\)

\(⇒X-\frac{\log\left(\log\frac{1}{ε}\right)}{\log\frac{1}{ε}}-\frac{\log X}{\log\frac{1}{ε}}=1\)

It is not obvious how to pose the asymptotic expansion for \(X\) in this case. So write a general expansion \(X∼1+ϕ_1(ε)+ϕ_2(ε)+⋯\) where \(ϕ_k(ε)=o(ϕ_{k-1}(ε))\)

\(⇒1+ϕ_1+ϕ_2+⋯-\frac{\log\left(\log\frac{1}{ε}\right)}{\log\frac{1}{ε}}-\frac{\left[(ϕ_1+ϕ_2+⋯)-\frac{1}{2}(ϕ_1+ϕ_2+⋯)^2+⋯\right]}{\log\frac{1}{ε}}=1\)

[\(\log(1+u)=u-\frac{1}{2}u^2+⋯\)]

Note that \(\frac{ϕ_1}{\log\frac{1}{ε}}\ll ϕ_1\), so the next dominant terms here are

\[ϕ_1-\frac{\log\left(\log\frac{1}{ε}\right)}{\log\frac{1}{ε}}=0⇒ϕ_1=\frac{\log\left(\log\frac{1}{ε}\right)}{\log\frac{1}{ε}}\]

At next order, we see that \(ϕ_2=\frac{ϕ_1}{\log\frac{1}{ε}}=\frac{\log\left(\log\frac{1}{ε}\right)}{\left(\log\frac{1}{ε}\right)^2}\)

So \(X∼1+\frac{\log\left(\log\frac{1}{ε}\right)}{\log\frac{1}{ε}}+\frac{\log\left(\log\frac{1}{ε}\right)}{\left(\log\frac{1}{ε}\right)^2}+⋯\)

So \(x∼\log\frac{1}{ε}+\log\left(\log\frac{1}{ε}\right)+\frac{\log\left(\log\frac{1}{ε}\right)}{\log\frac{1}{ε}}+⋯\)
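Both asymptotic roots can be compared with numerical root-finding; a sketch assuming scipy, for \(ε=10^{-3}\):

```python
import numpy as np
from scipy.optimize import brentq

eps = 1e-3
F = lambda x: x * np.exp(-x) - eps
small = brentq(F, 0.0, 1.0)    # root near 0 (x e^{-x} increases on [0, 1])
large = brentq(F, 1.0, 50.0)   # large root
L = np.log(1/eps)
print(small, eps + eps**2 + 1.5*eps**3)       # small-root expansion
print(large, L + np.log(L) + np.log(L)/L)     # large-root expansion
```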


8 Asymptotic analysis for BVPs

8.1 Regular perturbation expansions

Example 8.30. Find an approximate solution for \(0< ε\ll 1\) to \(y''=-\frac{1}{1+ε y^2},0<x< 1,y(0)=y(1)=0\).

Look for solution \(y(x ; ε)∼y_0(x)+ε y_1(x)+⋯\)

\[⇒y_0''+ε y_1''+⋯=-\frac{1}{1+ε(y_0+ε y_1+⋯)^2}∼-1+ε y_0^2+⋯\]

with boundary conditions \(y_0(0)+ε y_1(0)+⋯=0,y_0(1)+ε y_1(1)+⋯=0\)

\(O(ε^0):y_0''=-1,y_0(0)=y_0(1)=0⇒y_0(x)=\frac{1}{2}x(1-x)\)

\(O(ε^1):y''_1=y_0^2=\frac{1}{4}x^2(1-x)^2,y_1(0)=y_1(1)=0⇒y_1(x)=-\frac{1}{240}x(1-x)(2 x^4-4 x^3+x^2+x+1)\)
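A numerical check of this expansion, assuming scipy (with the arbitrary test value \(ε=0.1\)):

```python
import numpy as np
from scipy.integrate import solve_bvp

eps = 0.1

def rhs(x, Y):
    # Y[0] = y, Y[1] = y'
    return np.vstack([Y[1], -1.0 / (1.0 + eps * Y[0]**2)])

def bc(Ya, Yb):
    return np.array([Ya[0], Yb[0]])   # y(0) = y(1) = 0

x = np.linspace(0.0, 1.0, 101)
sol = solve_bvp(rhs, bc, x, np.zeros((2, x.size)))

y0 = x * (1 - x) / 2
y1 = -x * (1 - x) * (2*x**4 - 4*x**3 + x**2 + x + 1) / 240
print(np.max(np.abs(sol.sol(x)[0] - (y0 + eps * y1))))   # small: error is O(ε^2)
```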

Example 8.32. Find approximate solutions for \(ε\ll 1\) to \(y'=y-ε y^2,x>0\) with \(y(0)=1\).

We write \(y(x ; ε)∼y_0(x)+ε y_1(x)+⋯\)

\[⇒y_0'+ε y_1'+⋯=y_0+ε y_1+⋯-ε(y_0+ε y_1+⋯)^2\]

with \(y_0(0)+ε y_1(0)+⋯=1\)

\(O(ε^0):y_0'=y_0\) with \(y_0(0)=1\)

\[⇒y_0=e^x\]

\(O(ε^1)\): \(y_1'=y_1-y_0^2=y_1-e^{2 x},y_1(0)=0⇒y_1=e^x-e^{2 x}\)

So \(y(x ; ε)∼e^x+ε(e^x-e^{2 x})+⋯\)

Note that we are assuming successive terms in the expansion are smaller than the previous ones i.e.\(ε^k y_k(x)=o(ε^{k-1}y_{k-1}(x))\). As \(x\) becomes large, this is no longer the case. Explicitly, when \(x=O\left(\log\frac{1}{ε}\right)\) we have \(e^x∼ε e^{2 x}\). The asymptotic expansion breaks down when \(x\) is this large.

In this example, we can solve exactly \(y(x ; ε)=\frac{e^x}{1+ε\left({e^x}-1\right)}\). Treating \(x\) as \(O(1)\), this has the expansion we found.

If we write \(x=\log\frac{1}{ε}+X\), it becomes \(y=\frac{1}{ε}\frac{e^X}{1+e^X-ε}\) and treating \(X\) as \(O(1)\), \(y→\frac{1}{ε}\) as \(x→∞\)

Key observation: different asymptotic expansions may be needed for different values of \(x\)

8.2 Singular perturbation problems

Example. \(ε y'+y=e^{-x}\) for \(x>0\), \(y(0)=0\)

Since \(ε\) multiplies the highest derivative, setting \(ε\) to zero will in general mean we can't satisfy all the boundary conditions. This suggests the existence of a boundary layer, where the derivative \(y'\) is large, and a different expansion is needed.

If we seek a regular expansion \(y(x ; ε)∼y_0(x)+ε y_1(x)+⋯\) we find

\(O(1):y_0=e^{-x}\)

\(O(ε):y_0'+y_1=0⇒y_1=e^{-x}\)

We are never able to satisfy \(y(0)=0\) with this expansion.

The exact solution is \(y(x ; ε)=\frac{e^{-x}}{1-ε}-\frac{e^{-x/ε}}{1-ε}\)

The expansion we found [the outer expansion] above \(y∼e^{-x}+ε e^{-x}+⋯\) does a good job when \(x\) is not close to 0. But it does not work at all near \(x=0\).

When \(x=O(ε)\), we need a different expansion [this is called the inner expansion]. Write \(x=ε X\) and treat \(X=O(1)\). Then\[y=\frac{e^{-ε X}}{1-ε}-\frac{e^{-X}}{1-ε}∼\frac{1-ε X+⋯}{1-ε}-e^{-X}(1+ε+⋯)∼1-e^{-X}+ε(1-X-e^{-X})+O(ε^2)\]This is a good approximation to the solution when \(x=O(ε)\).

Note that as \(X→∞\) in the inner expansion, the leading-order inner solution tends towards 1, which is consistent with the limit of the leading-order outer solution \(y=e^{-x}\) as \(x→0\).

We say the expansions match, i.e. \(\lim_{X→∞}y_{\operatorname{inner},0}=\lim_{x→0}y_{\operatorname{outer},0}\). The two expansions together [\(y_{\operatorname{inner}}\) and \(y_{\operatorname{outer}}\)] are called a matched asymptotic expansion.

We can find a composite expansion valid for all \(x\), by combining the leading order inner and outer expansions.

\[\begin{array}{rl}y_{\operatorname{comp}}(x)&=y_{\operatorname{outer},0}(x)+y_{\operatorname{inner},0}(x)-(\text{common limit})\\&=e^{-x}+1-e^{-X}-1\\&=e^{-x}-e^{-x/ε}\end{array}\]
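Numerically (numpy assumed), the composite expansion is uniformly within \(O(ε)\) of the exact solution:

```python
import numpy as np

eps = 0.05
x = np.linspace(0.0, 1.0, 11)
exact = (np.exp(-x) - np.exp(-x/eps)) / (1 - eps)
composite = np.exp(-x) - np.exp(-x/eps)
print(np.max(np.abs(exact - composite)))   # about eps/(1-eps), i.e. O(ε)
```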

Matched asymptotic expansions

Example. \(ε y'+y=e^{-x},x>0,y(0)=0\)

Last time we found the exact solution \(y=\frac{e^{-x}-e^{-\frac{x}{ε}}}{1-ε}\)

Outer expansion \(y∼e^{-x}+ε e^{-x}+⋯\)

Inner expansion \(x=ε X,y∼1-e^{-X}+O(ε)\)

Could we have found these expansions without having the exact solution?

For the outer expansion, we seek expansion \(y(x,ε)∼y_0(x)+ε y_1(x)+⋯\)

\(⇒O(1):y_0(x)=e^{-x}\)

\(O(ε):y_1(x)=e^{-x}\)

For the inner expansion (needed since outer solution doesn't satisfy \(y(0)=0\)), we write \(x=ε X\).

and write \(y(x)=Y(X)\), so \(\frac{\mathrm{d}}{\mathrm{d}x}=\frac{1}{ε}\frac{\mathrm{d}}{\mathrm{d}X}\). The equation becomes \(\frac{\mathrm{d}Y}{\mathrm{d}X}+Y=e^{-ε X}=1-ε X+⋯\) which we need to solve with \(Y(0)=0\), and with matching to the outer solution as \(X→∞\).

Pose expansion \(Y(X;ε)∼Y_0(X)+ε Y_1(X)+⋯\)

\(⇒O(1):\frac{\mathrm{d}Y_0}{\mathrm{d}X}+Y_0=1\) with \(Y_0(0)=0\)

\(⇒Y_0=1+Ae^{-X}=1-e^{-X}\), which agrees with the inner expansion found from the exact solution

Example. \(ε y''+y'=1\) in \(0<x<1,y(0)=y(1)=0\)

Find an approximate solution for \(0<ε≪1\).

Begin by seeking an outer expansion \(y(x;ε)∼y_0(x)+ε y_1(x)+⋯\)

\(⇒O(1):y_0'=1\) with \(y_0(0)=y_0(1)=0\)

It is not possible to satisfy both boundary conditions.

So we expect to need a boundary layer in which \(ε y''\) must be important. Which condition should we make the outer solution satisfy?

We'll suppose there is a boundary layer near \(x=0\). Then the outer solution should satisfy \(y_0(1)=0⇒y_0(x)=x-1\)

For the inner solution, near \(x=0\), we write \(x=δ X\) for \(δ(ε)≪1\) and write \(y(x)=Y(X)\).

Then \(\frac{\mathrm{d}}{\mathrm{d}x}=\frac{1}{δ}\frac{\mathrm{d}}{\mathrm{d}X}\), so the equation becomes

\[\frac{ε}{δ^2}\frac{\mathrm{d}^2 Y}{\mathrm{d}X^2}+\frac{1}{δ}\frac{\mathrm{d}Y}{\mathrm{d}X}=1\]

We achieve a dominant balance by choosing \(δ=ε\).

Then \(\frac{\mathrm{d}^2 Y}{\mathrm{d}X^2}+\frac{\mathrm{d}Y}{\mathrm{d}X}=ε\). Posing \(Y(X;ε)∼Y_0(X)+ε Y_1(X)+⋯\) gives \(\frac{\mathrm{d}^2 Y_0}{\mathrm{d}X^2}+\frac{\mathrm{d}Y_0}{\mathrm{d}X}=0\); we need \(Y_0(0)=0\) and \(Y_0(X)\) to match the outer solution as \(X→∞\).

\[\lim_{X→∞}Y_0(X)=\lim_{x→0}y_0(x)=-1\]

\(Y_0=A\mathrm{e}^{-X}+B=B(1-\mathrm{e}^{-X})\) [so \(Y_0(0)=0\)]\(=e^{-X}-1\) [\(B=-1\) so \(Y→-1\) as \(X→∞\)]

We can combine the outer and inner expansions to obtain the composite expansion \(y_{\text{comp}}(x)=x-1+(e^{-X}-1)+1=x-1+e^{-x/ε}\)
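For this problem the exact solution can be found by integrating twice: \(y=x-\frac{1-e^{-x/ε}}{1-e^{-1/ε}}\). A quick comparison with the composite expansion (numpy assumed):

```python
import numpy as np

eps = 0.02
x = np.linspace(0.0, 1.0, 11)
exact = x - (1 - np.exp(-x/eps)) / (1 - np.exp(-1/eps))
composite = x - 1 + np.exp(-x/eps)
print(np.max(np.abs(exact - composite)))   # exponentially small difference
```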

What if we had supposed a boundary layer near \(x=1\)?

Then outer solution satisfies \(y(0)=0\), so \(y_0(x)=x\).

For the inner solution, we write \(x=1-δ X\), with \(δ≪1\) and \(y(x)=Y(X)\). Again \(\frac{\mathrm{d}}{\mathrm{d}x}=-\frac{1}{δ}\frac{\mathrm{d}}{\mathrm{d}X}\)

So equation \(\frac{ε}{δ^2}\frac{\mathrm{d}^2 Y}{\mathrm{d}X^2}-\frac{1}{δ}\frac{\mathrm{d}Y}{\mathrm{d}X}=1\)

Choose \(δ=ε\) again to achieve dominant balance.

The leading order terms becomes \(\frac{\mathrm{d}^2 Y_0}{\mathrm{d}X^2}-\frac{\mathrm{d}Y_0}{\mathrm{d}X}=0\) with \(Y_0(0)=0\) [ie. \(y(1)=0\)] and \(\lim_{X→∞}Y_0(X)=1\) to match the outer solution [\(y_0(1)=1\)].

\(Y_0=Ae^X+B\) cannot satisfy \(\lim_{X→∞}Y_0(X)=1\), unless \(A=0\), but in that case there is no boundary layer.

So a boundary layer near \(x=1\) is not possible.

A more general example

\(ε y''+P_1(x)y'+P_0(x)y=R(x)\) for \(a<x<b\) with \(y(a)=0,y(b)=0\).

Suppose \(P_1(x)\) is smooth, and non-zero on \([a,b]\).

If the boundary layer is near \(x=a\), we write \(x=a+ε X\), \(y(x)=Y(X)\), and the equation becomes \(\frac{1}{ε}\frac{\mathrm{d}^2 Y}{\mathrm{d}X^2}+\frac{1}{ε}(P_1(a)+ε P_1'(a)X+⋯)\frac{\mathrm{d}Y}{\mathrm{d}X}+(P_0(a)+⋯)Y=R\) ⇒ at leading order, \(\frac{\mathrm{d}^2 Y}{\mathrm{d}X^2}+P_1(a)\frac{\mathrm{d}Y}{\mathrm{d}X}=0\). The exponentially decaying solution needed for a boundary layer to work requires \(P_1(a)>0\).

Near \(x=b\), write \(x=b-ε X\), \(y(x)=Y(X)\); we find \(\frac{\mathrm{d}^2 Y}{\mathrm{d}X^2}-P_1(b)\frac{\mathrm{d}Y}{\mathrm{d}X}=0\). This has a decaying solution as \(X→∞\) if \(P_1(b)<0\).