Adjoint of a closed-range operator is closed-range
Theorem 3.11.
Let $X$ and $Y$ be Hilbert spaces and $T ∈ B(X, Y)$. We prove that if $W := T^*Y$ is closed in $X$, then also $TX$ is closed in $Y$, i.e. $T X = \overline{T X}$.
For this we set $Z = \overline{T X}$ and need to show that $Z ⊂ T X$, as the reverse inclusion is trivially satisfied. We will prove this by showing that the identity map $I_Z$ can be obtained as a composition of the map $T$ with a suitable map from $Z$ to $W ⊂ X$, which we construct as follows:
As $T$ maps $X$ into $Z$ we can view it instead as a map into the Banach space $Z$ and we call the resulting map $S$, i.e. $S ∈ B(X, Z)$ is simply given by $Sx=Tx$ for all $x ∈ X$.
The adjoint $S^*$ of $S$ is an operator from $Z$ to $X$.
Then $Z = \overline{\operatorname{Im}S} = (\operatorname{Ker}S^*)^⊥$, where the orthogonal complement is taken in $Z$; hence $\operatorname{Ker}S^*=\{0\}$, i.e. $S^*$ is injective.
Schematically:
$$X\overset{S}\twoheadrightarrow Z \hookrightarrow Y, \qquad Y\xrightarrow{T^*}W\hookrightarrow X, \qquad Y\overset{P}\twoheadrightarrow Z\overset{V}\simeq W\hookrightarrow X$$
We claim that $\operatorname{Im}S^* = W$. To this end we let $P$ be the orthogonal projection from $Y$ onto $Z$ and compute, for $x ∈ X$ and $y ∈ Y$,
$\langle T x, y\rangle_Y=\langle S x, P y\rangle_Y=\left\langle x, S^* P y\right\rangle_X$.
This shows that $T^*=S^* \circ P$. Since $P$ maps $Y$ onto $Z$, we get $\operatorname{Im} S^*=S^*(Z)=S^*(PY)=\operatorname{Im} T^*=W$, as claimed.
So $S^*$ can be regarded as a bounded bijective linear operator from $Z$ to $W$. To make the notation clearer, we rename it as $V \in \mathscr{B}(Z, W)$, $V z=S^* z$ for all $z \in Z$. As both $Z$ and $W$ are Banach spaces, we can apply the inverse mapping theorem to deduce that $V$ is invertible, which in turn implies that $V^*$ is also invertible.
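For the last implication, note that inversion and taking adjoints commute; spelled out:
\[
\bigl(V^{-1}V\bigr)^*=V^*\,(V^{-1})^*=I_Z,\qquad \bigl(VV^{-1}\bigr)^*=(V^{-1})^*\,V^*=I_W,
\]
so $(V^*)^{-1}=(V^{-1})^*$ exists and is bounded.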
We finally claim that
\[
I_Z=T \circ\left(V^*\right)^{-1},
\]
which then immediately implies that $Z \subset T X$ and hence that indeed $T X=Z=\overline{T X}$.
To show this, i.e. that $z=T\left(\left(V^*\right)^{-1} z\right)$ for any given $z \in Z$, we write for short $w=\left(V^*\right)^{-1} z$, note that both $z$ and $T w$ are elements of $Z$ and that for any $y \in Z$ :
\[
\langle T w, y\rangle_Y=\langle S w, y\rangle_Y=\left\langle w, S^* y\right\rangle_X=\langle w, V y\rangle_X=\left\langle V^* w, y\right\rangle_Y=\langle z, y\rangle_Y .
\]
Since this holds for all $y \in Z$, so in particular for $y=T w-z$, we deduce that $T w=z$ and so $T \circ\left(V^*\right)^{-1}=I_Z$ as desired.
If K is a compact operator, then I + K is Fredholm
Lemma 16.25. Let $K : X → X$ be a compact operator. Then $I + K$ is Fredholm.
Proof: First we consider the kernel of $I+K$.
For all $x\in\ker(I+K)$, $Kx=-x$.
So $B=K(B)$, where $B$ is the closed unit ball in $\ker(I+K)$ (indeed $K(B)=-B=B$).
So $B$ is the image of a bounded set under a compact operator, hence precompact. But $B$ is also closed, so $B$ is compact. By Riesz's lemma, $\ker(I+K)$ is finite-dimensional.
Next we show that $\operatorname{Ran}(I+K)$ is closed. By Lemma 16.17 it suffices to show that if $x_i$ is a bounded sequence such that $x_i+K x_i$ converges to some $y \in X$, then there is $x \in X$ with $x+K x=y$. Since $\{x_i\}$ is bounded, there is a subsequence $x_{i_j}$ such that $\{K x_{i_j}\}$ converges. But then $x_{i_j}=(x_{i_j}+Kx_{i_j})-Kx_{i_j}$ converges to some $x \in X$, and by continuity $x+Kx=y$. Thus the operator $I+K$ is semi-Fredholm. Applying the same argument to the adjoint $I+K^*$ (the operator $K^*$ is compact by Schauder's theorem) completes the proof.
Uniformly Convex Spaces
THEOREM 1. The uniformly convex product of a finite number of uniformly convex Banach spaces is uniformly convex.
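For reference, recall that a Banach space $X$ is uniformly convex when for every $ε∈(0,2]$ there is a $δ>0$ such that
\[
‖x‖⩽1,\quad ‖y‖⩽1,\quad ‖x-y‖⩾ε \ \Longrightarrow\ \Bigl\|\frac{x+y}{2}\Bigr\|⩽1-δ .
\]
The "uniformly convex product" refers to the finite product equipped with a suitable norm; the statement above does not fix one, and a common choice (an assumption here) is the $\ell^2$-combination $‖(x_1,…,x_n)‖=\bigl(\sum_i‖x_i‖_{X_i}^2\bigr)^{1/2}$ of the factor norms.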
Sheet4 A1
Let $X=C([0,2], ℂ)$ with the sup norm, let
\[
g(t)= \begin{cases}t & \text{if } 0 ⩽ t ⩽ 1, \\ 1 & \text{if } 1 < t ⩽ 2,\end{cases}
\]
and define $T∈B(X)$ by $(Tf)(t)=g(t)f(t)$. Find $‖T‖$, $σ_p(T)$, $σ_{ap}(T)$ and $σ(T)$.
Solution. $‖Tf‖=\sup|gf|≤\sup|g|\sup|f|=\sup|f|=‖f‖⇒‖T‖≤1$.
$‖T1‖=1=‖1‖⇒‖T‖=1$.
[$⇒r_σ(T)≤‖T‖=1.$]
Now $σ_p(T)=?$
$Tf=λf$ i.e. $(g-λ)f=0$
i.e. $(g(t)-λ)⋅f(t)=0∀t$
If there is no $t$ such that $g(t)=λ$, then $f≡0$ and $λ$ isn't an eigenvalue.
If $∃t_0$ such that $g(t_0)=λ$, then $λ∈[0,1]$, since the range of $g$ is $[0,1]$.
Case 1. $λ<1$: then $t_0$ is unique (indeed $t_0=λ∈[0,1)$)
$⇒f(t)=0∀t≠t_0$
$f$ is continuous$⇒f≡0$
$⇒λ∉σ_p(T)$.
Case 2. $λ=1$: $g(t)=1$ iff $t≥1$.
$⇒Tf=f$ iff $f=0$ on $[0,1]$
e.g. $f(t)=(t-1)χ_{[1,2]}(t)≠0$ works, as checked below.
∴$σ_p(T)=\{1\}$.
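Checking this example directly:
\[
(Tf)(t)=g(t)f(t)=\begin{cases}t\cdot 0=f(t) & \text{if } 0⩽t⩽1,\\ 1\cdot(t-1)=f(t) & \text{if } 1⩽t⩽2,\end{cases}
\]
so $Tf=f$ with $f≠0$, confirming $1∈σ_p(T)$.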
Now $σ_{ap}(T)=?$
$λ∈σ_{ap}(T)$ iff $∃f_n$ with $‖f_n‖=1$ and $(λI-T)f_n→0$,
i.e. $\sup|f_n|=1$ and $\sup|λ-g||f_n|→0$.
If $λ∉[0,1]=$ Range of $g$:
then $a:=\min|λ-g|>0$ (the min is attained by compactness),
and $a=a\sup|f_n|≤\sup|λ-g||f_n|→0$, contradicting $‖f_n‖=1$.
$⇒λ∉σ_{ap}(T)$.
If $0≤λ<1$: take $t_0=λ$, so $g(t_0)=λ$.
Pick an interval $J_n$ of length $\frac1n$ around $t_0$ and continuous $f_n$ with $|f_n|≤1$, $f_n(t_0)=1$, $f_n=0$ outside $J_n$.
Then $‖f_n‖=1$ and $‖(λI-T)f_n‖≤\sup_{J_n}|λ-g|→0$
$⇒λ∈σ_{ap}(T)$.
Together with $1∈σ_p(T)⊆σ_{ap}(T)$ this gives $σ_{ap}(T)=[0,1]$.
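One concrete choice of the $f_n$ above is the hat functions (the explicit formula is just one possible choice):
\[
f_n(t)=\max\bigl(0,\,1-n|t-t_0|\bigr),\qquad ‖f_n‖=f_n(t_0)=1,
\]
\[
‖(λI-T)f_n‖=\sup_{t∈[0,2]}|λ-g(t)|\,f_n(t)≤\sup_{|t-t_0|≤\frac1n}|λ-g(t)|→0
\]
by continuity of $g$ at $t_0$.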
$σ(T)⊆\overline{D(0,1)}$ (since $r_σ(T)≤1$) and $σ(T)$ is closed;
$σ(T)⊇σ_{ap}(T)=[0,1]$.
Claim. If $λ∉[0,1]$, then $λ∈ρ(T)$.
$(λI-T)f=h$
iff $(λ-g)f=h$
iff $f=\frac{h}{λ-g}$, continuous as $λ-g$ is continuous and nowhere $0$; moreover $‖f‖≤‖h‖/\min|λ-g|$, so $(λI-T)^{-1}$ is bounded
$⇒σ(T)=[0,1]$.
$σ_r(T)=[0,1)$, $σ_c(T)=∅$; justification below.
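To justify the last line: for $λ∈[0,1)$ we have $g(λ)=λ$, so for every $f∈X$
\[
\bigl((λI-T)f\bigr)(λ)=(λ-g(λ))f(λ)=0 .
\]
Hence $\operatorname{Ran}(λI-T)⊆\{h∈X:h(λ)=0\}$, a proper closed subspace, so the range is not dense; as $λ∉σ_p(T)$, $λ∈σ_r(T)$. Then $σ_c(T)=σ(T)∖(σ_p(T)∪σ_r(T))=[0,1]∖([0,1)∪\{1\})=∅$.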
The Coresidual Of Four Points On A Cubic
Consider a cubic curve in the projective plane, and fix four points \(D\), \(E\), \(F\) and \(G\) on the curve. These four points define a pencil of conics, and Bezout's theorem tells us that each conic intersects the cubic in six points (counting multiplicity). Given a conic in this pencil, let \(X\) and \(Y\) denote the remaining two points of intersection. A special case of Sylvester's theory of coresiduation says that there is a unique point \(K\) on the cubic (determined by \(D,E,F,G\)) such that \(X\), \(Y\) and \(K\) are all collinear. The point \(K\) is called the coresidual of the four points \(D,E,F,G\) with respect to the cubic curve.
This is illustrated in the diagram below. The curve is a cubic through the nine points \(A,B,C,D,E,F,G,H,I\) and the orange curve is a conic through \(D,E,F,G\) as well as a fifth point \(X\). You can see that the line \(KX\) intersects the cubic at the remaining intersection point of the conic and the cubic.
You can click and drag the point \(X\) to move it along the cubic, thus changing the choice of conic in the pencil through \(D,E,F,G\). Note that the point \(K\) does not move, since it uniquely depends on the points \(D, E, F, G\). You can also move the points \(A,B,C,D,E,F,G,H,I\) to change the cubic curve (there may be some lag while the applet recalculates the cubic curve through these nine points). The applet will automatically recalculate the position of the coresidual \(K\).
The original proof of the existence of the coresidual was done in terms of the equations defining the cubic and conic. There are a number of ways to prove this using modern methods, one of which is given below. For convenience we will abuse the notation slightly; \(H\) is used to denote a hyperplane divisor, rather than points on the cubic as in the diagram above.
Let \(\Sigma\) denote the cubic curve. A conic through the points \(D,E,F,G\) defines a meromorphic function on \(\Sigma\), denoted \(f_X \in \mathcal{M}(\Sigma)\), which has zeros at the six points of intersection of the two curves, namely \(D,E,F,G,X,Y\), and a total of six poles (counting multiplicity) at the points at infinity. Given a different conic in the pencil which intersects \(\Sigma\) at \(D,E,F,G,X',Y'\), we have a different meromorphic function \(f_{X'}\), for which the order of the poles at the points at infinity is the same as that for \(f_X\). Therefore the function \(g_{X,X'} = \frac{f_X}{f_{X'}}\) is a globally defined meromorphic function on \(\Sigma\) with zeros at \(X,Y\) and poles at \(X',Y'\). Another way to say this is that the effective divisors \(X+Y\) and \(X'+Y'\) are linearly equivalent, or that every conic in the pencil defines a divisor \(X'+Y'\) in the degree-two linear system \(|X+Y|\).
We now claim that the pencil of conics defines a complete linear system. The pencil is parametrised by \(\mathbb{P}^1\), and a calculation using Riemann–Roch (given below) shows that \(h^0(\mathcal{O}_\Sigma[X+Y]) = 2\). Therefore \(|X+Y| \cong \mathbb{P}^1\), and so the linear subsystem of \(|X+Y|\) defined by the pencil of conics is in fact all of \(|X+Y|\).
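For completeness, here is the Riemann–Roch calculation: a smooth plane cubic has genus \(g=1\) and \(\deg(X+Y)=2\), so
\[
h^0(\mathcal{O}_\Sigma[X+Y]) - h^1(\mathcal{O}_\Sigma[X+Y]) = \deg(X+Y) + 1 - g = 2,
\]
while \(h^1(\mathcal{O}_\Sigma[X+Y]) = h^0(\mathcal{O}_\Sigma[K_\Sigma - X - Y]) = 0\) by Serre duality, since \(\deg(K_\Sigma - X - Y) = (2g-2) - 2 = -2 < 0\). Hence \(h^0(\mathcal{O}_\Sigma[X+Y]) = 2\).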
Let \(H\) denote a hyperplane divisor for the embedding \(\Sigma \subset \mathbb{P}^2\). Since a cubic curve is isomorphic to its Jacobian, there exists a unique point \(K \in \Sigma\) such that \(\mathcal{O}_\Sigma[X+Y+K] \cong \mathcal{O}_\Sigma[H]\). Therefore, for any \(X'+Y' \in |X+Y|\), the divisor \(X'+Y'+K\) is a hyperplane divisor. Equivalently, the points \(X',Y',K\) are collinear, which is exactly what we wanted to show.
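The existence and uniqueness of \(K\) can also be seen degree-wise: \(\deg(H - X - Y) = 3 - 2 = 1\), and on a genus-one curve the Abel–Jacobi map
\[
\Sigma \longrightarrow \operatorname{Pic}^1(\Sigma), \qquad P \longmapsto \mathcal{O}_\Sigma[P],
\]
is a bijection. So there is exactly one point \(K \in \Sigma\) with \(\mathcal{O}_\Sigma[K] \cong \mathcal{O}_\Sigma[H - X - Y]\), i.e. \(\mathcal{O}_\Sigma[X+Y+K] \cong \mathcal{O}_\Sigma[H]\).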
Sheet4 B4
A linear operator $T: ℓ^1 → ℓ^1$ is defined by
\begin{align*}
T(x_1, x_2, x_3, …)= & (y_1, y_2, y_3, …), \\
& \text { where } y_k=\frac{k+1}{k}x_{k+1} \text { for } k ⩾ 1 .
\end{align*}
(a) Show that $T$ is bounded and that $‖T‖=2$. Obtain an explicit formula for $T^2 x$ and, more generally, for $T^n x$ when $n$ is a positive integer and $x=(x_1, x_2, x_3, …)∈ℓ^1$. Calculate $\|T^n\|$.
Solution. ${‖Tx‖}_1=\sum_{k=1}^∞{|y_k|}=\sum_{k=1}^∞\frac{k+1}k{|x_{k+1}|}≤2\sum_{k=1}^∞{|x_{k+1}|}≤2\sum_{k=1}^∞{|x_k|}=2{‖x‖}_1$
equality holds for $x=e_2$, i.e. ${|x_2|}=1$ and $x_k=0$ for all $k≠2$. So ${‖T‖}=2$.
By induction we obtain the formula $(T^nx)_k=\frac{k+n}kx_{k+n}$.
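The weights telescope, which gives the induction step:
\[
(T^nx)_k=\frac{k+1}{k}\cdot\frac{k+2}{k+1}\cdots\frac{k+n}{k+n-1}\,x_{k+n}=\frac{k+n}k\,x_{k+n}.
\]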
\begin{align*}‖T^nx‖_1&=\sum_{k=1}^∞\frac{k+n}k|x_{k+n}|\\&≤(n+1)\sum_{k=1}^∞|x_{k+n}|\\&≤(n+1)\sum_{k=1}^∞|x_k|=(n+1)‖x‖_1\end{align*}
equality holds for $x=e_{n+1}$, i.e. $|x_{n+1}|=1$ and $x_k=0$ for all $k≠n+1$. So ${‖T^n‖}=n+1$.
(b) Which complex numbers $λ$ are eigenvalues of $T$?
Solution.
Suppose $Tx=λx$ with $x≠0$. Comparing coordinates: $λx_k=\frac{k+1}kx_{k+1}$, i.e. $x_{k+1}=\frac k{k+1}λx_k$. If $x_1=0$ then $x=0$, so we may normalise $x_1=1$, which gives $x_k=\frac{λ^{k-1}}k$. Then $‖x‖_1=\sum_{k=1}^∞\frac{|λ|^{k-1}}k<∞$ forces $|λ|<1$ (at $|λ|=1$ this is the harmonic series).
For $|λ|<1$, we have $\sum_{k=1}^∞\frac{|λ|^k}k=-\log(1-|λ|)$, so $x∈ℓ^1$, so $λ∈σ_p$.
(c) Prove that the spectrum of $T$ is the disc $\{λ ∈ ℂ∣|λ|⩽1\}$.
Solution. $\lim_{n→∞}‖T^n‖^{1\over n}=\lim_{n→∞}(n+1)^{1\over n}=1$, by Gelfand's formula $σ(T)⊆\overline{\rm D}(0,1)$.
$σ(T)⊇{\rm D}(0,1)$ by (b), but $σ(T)$ is closed, so $σ(T)⊇\overline{\rm D}(0,1)$, so $σ(T)=\overline{\rm D}(0,1)$.
The question doesn't ask us to find $σ_r,σ_c$.
Sheet4 B3
% https://courses.maths.ox.ac.uk/pluginfile.php/79672/mod_resource/content/0/Sheet4SolGuide.pdf Q6
Let $X$ be a complex Hilbert space and $S$ and $T$ be two self-adjoint bounded linear operators on $X$.
(a) Let $λ ∉ σ(T)$. Use the fact that $σ\left((T-λ I)^{-1}\right)=(σ(T)-λ)^{-1}$ (a form of spectral mapping theorem) and Gelfand's formula to show that
\[
\left\|(T-λ I)^{-1}\right\|=\frac{1}{\operatorname{dist}(λ, σ(T))} .
\]
Deduce that $I+(T-λ I)^{-1}(S-T)$ is invertible if
\[
\|S-T\|<\operatorname{dist}(λ, σ(T))
\]
Hence, show under this latter assumption that $λ ∉ σ(S)$.
Solution.
Since $(T-λI)^{-1}$ is normal, we have that
\begin{align*}
\left\|(T-λ I)^{-1}\right\| & =r_σ\left((T-λ I)^{-1}\right)&\text{by Q2}\\&=\sup_{ζ ∈ σ\left((T-λ I)^{-1}\right)}|ζ| \\
& =\sup_{ζ ∈(σ(T)-λ)^{-1}}|ζ|&\text{spectral mapping theorem}\\&=\left(\inf _{ζ ∈ σ(T)-λ}|ζ|\right)^{-1} \\
& =\frac{1}{\operatorname{dist}(λ, σ(T))} .
\end{align*}
Hence, if $\|S-T\|<\operatorname{dist}(λ, σ(T))$, then $K:=(T-λ I)^{-1}(S-T)$ satisfies $\|K\|<1$ and so $I+K$ is invertible. This implies that $(T-λ I)(I+K)=S-λ I$ is invertible and so $λ ∉ σ(S)$.
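The two standard facts used here, spelled out: $\|K\|<1$ gives invertibility of $I+K$ via the Neumann series, and the factorisation is a one-line check:
\[
(I+K)^{-1}=\sum_{n=0}^{∞}(-K)^n, \qquad (T-λ I)(I+K)=(T-λ I)+(S-T)=S-λ I .
\]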
(b) Use (a) to show that
\[
\|S-T\| ⩾ \operatorname{dist}_H(σ(S), σ(T))
\]
where the Hausdorff distance $\operatorname{dist}_H(A, B)$ between two closed subsets $A$ and $B$ of $ℂ$ is defined by
\[
\operatorname{dist}_H(A, B)=\max \left(\sup_{a ∈ A} \min_{b ∈ B}|a-b|, \sup_{b ∈ B} \min_{a ∈ A}|a-b|\right)
\]
Solution. Suppose by contradiction that the conclusion fails. We assume without loss of generality that
\[
\|S-T\|<\sup_{a ∈ σ(S)} \min_{b ∈ σ(T)}|a-b|=\sup_{a ∈ σ(S)} \operatorname{dist}(a, σ(T)) .
\]
Then, there exists $λ ∈ σ(S)$ such that
\[
\|S-T\|<\operatorname{dist}(λ, σ(T)) .
\]
This implies that $\operatorname{dist}(λ, σ(T))>0$ and so $λ ∉ σ(T)$. By (a), this implies that $λ ∉ σ(S)$, contradicting $λ ∈ σ(S)$.