Sheet3 B2
Let $X$ be a Hilbert space and $T ∈ ℬ(X)$. Show that the graph $Γ(T)$ of $T$ is a closed subspace of $X × X$ and that
\[
Γ(T)^⟂=\{(-T^* x, x): x ∈ X\} .
\]
By considering the orthogonal decomposition of $(x, 0)$, prove that $I+T^* T: X → X$ is surjective.
[Recall: if $X$ is a Hilbert space, then $X × X$ with $⟨(x_1, x_2),(y_1, y_2)⟩_{X × X}=⟨x_1, y_1⟩_X+⟨x_2, y_2⟩_X$ is a Hilbert space.]
Proof. $Γ(T)$ is a linear subspace since $T$ is linear, and it is closed since $T$ is continuous: if $(x_n,Tx_n)→(x,y)$ in $X×X$, then $x_n→x$, hence $Tx_n→Tx$, so $y=Tx$ and $(x,y)∈Γ(T)$.
\begin{align*}(a,b)∈Γ(T)^⟂&⇔⟨(a,b),(x,T(x))⟩=⟨a,x⟩+⟨b,T(x)⟩=0\text{ for all }x∈X\\&⇔⟨a+T^*(b),x⟩=0\text{ for all }x∈X\\&⇔a=-T^*(b),\end{align*}
so $Γ(T)^⟂=\{(-T^* x, x): x ∈ X\}.$
Given $x∈X$: since $Γ(T)$ is a closed subspace, $X×X=Γ(T)⊕Γ(T)^⟂$, so $(x,0)=(y,Ty)+(x-y,-Ty)$ for some $y∈X$, with the second summand in $Γ(T)^⟂$.
Then $(x-y,-Ty)∈Γ(T)^⟂$ gives $x-y=-T^*(-Ty)=T^*Ty$, i.e. $x=(I+T^*T)y∈\operatorname{Im}(I+T^* T)$, so $I+T^* T$ is surjective. ∎
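A minimal sanity check (my own, not part of the exercise): take $X=ℂ$ and $T$ multiplication by a scalar $λ$, so $T^*$ is multiplication by $\bar λ$. The orthogonal decomposition of $(x,0)$ is
\[
(x,0)=\Bigl(\tfrac{x}{1+|λ|^2},\tfrac{λ x}{1+|λ|^2}\Bigr)+\Bigl(\tfrac{|λ|^2 x}{1+|λ|^2},-\tfrac{λ x}{1+|λ|^2}\Bigr),
\]
the first summand in $Γ(T)$ and the second of the form $(-T^*w,w)$ with $w=-λ x/(1+|λ|^2)$; solving $x=(I+T^*T)y$ indeed gives $y=x/(1+|λ|^2)$.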
Sheet3 Q3
Let $P(x, y, z)$ be a homogeneous polynomial of degree $d$ defining a nonsingular curve $C$ in $ℂℙ^2$.
(i) Write down Euler's relation for $P, P_x, P_y, P_z$. Deduce that the Hessian determinant satisfies:
\[
z ℋ_P(x, y, z)=(d-1) \det\begin{pmatrix}
P_{xx} & P_{xy} & P_{xz} \\
P_{yx} & P_{yy} & P_{yz} \\
P_x& P_y & P_z
\end{pmatrix}
\]
Solution. Euler's relation: $dP=xP_x+yP_y+zP_z$.
Applying $∂_x$ to both sides gives $dP_x=P_x+xP_{xx}+yP_{yx}+zP_{zx}$, so
\[(d-1)P_x=xP_{xx}+yP_{yx}+zP_{zx}.\]
Applying $∂_y$ to both sides gives $dP_y=P_y+xP_{xy}+yP_{yy}+zP_{zy}$, so
\[(d-1)P_y=xP_{xy}+yP_{yy}+zP_{zy}.\]
Applying $∂_z$ to both sides gives $dP_z=P_z+xP_{xz}+yP_{yz}+zP_{zz}$, so
\[(d-1)P_z=xP_{xz}+yP_{yz}+zP_{zz}.\]
By the definition of the Hessian and linearity of $\det$ in the third row,
\[
z ℋ_P(x, y, z)=\det\begin{pmatrix}
P_{xx} & P_{xy} & P_{xz} \\
P_{yx} & P_{yy} & P_{yz} \\
zP_{zx} &zP_{zy}&zP_{zz}
\end{pmatrix}\]
Adding $x$ times the first row and $y$ times the second row to the third row (which leaves the determinant unchanged), the identities above (mixed partials commute) turn the third row into $\bigl((d-1)P_x,(d-1)P_y,(d-1)P_z\bigr)$; factoring out $(d-1)$, this
\[=(d-1) \det\begin{pmatrix}
P_{xx} & P_{xy} & P_{xz} \\
P_{yx} & P_{yy} & P_{yz} \\
P_x& P_y & P_z
\end{pmatrix}. ∎
\]
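As a quick check of (i) (my own verification; the identity is purely algebraic, so nonsingularity is not needed), take $P=xyz$, $d=3$: then $ℋ_P=\det\begin{pmatrix}0&z&y\\z&0&x\\y&x&0\end{pmatrix}=2xyz$, while
\[
(d-1)\det\begin{pmatrix}0&z&y\\z&0&x\\yz&xz&xy\end{pmatrix}=2\bigl(-z(xyz-xyz)+y\,xz^2\bigr)=2xyz^2=zℋ_P.
\]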
(ii) Deduce further that:
\[
z^2 ℋ_P(x, y, z)=(d-1)^2 \det\begin{pmatrix}
P_{xx} & P_{xy} & P_x\\
P_{yx} & P_{yy} & P_y \\
P_x& P_y & d P /(d-1)
\end{pmatrix}
\]
Solution.
Multiply the third column of the determinant in (i) by $z$ (equivalently, multiply both sides of (i) by $z$), then add $x$ times the first column and $y$ times the second column to it. By the three identities above and Euler's relation, the new third column is $\bigl((d-1)P_x,\,(d-1)P_y,\,dP\bigr)^{T}$; factoring $(d-1)$ out of this column gives the stated determinant. ∎
(iii) Deduce that if $P(x, y, 1)=y-g(x)$ then $[a, b, 1]$ is a point of inflection of $C$ if and only if $b=g(a)$ and $g''(a)=0$.
This shows that the lectures' definition of a point of inflection corresponds to the usual notion of a point of inflection of the graph of a function $g(x)$ over $ℝ$ or $ℂ$.
Solution. $[a,b,1]∈C⇔P(a,b,1)=0⇔b=g(a)$.
By (ii), evaluated at a point $[a,b,1]∈C$ (so $P(a,b,1)=0$, and $P_x=-g'(a)$, $P_y=1$, $P_{xx}=-g''(a)$, $P_{xy}=P_{yy}=0$ there),
\[
ℋ_P(a,b,1)=(d-1)^2 \det\begin{pmatrix}
-g''(a)&0&-g'(a)\\
0&0&1\\
-g'(a)&1&0
\end{pmatrix}=(d-1)^2g''(a),
\]
so for $d≥2$, $ℋ_P(a,b,1)=0⇔g''(a)=0$ (when $d=1$, $g$ is affine, so $g''≡0$ and $ℋ_P≡0$). Hence $[a,b,1]$ is a point of inflection of $C$ if and only if $b=g(a)$ and $g''(a)=0$. ∎
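Concrete illustration (my own): take $g(x)=x^3$, so $g''(a)=6a$. Then
\[
b=g(a)=a^3,\qquad g''(a)=0⇔a=0,
\]
so $[0,0,1]$ is the only inflection point of $C$ in the chart $z=1$, matching the usual inflection of the cubic graph at the origin. (Strictly, the projective closure $yz^2=x^3$ is singular at $[0,1,0]$, but $[0,0,1]$ is a nonsingular point and the affine computation above is unaffected.)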
Sheet3 B1 Right Inverse Of Surjective Linear Operator
Let $X$ and $Y$ be real Hilbert spaces and let $T ∈ ℬ(X, Y)$ be surjective. Show that there exists a unique bounded linear operator $R ∈ ℬ(Y, X)$ such that
\[
T R=I_Y \text{ and }\|R T x\|≤\|x\| \text{ for all } x ∈ X .
\]
%[Hint: Follow the proof 3.11 that for operators between Hilbert spaces $S X$ is closed iff $S^* Y$ is closed]
Proof. Define $Z=\ker(T)^⟂⊆X$, a closed subspace, and $A=T|_Z∈ℬ(Z,Y)$.
$A$ is injective since $\ker(A)=Z∩\ker(T)=\{0\}$.
$A$ is surjective since $Tx=T(P^Zx)=A(P^Zx)$ for all $x∈X$, where $P^Z$ is the orthogonal projection onto $Z$ (note $x-P^Zx∈Z^⟂=\ker(T)$, as $\ker(T)$ is closed).
So $A$ is bijective between the Banach spaces $Z$ and $Y$, hence $A^{-1}∈ℬ(Y,Z)$ by the inverse mapping theorem.
Define $R∈ℬ(Y,X)$ by $Ry=A^{-1}y$; then $TR=AA^{-1}=I_Y$.
Moreover $RT(x)=RA(P^Zx)=P^Zx$, so $‖RT(x)‖=‖P^Zx‖≤‖x‖$ for all $x∈X$.
Uniqueness:
Suppose $\tilde{R}∈ℬ(Y,X)$ also satisfies $T\tilde R=I_Y$ and $\|\tilde RTx\|≤\|x\|$ for all $x∈X$.
For all $y∈Y$,
\begin{align*}‖P^Z\tilde{R}y‖&≥‖\tilde{R}TP^Z\tilde{R}y‖
&\text{since }&\|x\|≥\|\tilde{R}T x\|\text{ for all } x ∈ X
\\&=‖\tilde{R}T\tilde{R}y‖&\text{since }&TP^Z=T
\\&=‖\tilde{R}y‖&\text{since }&T\tilde{R}=I_Y&\end{align*}
but $‖\tilde{R}y‖≥‖P^Z\tilde{R}y‖$ by Pythagoras, so equality holds and $(I-P^Z)\tilde Ry=0$, i.e. $\tilde{R}y∈Z$. Therefore
$$ARy=y=T\tilde{R}y=A\tilde{R}y,$$
and since $A$ is injective, $Ry=\tilde{R}y$ for every $y∈Y$, i.e. $R=\tilde R$. ∎
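A finite-dimensional illustration (my own; $X=ℝ^2$, $Y=ℝ$, $T(x_1,x_2)=x_1+x_2$, none of which appear in the problem): here $\ker(T)=\{(t,-t)\}$, $Z=\operatorname{span}\{(1,1)\}$, and $A(t,t)=2t$, so
\[
Ry=A^{-1}y=\Bigl(\tfrac y2,\tfrac y2\Bigr),\qquad TRy=y,\qquad RTx=P^Zx=\tfrac{x_1+x_2}{2}(1,1),
\]
and $‖RTx‖≤‖x‖$ because $P^Z$ is an orthogonal projection.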
The Riemann–Lebesgue Lemma
Fourier Series, Fourier Transform and Their Applications to Mathematical Physics pp 33–35
Theorem 6.1 (Riemann–Lebesgue lemma). If f is periodic with period $2π$ and belongs to $L^1(-π ,π )$, then
\begin{align*}\tag{6.1}\lim _{n→ ∞ }∫_{-π }^π f(x+z)\mathrm{e}^{-\mathrm{i}nz}\mathrm{d}z=0\label{Equ1}\end{align*}
uniformly in $x∈ ℝ$. In particular, $c_n(f)→ 0$ as $n→ ∞$.
Proof.
Since f is periodic with period $2π$, it follows that
\begin{align*}\tag{6.2}\label{Equ2}∫_{-π }^π f(x+z)\mathrm{e}^{-\mathrm{i}nz}\mathrm{d}z=∫_{-π +x}^{π +x} f(y)\mathrm{e}^{-\mathrm{i}n(y-x)}\mathrm{d}y=\mathrm{e}^{\mathrm{i}nx}∫_{-π }^{π } f(y)\mathrm{e}^{-\mathrm{i}ny}\mathrm{d}y \end{align*}
by Lemma 1.3.
Formula ($\ref{Equ2}$) shows that to prove ($\ref{Equ1}$) it is enough to show that the Fourier coefficients $c_n(f)$ tend to zero as $n→ ∞$. Indeed,
\[2π c_n(f)=∫_{-π }^π f(y)\mathrm{e}^{-\mathrm{i}ny}\mathrm{d}y=∫_{-π +π /n}^{π +π /n} f(y)\mathrm{e}^{-\mathrm{i}ny}\mathrm{d}y=∫_{-π }^π f(t+π /n)\mathrm{e}^{-\mathrm{i}nt}\mathrm{e}^{\mathrm{i}π }\mathrm{d}t\]
by Lemma 1.3. Hence
\begin{align*}\label{Equ3}\tag{6.3} -4π c_n(f)=∫_{-π }^π \left( f(t+π /n)-f(t)\right) \mathrm{e}^{-\mathrm{i}nt}\mathrm{d}t. \end{align*}
If f is continuous on the interval $[-π ,π ]$, then
\[\sup _{t∈ [-π ,π ]}\left| f(t+π /n)-f(t)\right| → 0, n→ ∞ .\]
Hence $c_n(f)→ 0$ as $n→ ∞$. If f is an arbitrary $L^1$ function, we let $ε >0$. Then we can define a continuous function g (see Corollary 5.3) such that
\[∫_{-π }^π \left| f(x)-g(x)\right| \mathrm{d}x<ε .\]
Write
\[c_n(f)=c_n(g)+c_n(f-g).\]
The first term tends to zero as $n→ ∞$, since g is continuous, whereas the second term is less than $ε /(2π )$. This implies that
\[\lim _{n→ ∞ }|c_n(f)|=0.\]
This fact together with ($\ref{Equ2}$) gives ($\ref{Equ1}$). The theorem is thus proved. $□$
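A concrete instance of the decay (my own check, not from the book): for $f(x)=x$ on $(-π,π)$, extended $2π$-periodically, integration by parts gives
\[
c_n(f)=\frac1{2π}∫_{-π }^π x\,\mathrm{e}^{-\mathrm{i}nx}\mathrm{d}x=\frac{\mathrm i(-1)^n}{n},\qquad n≠0,
\]
which tends to $0$ at the rate $O(1/n)$.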
Corollary 6.2.
Let f be as in Theorem 6.1. If a periodic function g is continuous on $[-π ,π ]$, then
\[\lim _{n→ ∞ }∫_{-π }^π f(x+z)g(z)\mathrm{e}^{-\mathrm{i}nz}\mathrm{d}z=0\]
and
\begin{align*} \lim _{n→ ∞ }∫_{-π }^π f(x+z)g(z)\sin (nz)\mathrm{d}z=\lim _{n→ ∞ }∫_{-π }^π f(x+z)g(z)\cos (nz)\mathrm{d}z=0 \end{align*}
uniformly in $x∈ [-π ,π ]$.
Exercise 6.1.
Prove this corollary.
Exercise 6.2.
Show that if f satisfies the Hölder condition with exponent $α ∈ (0,1]$, then $c_n(f)=O(|n|^{-α })$ as $n→ ∞$.
Exercise 6.3.
Suppose that f satisfies the Hölder condition with exponent $α >1$. Prove that $f≡ \text {constant}$.
Exercise 6.4.
Let $f(x)=|x|^α$, where $-π ≤ x≤ π$ and $0<α <1$. Prove that $c_n(f)≍ |n|^{-1-α }$ as $n→ ∞$.
Remark 6.3.
The notation $a≍ b$ means that there exist constants $0<c_1≤ c_2<∞$ such that $c_1≤ |a/b|≤ c_2$.
Now suppose that f is periodic and belongs to $L^p(-π ,π )$ for some $1≤ p<∞$. We define the $L^p$-modulus of continuity of f by
\begin{align*} ω _{p,δ }(f):=\sup _{|h|≤ δ } \left( ∫_{-π }^π \left| f(x+h)-f(x)\right| ^p\mathrm{d}x\right) ^{1/p}. \end{align*}
The equality ($\ref{Equ3}$) leads to
\begin{align*} |c_n(f)|&≤ \frac{1}{4π }∫_{-π }^π \left| f(x+π /n)-f(x)\right| \mathrm{d}x\\&≤ \frac{(2π )^{1-1/p}}{4π }\left( ∫_{-π }^π \left| f(x+π /n)-f(x)\right| ^p\mathrm{d}x\right) ^{1/p}≤ \frac{1}{2}(2π )^{-1/p}ω _{p,π /n}(f), \end{align*}
where we have used Hölder’s inequality in the penultimate step.
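For example (my own sketch, under the additional assumption of a uniform Hölder bound $|f(x+h)-f(x)|≤ C|h|^α$): then $ω _{p,π /n}(f)≤ (2π )^{1/p}C(π /n)^α$, and the estimate above gives
\[
|c_n(f)|≤ \tfrac12 (2π )^{-1/p}\,ω _{p,π /n}(f)≤ \tfrac{C}{2}\Bigl(\tfrac{π }{n}\Bigr)^{α }=O(|n|^{-α }),
\]
recovering the bound of Exercise 6.2.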
Exercise 6.5.
Suppose that $ω _{p,δ }(f)≤ Cδ ^α$ for some $C>0$ and $α >1$. Prove that f is constant almost everywhere.
Hint. First show that $ω _{p, 2δ }(f)≤ 2ω _{p,δ }(f)$; then iterate this to obtain a contradiction.
Suppose that $f∈ L^1(-π ,π )$ but f is not necessarily periodic. We can consider the Fourier series corresponding to f, i.e.,
\[f(x)∼ \sum _{n=-∞ }^∞ c_n \mathrm{e}^{\mathrm{i}nx},\]where the $c_n$ are the Fourier coefficients $c_n(f)$. The series on the right-hand side is considered formally in the sense that we know nothing about its convergence. However, the limit
\begin{align*}\label{Equ4}\tag{6.4}\lim _{N→ ∞ }∫_{-π }^π \sum _{|n|≤ N} c_n(f) \mathrm{e}^{\mathrm{i}nx}\mathrm{d}x=∫_{-π }^π f(x)\mathrm{d}x \end{align*}
exists. Indeed,
\begin{align*} ∫_{-π }^π \sum _{|n|≤ N} c_n(f) \mathrm{e}^{\mathrm{i}nx}\mathrm{d}x&=c_0(f)∫_{-π }^π \mathrm{d}x +\sum _{0<|n|≤ N}c_n(f)∫_{-π }^π \mathrm{e}^{\mathrm{i}nx}\mathrm{d}x=2π c_0(f)\\&=∫_{-π }^π f(x)\mathrm{d}x. \end{align*}
Remark 6.4
(Important properties of the Fourier series). The existence of the limit ($\ref{Equ4}$) shows us that we can always integrate the Fourier series of an $L^1$ function term by term.
$p$ is a point of inflection if and only if $I_p(C, L) ⩾ 3$
Proposition 3.15. A nonsingular point $p$ of a curve $C$ in $ℂ ℙ^2$ is a point of inflection if and only if $I_p(C, L) ⩾ 3$, where $L$ is the tangent line to $C$ at $p$.
Proof. After a projective transformation we may take $p=[0,0,1]$ and $L=\{x=0\}$. Since $p∈C$ and the tangent line $\{P_x(p)x+P_y(p)y+P_z(p)z=0\}$ at $p$ is $\{x=0\}$, we have
\[
P(0,0,1)=P_y(0,0,1)=P_z(0,0,1)=0 \text {, and } P_x(0,0,1) ≠ 0
\]
as $C$ is nonsingular. So by Lemma 3.14
\[
H_p(0,0,1)=(d-1)^2 \det\left(\begin{array}{ccc}
P_{x x}(0,0,1) & P_{x y}(0,0,1) & P_x(0,0,1) \\
P_{y x}(0,0,1) & P_{y y}(0,0,1) & 0 \\
P_x(0,0,1) & 0 & 0
\end{array}\right)
=-(d-1)^2P_x(0,0,1)^2 P_{y y}(0,0,1) .
\]
Thus, since $P_x(0,0,1)≠0$, $p$ is a point of inflection if and only if $P_{y y}(0,0,1)=0$.
But $I_p(C, L)$ is the largest power of $(y-0\,z)=y$ dividing $R_{P(x, y, z), x}=P(0, y, z)$.
As $P(0,0,1)=P_y(0,0,1)=0$, $y^2$ divides $P(0, y, z)$, and $y^3$ divides $P(0, y, z)$ if and only if $P_{yy}(0,0,1)=0$.
Hence $I_p(C, L) ⩾ 2$ always, and $I_p(C, L) ⩾ 3$ if and only if $p$ is a point of inflection. ∎
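For example (a standard consequence, stated here as a sanity check): for a nonsingular conic ($d=2$), $P(0,y,z)$ is homogeneous of degree 2 and divisible by $y^2$, so $P(0,y,z)=c\,y^2$ with $c≠0$ (if $c=0$ then $x\mid P$ and $C$ would be singular). Hence $I_p(C,L)=2$ at every point, and a nonsingular conic has no points of inflection.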
Sheet 2 B5
Derive the open mapping theorem from the inverse mapping theorem in the case $X$ is a Hilbert space.
Proof.
Let $X$ be a Hilbert space, $Y$ a Banach space, and $T\colon X→Y$ a surjective bounded linear map.
By Theorem 3.11, since $TX=Y$ is closed, $T^*Y$ is closed.
Let $Q\colon X→T^*Y$ be the orthogonal projection onto $T^*Y$.
$Q$ is open: for the open unit balls, $B^{T^*Y}=Q(B^{T^*Y})⊆Q(B^X)$, and openness of $Q$ follows by translating and scaling.
Let $S≔T|_{T^*Y}$.
$S\colon T^*Y→Y$ is injective: if $x∈\ker(S)$ then $x∈\ker(T)∩T^*Y⊆\ker(T)∩\ker(T)^⟂=\{0\}$.
$S$ is surjective: since $T^*Y$ is closed, $\ker(T)^⟂=T^*Y$, so $x-Qx∈\ker(T)$ and $Tx=TQx=S(Qx)$ for all $x∈X$; in particular $S(T^*Y)=TX=Y$.
By the inverse mapping theorem $S$ has a continuous inverse, so $S$ is open, and $T=S∘Q$ is open as a composition of open maps.
% https://math.stackexchange.com/questions/4330870/inverse-mapping-theorem-implies-open-mapping-theorem
[In the general Banach-space case one replaces $Q$ by the quotient map $X→X/\ker(T)$, which is open directly from the definition of the quotient norm, and applies the inverse mapping theorem to the induced bijection $X/\ker(T)→Y$; surjectivity alone does not give openness, since that is precisely the open mapping theorem.]
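A toy example of the factorization (my own; $X=ℝ^2$, $Y=ℝ$, $T(x_1,x_2)=x_1$): then
\[
T^*y=(y,0),\qquad T^*Y=\operatorname{span}\{e_1\}=\ker(T)^⟂,\qquad Q(x_1,x_2)=(x_1,0),\qquad S(x_1,0)=x_1,
\]
so $T=S∘Q$ with $Q$ an (open) orthogonal projection and $S$ a homeomorphism.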
$S_p$ is generated by any $p$-cycle and a transposition
Proof: Since $p$ is prime, every element of order $p$ in $S_p$ is a $p$-cycle, and by the Sylow theorems (or simply because all $p$-cycles share the same cycle type) any two $p$-cycles are conjugate in $S_p$. Conjugating the pair (cycle, transposition) changes neither hypothesis nor conclusion, so without loss of generality our $p$-cycle is $σ=(1\,2\,3\,\dots\,p)$; let the transposition be $(i\,j)$ and set $d=j-i\not\equiv 0\pmod p$.
Conjugating $(i\,j)$ by powers of $σ$ gives every transposition $(k,\,k+d)$, indices taken mod $p$. Composing, $(k,\,k+d)(k+d,\,k+2d)(k,\,k+d)=(k,\,k+2d)$, and iterating we obtain $(1,\,1+md)$ for every $m$. Since $p$ is prime, $md≡1\pmod p$ for some $m$, which yields the transposition $(1\,2)$. Conjugating $(1\,2)$ by powers of $σ$ then gives all the transpositions $(k,\,k+1)$, and these generate $S_p$. ∎
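A concrete run (my own example): $p=5$, $σ=(1\,2\,3\,4\,5)$, $τ=(1\,3)$, so $d=2$. Conjugation gives $στσ^{-1}=(2\,4)$, $σ^2τσ^{-2}=(3\,5)$, $σ^4τσ^{-4}=(5\,2)$, and then
\[
(1\,3)(3\,5)(1\,3)=(1\,5),\qquad (1\,5)(5\,2)(1\,5)=(1\,2),
\]
after which $σ^k(1\,2)σ^{-k}$ for $k=0,\dots,3$ give $(1\,2),(2\,3),(3\,4),(4\,5)$, which generate $S_5$.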
A nonsingular point p ∈ C is an inflection point if and only if det Pij = 0.
Proposition 16 A nonsingular point $p \in C$ is an inflection point if and only if $\det P_{i j}=0$.
Proof: To find the multiplicity of $p$ with respect to a line we have to take the resultant of $P(x, y, z)$ and $a x+b y+c z$. But this is just substituting $x=-(b y+c z) / a$ into $P$. It is more convenient to retain the symmetry and consider the line as the set of points
\[
\left[a_0+t \alpha_0, a_1+t \alpha_1, a_2+t \alpha_2\right]
\]
as $t$ varies. Then $I_p(C, L) \geq 3$ if and only if $t^3$ divides
\[
P\left(a_0+t \alpha_0, a_1+t \alpha_1, a_2+t \alpha_2\right) .
\]
Expanding this, we have
\[\tag7
P(a+t \alpha)=P(a)+t \sum_i P_i(a) \alpha_i+\frac{t^2}{2} \sum_{i, j} P_{i j}(a) \alpha_i \alpha_j+t^3 R
\]
and $t^3$ divides this if and only if
\[
\sum_i P_i(a) \alpha_i=0=\sum_{i, j} P_{i j}(a) \alpha_i \alpha_j
\]
The first equation says that the line is a tangent.
We now use homogeneity - the $P_i$ are homogeneous of degree $n-1$ - so Euler's relation gives
\[
(n-1) P_i(a)=\sum_j P_{i j}(a) a_j
\]
This always gives
\[\tag8
\sum_{i, j} P_{i j}(a) a_i a_j=(n-1) \sum_i P_i(a) a_i=n(n-1) P(a)=0
\]
and in our case also
\[
\sum_{i, j} P_{i j}(a) a_i \alpha_j=(n-1) \sum_i P_i(a) \alpha_i=0
\]
from the first equation. Together with the second equation we see that the quadratic form on the vector space spanned by $a$ and $\alpha$ (the subspace of $\mathbf{C}^3$ defining the line $L$) vanishes completely. This means that the matrix of the quadratic form with respect to a basis $a, \alpha, \beta$ is of the form
\[
\left(\begin{array}{lll}
0 & 0 & * \\
0 & 0 & * \\
* & * & *
\end{array}\right)
\]
and so $\det P_{i j}(a)=0$. [Small lemma used here: if a quadratic form vanishes identically on a subspace, then by polarization the associated symmetric bilinear form also vanishes on that subspace; this is what gives the $2×2$ block of zeros.]
Conversely, take a basis $a, \alpha, \beta$ where $a$ and $\alpha$ define the tangent at $p$. From (8) and the tangency condition $\sum_i P_i(a) \alpha_i=0$ the matrix is of the form
\[
\left(\begin{array}{lll}
0 & 0 & * \\
0 & * & * \\
* & * & *
\end{array}\right)
\]
If $\sum_{i j} P_{i j} a_i \beta_j=0$ then $\sum_i P_i \beta_i=0$ by homogeneity, but then $\left[\beta_0, \beta_1, \beta_2\right]$ lies on the tangent so $a, \alpha, \beta$ do not form a basis. Hence the determinant $\det P_{i j}$ vanishes if and only if the central term
\[
\sum_{i, j} P_{i j}(a) \alpha_i \alpha_j
\]
vanishes; together with the tangency condition this is exactly the condition that $t^3$ divides $P(a+t \alpha)$, i.e. that $p$ is an inflection point. ∎
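As an example (standard; included here as a check of the criterion): for the Fermat cubic $P=x^3+y^3+z^3$ one has $P_{ij}=\operatorname{diag}(6x,6y,6z)$, so
\[
\det P_{ij}=216\,xyz,
\]
and the inflection points of $C$ are the nine points of $C$ with $xyz=0$, such as $[1,-1,0]$.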
Sheet 2 B4
Let $X$ and $Y$ be real Banach spaces and $T ∈ ℬ(X, Y)$. Assume that $Z=T X$ is a finite-codimensional subspace of $Y$ and let $\{y_1+Z, …, y_m+Z\}$ be a basis for $Y / Z$. Define $\hat{T}: X ⊕ ℝ^m → Y$ by
\[
\hat{T}(x,(v_1, …, v_m))=T(x)+\sum_{j=1}^m v_j y_j .
\]
Show that $\hat{T}$ is a surjective bounded linear operator.
Hence, by applying the open mapping theorem, deduce that $Z$ is closed.
Proof: Clearly $\hat{T}$ is linear.
\[
‖\hat{T}(x,(v_1, …, v_m))‖≤‖T(x)‖+\Bigl\|\sum_{j=1}^m v_j y_j\Bigr\|
≤‖T‖‖x‖+\max_j‖y_j‖\sum_{j=1}^m|v_j|,
\]
so $\hat{T}$ is bounded.
For any $y∈Y$, write $y+Z=\sum_{j=1}^m v_j(y_j+Z)$ for some $v_1,…,v_m∈ℝ$; then $y-\sum_{j=1}^m v_jy_j∈Z=TX$, so $y-\sum_{j=1}^m v_jy_j=T(x)$ for some $x∈X$, so $y=\hat{T}(x,(v_1, …, v_m))$. Hence $\hat{T}$ is surjective.
By the open mapping theorem, $\hat{T}$ is open, so $\hat{T}\bigl(X×(ℝ^m∖\{0\})\bigr)$ is open. This set equals $Y∖Z$: if $T(x)+\sum_j v_jy_j∈Z$ then $\sum_j v_j(y_j+Z)=0$ in $Y/Z$, so $v=0$ since $\{y_j+Z\}$ is a basis; conversely, any $y∉Z$ has a representation as above with $v≠0$. Hence $Y∖Z$ is open, so $Z$ is closed. ∎