Sheet 1 B3 Orthogonal Projection
$\def\ker{\operatorname{Ker}}\DeclareMathOperator{\im}{Im}$Let $X$ be a real Hilbert space and let $A ∈ ℬ(X)$ be a projection, i.e. $A^2=A$. Show that $\im A=\ker(I-A)$ and prove that the following are equivalent:
(1) $A=A^*$
(2) $(\im A)^⟂=\ker A$
(3) $‖A‖≤1$.
Deduce that either $‖A‖=1$ or $A=0$ provided that one of the above statements is true.
Solution.
$∀x∈\im A$, $x=Ay$ for some $y∈X$, so $(I-A)x=Ay-A^2y=0$; hence $\im A⊂\ker(I-A)$.
In the other direction, $∀x∈\ker(I-A)$, $x=Ax∈\im A$, so $\im A⊃\ker(I-A)$.
So $\im A=\ker(I-A)$. In particular, since $\ker(I-A)$ is closed, $\im A$ is closed.
Fact: $X=\im A+\im(I-A)$ [since $x=Ax+(I-A)x$], and both $\im A$ and $\im(I-A)$ are closed: $I-A$ is also a projection ($(I-A)^2=I-A$), so the above applies to it as well.
(1⇒2) Suppose $A=A^*$. Using 1.5 $∀T∈ℬ(X),\ker T=(\im T^*)^⟂$ we have $\ker A=(\im A^*)^⟂=(\im A)^⟂$.
(2⇒1) Suppose $(\im A)^⟂=\ker A$. Then $∀x,y∈X$, since $(I-A)x∈\ker A=(\im A)^⟂$ and $Ay∈\im A,$
\[⟨A^∗(I-A)x,y⟩=⟨(I-A)x,Ay⟩=0⇒A^∗(I-A)=0⇒A^∗=A^∗ A\]
Taking adjoints on both sides, we also have $A=A^∗ A$. Therefore $A^∗=A$.
(2⇒3) Suppose $(\im A)^⟂=\ker A$. We showed that $\im A$ is a closed subspace, so by the projection theorem $X=\im A⊕(\im A)^⟂=\im A⊕\ker A$. $∀x∈X$ write $x=y+z$ for some $y∈\im A,z∈\ker A$; then $‖x‖^2=‖y‖^2+‖z‖^2$. Since $Az=0$ we have $Ax=Ay=y$.
\[‖Ax‖=‖y‖≤‖x‖⇒‖A‖≤1\]
(3⇒1) Suppose $‖A‖≤1$, then $∀x,y∈X$, since $A(I-A)y=0,$
\[‖Ax‖=‖A(A x-(I-A)y)‖≤‖A x-(I-A)y‖\]
so\[\operatorname{dist}(A x, \im(I-A))=\inf_y‖A x-(I-A)y‖=‖A x‖\](taking $y=0$ gives ≤, and the inequality above gives ≥).
By 1.2.15, $⟨A x,(I-A)y⟩=0$ for all $x,y∈X$, so $A^∗(I-A)=0$ and $A^∗=A^∗ A$. Taking adjoints on both sides, we also have $A=A^∗ A$. Therefore $A^∗=A$.
Finally, by 2.2.2, $‖A‖=‖A^2‖≤‖A‖^2$, so either $A=0$ or $‖A‖≥1$.
Provided that one of (1)–(3) holds, (3) gives $‖A‖≤1$, so either $‖A‖=1$ or $A=0$.
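For a concrete illustration (not required by the question): on $ℝ^2$ take
\[A=\begin{pmatrix}1&1\\0&0\end{pmatrix},\qquad A^2=A,\quad A≠A^*.\]
Here $\im A=\operatorname{span}\{(1,0)\}$, $\ker A=\operatorname{span}\{(1,-1)\}≠(\im A)^⟂=\operatorname{span}\{(0,1)\}$, and $‖A(1,1)‖=‖(2,0)‖=2>\sqrt2=‖(1,1)‖$, so $‖A‖>1$: all three conditions fail together, as the equivalence predicts.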
Sheet 1 Q5
Let $f\colon ℝ → ℂ$ be a measurable function satisfying $|f(x)|≤𝖾^{-|x|}$ for almost all $x ∈ ℝ$. Prove that the Fourier transform $\hat{f}$ cannot have compact support unless $f(x)=0$ for almost all $x ∈ ℝ$.
By the differentiation rule (applicable since $x^nf(x)∈L^1(ℝ)$ for every $n$, thanks to the bound $|f(x)|≤𝖾^{-|x|}$),
\[\hat{f}'(ξ)=\int_ℝ-𝗂xf(x)𝖾^{-𝗂xξ}𝖽x,\]
and iterating, $\hat{f}$ is in $𝖢^∞(ℝ)$. Estimate using $|f(x)|≤𝖾^{-|x|}$:
\begin{align*}
\left|\hat f^{(n)}(ξ)\right|&=\left|\int_ℝ(-𝗂x)^nf(x)𝖾^{-𝗂xξ}𝖽x\right|\\
&≤\int_ℝ|x|^n𝖾^{-|x|}𝖽x\\
&=2\int_0^∞x^n𝖾^{-x}𝖽x\\
&=2n!\\
⇒&\frac{1}{\limsup_{n→∞}|\frac{\hat f^{(n)}(ξ)}{n!}|^{\frac{1}{n}}}≥\frac12
\end{align*}
so the Taylor series for $\hat f$ has radius of convergence at least $\frac12$ at every point. In fact $\hat f$ is real analytic on all of $ℝ$: by Taylor's theorem with Lagrange remainder, the remainder is at most $2|ξ-ξ_0|^{n+1}→0$ whenever $|ξ-ξ_0|<1$ (cf. the alternate proof below). By the identity theorem, $\hat f$ cannot have compact support unless it is identically $0$, and in that case $f(x)=0$ for almost all $x$ by injectivity of the Fourier transform on $L^1(ℝ)$.
Alternate proof:
Suppose, for contradiction, that $\hat{f}$ has compact support but is not identically $0$; then $∅⊊\operatorname{supp}\hat{f}⊊ℝ$, so the boundary $∂(\operatorname{supp}\hat{f})≠∅$. Select $ξ_0∈∂(\operatorname{supp}\hat{f})$. Since $\hat{f}$, and hence every $\hat{f}^{(k)}$, vanishes on the open complement of $\operatorname{supp}\hat{f}$, and $ξ_0$ is a limit of points of that complement, continuity gives $\hat{f}^{(k)}(ξ_0)=0$ for all $k$. So by Taylor expansion with Lagrange remainder
\[\hat{f}(ξ)=\sum_{k=0}^{n}\frac{\cancelto0{\hat{f}^{(k)}(ξ_0)}}{k!}(ξ-ξ_0)^k+\frac{\hat{f}^{(n+1)}(θ)}{(n+1)!}(ξ-ξ_0)^{(n+1)}\]
where $θ=θ_{n,ξ}$ lies between $ξ$ and $ξ_0$,
\[|\hat{f}(ξ)|=\frac{|\hat{f}^{(n+1)}(θ)|}{(n+1)!}|ξ-ξ_0|^{n+1}≤2|ξ-ξ_0|^{n+1}\]
If $|ξ-ξ_0|<1$, then letting $n→+∞$ yields $\hat{f}(ξ)=0$. Thus $\hat{f}$ vanishes on a neighbourhood of $ξ_0$, contradicting $ξ_0∈∂(\operatorname{supp}\hat{f})⊂\operatorname{supp}\hat{f}$.
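As a sanity check (not part of the proof), take $f(x)=𝖾^{-|x|}$ itself; with the convention $\hat f(ξ)=\int_ℝ f(x)𝖾^{-𝗂xξ}𝖽x$ used above,
\[\hat f(ξ)=\int_0^∞ 𝖾^{-x}\bigl(𝖾^{-𝗂xξ}+𝖾^{𝗂xξ}\bigr)𝖽x=2\int_0^∞ 𝖾^{-x}\cos(xξ)\,𝖽x=\frac{2}{1+ξ^2},\]
which is indeed real analytic on all of $ℝ$ and has no compact support.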
Nearest Point Projection In Uniformly Convex Banach Spaces
Theorem 4.12. Suppose $X$ is a uniformly convex Banach space. For any point $x∈X$ and any nonempty closed convex set $K⊆X$, there is a nearest point to $x$ in $K$.
$L^p$ spaces with $1<p<∞$ are uniformly convex.
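For instance, every Hilbert space is uniformly convex: by the parallelogram law,
\[\Bigl\|\frac{x+y}2\Bigr\|^2=\frac{‖x‖^2+‖y‖^2}{2}-\Bigl\|\frac{x-y}2\Bigr\|^2,\]
so if $‖x‖=‖y‖=1$ and $‖x-y‖≥ε$ then $\bigl\|\frac{x+y}2\bigr\|≤\sqrt{1-\frac{ε^2}{4}}$. In particular, Theorem 4.12 recovers the nearest-point property behind the projection theorem used above.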
Mean Curvature Of Graph Over Its Tangent Plane
Let $S$ be a regular surface in $\mathbb{R}^3$ and $p\in S$ a point on the surface.
By the implicit function theorem $S$ can be locally written as a graph of a function, e.g. $V\cap S = \{ (x,y,z) \in \mathbb{R}^3 : (x,y)\in U, z=f(x,y)\}$ for some open neighbourhood $V$ of $p$, open set $U\subset \mathbb{R}^2$ and some smooth function $f: U \rightarrow \mathbb{R}.$
By choosing local coordinates we can identify $U$ with part of the tangent plane of $S$ at $p$; more precisely, we may arrange that $p$ corresponds to $(0,0)\in U$, with $f(0,0)=0$ and $\nabla f(0,0)=0$. In this case the mean curvature at $p$ is given by $H=\frac{f_{xx}+f_{yy}}{2}$ (half the trace of the Hessian of $f$ at $(0,0)$), and the principal curvatures are the eigenvalues of that Hessian; after a further rotation of the $(x,y)$-coordinates diagonalising the Hessian, they are $f_{xx}$ and $f_{yy}$.
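As a worked example (with the sign convention for the normal left implicit): for a sphere of radius $R$ through $p=(0,0,0)$ with tangent plane $z=0$, locally $z=f(x,y)=R-\sqrt{R^2-x^2-y^2}$, so $\nabla f(0,0)=0$ and the Hessian at $(0,0)$ is $\operatorname{diag}(\tfrac1R,\tfrac1R)$; hence both principal curvatures are $\tfrac1R$ and $H=\tfrac1R$, as expected.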
MSE
Integer Solutions To $y^2+2=x^3$
Theorem 3.5. The only integer solutions to $y^2+2=x^3$ are $x=3, y=±5.$
Proof. Factor the equation as
\[\tag{3.2}
(y+\sqrt{-2})(y-\sqrt{-2})=x^3 .
\]
We claim that the two factors on the left are coprime (the only integers in $\mathbf{Z}[\sqrt{-2}]$ dividing both of them are units). Suppose, to the contrary, that some irreducible $\alpha$ divides both factors. Then $\alpha$ divides $(y+\sqrt{-2})-(y-\sqrt{-2})=2 \sqrt{-2}=-(\sqrt{-2})^3$. Now $\sqrt{-2}$ is irreducible in $\mathbf{Z}[\sqrt{-2}]$, since it has norm 2, so if it factors into two elements of $\mathbf{Z}[\sqrt{-2}]$, one of them must have norm 1 and hence be a unit. Therefore, by unique factorisation into irreducibles, $\alpha$ is an associate of $\sqrt{-2}$. Modifying $\alpha$ by a unit, we can assume that $\alpha=\sqrt{-2}$.
Thus $\sqrt{-2} \mid(y+\sqrt{-2})$, and so $\sqrt{-2} \mid y$. Taking norms, we see that $2 \mid y^2$, and hence $2 \mid y$. But then, returning to the original equation $y^2+2=x^3$, we see that $2 \mid x$, and hence $y^2 \equiv 6\pmod{8}$. This is impossible, since squares are $\equiv 0,1$ or $4\pmod 8$, and so indeed the two factors on the left in (3.2) are coprime.
Using unique factorisation again, it follows that both $y \pm \sqrt{-2}$ are associates of cubes in $\mathbf{Z}[\sqrt{-2}]$. Since the only units in $\mathbf{Z}[\sqrt{-2}]$ are $\pm 1$, and $-1$ is a cube, both $y \pm \sqrt{-2}$ are cubes. Suppose that
\[
y+\sqrt{-2}=(a+b \sqrt{-2})^3,
\]
where $a, b \in \mathbf{Z}$. Expanding out and comparing coefficients of $\sqrt{-2}$, we obtain
\[
1=b(3 a^2-2 b^2) .
\]
This is a very easy equation to solve over the integers. We must have either $b=-1$, in which case $3 a^2-2=-1$, which is impossible, or else $b=1$, in which case $3 a^2-2=1$ and so $a= \pm 1$. This leads to $y+\sqrt{-2}=(\pm 1+\sqrt{-2})^3$ and so $y= \pm 5$.
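For completeness, the expansion behind the comparison of coefficients, and the final check:
\[(a+b\sqrt{-2})^3=a(a^2-6b^2)+b(3a^2-2b^2)\sqrt{-2},\qquad(\pm1+\sqrt{-2})^3=\mp5+\sqrt{-2},\]
so $y=\mp5$ and $x^3=y^2+2=27$, i.e. $x=3$, as claimed.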
Composition Of The Dirac Delta With A Function
Composition with a function
More generally, the delta distribution may be composed with a smooth function $g(x)$ in such a way that the familiar change of variables formula holds:
\[\int_{\mathbb{R}}\delta\bigl(g(x)\bigr)f\bigl(g(x)\bigr)\left|g'(x)\right|dx=\int_{g(\mathbb{R})}\delta(u)\,f(u)\,du\]
provided that $g$ is a continuously differentiable function with $g'$ nowhere zero. That is, there is a unique way to assign meaning to the distribution $\delta\circ g$ so that this identity holds for all compactly supported test functions $f$. Therefore, the domain must be broken up to exclude the $g'=0$ point. This distribution satisfies $\delta(g(x))=0$ if $g$ is nowhere zero, and otherwise if $g$ has a real root at $x_0$, then
\[\delta(g(x))=\frac{\delta(x-x_{0})}{|g'(x_{0})|}.\]
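For instance (a standard special case), for $g(x)=ax-b$ with $a≠0$ this gives
\[\delta(ax-b)=\frac{\delta\bigl(x-\tfrac{b}{a}\bigr)}{|a|}.\]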
Example 1:
$g(x)=2x-x^2$ maps $(0,1)$ bijectively onto $(0,1)$, with
$(2x-x^2)'=2(1-x)>0$ for all $x∈(0,1)$.
If $δ_n∈𝒟'(0,1)$ is a delta sequence, then $2(1-x)\,δ_n(2x-x^2)$ is a delta sequence.
Note that $\int_a^1 n(1-x)^{n-1}\,dx=(1-a)^n→\begin{cases}0&0<a≤1\\1&a=0\end{cases}$
Example 2:
$g(x)=x^2$ maps $(0,1)$ bijectively onto $(0,1)$, with
$(x^2)'=2x>0$ for all $x∈(0,1)$.
If $δ_n∈𝒟'(0,1)$ is a delta sequence, then $2x\,δ_n(x^2)$ is a delta sequence.
Note that $\int_a^1 \frac{x^{\frac1n-1}}{n}\,dx=1-\sqrt[n]{a}→\begin{cases}0&0<a≤1\\1&a=0\end{cases}$
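Assuming $δ_n$ denotes the concrete families appearing in the two notes above ($δ_n(x)=n(1-x)^{n-1}$ in Example 1 and $δ_n(x)=\frac1n x^{\frac1n-1}$ in Example 2; this identification is my reading of the notes), the composed sequences can be computed explicitly and stay within the same families:
\[2(1-x)\,δ_n(2x-x^2)=2n(1-x)\bigl((1-x)^2\bigr)^{n-1}=2n(1-x)^{2n-1},\qquad 2x\,δ_n(x^2)=\frac{2}{n}x^{\frac2n-1},\]
i.e. the members with parameters $2n$ and $n/2$ respectively.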
Balancing Ext
$\DeclareMathOperator{\Hom}{Hom}\DeclareMathOperator{\Tot}{Tot}$
Need to show that the maps
\[\Hom(A,I)→\Tot^Π \Hom(P,I)←\Hom(P,B)\]
are quasi-isomorphisms $\Leftrightarrow$ their cones are acyclic.
Observation: $\operatorname{Cone}\left(\Hom(A,I)→\Tot^Π\Hom(P,I)\right)$ is $\Tot^Π$ of the double complex $\Hom(P,I)$ augmented by $\Hom(A,I)[-1]$. For this augmented double complex, $\Tot^Π$ is exact (acyclic) by the acyclic assembly lemma: $\Hom(P_p,-)$ is exact since $P_p$ is projective, and $\Hom(-,I^q)$ is exact since $I^q$ is injective; the cone of $\Hom(P,B)→\Tot^Π\Hom(P,I)$ is handled symmetrically.
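Spelling out the exactness being used (here $P_\bullet→A$ is a projective resolution and $B→I^\bullet$ an injective resolution, as the notation suggests): for each $q$,
\[0→\Hom(A,I^q)→\Hom(P_0,I^q)→\Hom(P_1,I^q)→\cdots\]
is exact because $I^q$ is injective and $\cdots→P_1→P_0→A→0$ is exact; symmetrically, for each $p$,
\[0→\Hom(P_p,B)→\Hom(P_p,I^0)→\Hom(P_p,I^1)→\cdots\]
is exact because $P_p$ is projective and $0→B→I^0→I^1→\cdots$ is exact. These are the rows (resp. columns) of the two augmented double complexes, so the acyclic assembly lemma applies.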
\[R^*\Hom(A,-)(B)=H^*\Hom(A,I)≅H^*\Tot^Π(\Hom(P,I))≅H^*\Hom(P,B)=R^*\Hom(-,B)(A)\]
Artin's Lemma
Jacobson
Let $G$ be a finite group of automorphisms of a field $E$, and $F=E^{G}$. Then $[E:F]$ is finite and no larger than $|G|$.
Proof: Let $n = |G|$, and let $m$ be any integer greater than $n$. We shall show that any $m$ elements $x_{1}, x_{2}, \dots, x_{m}$ of $E$ are linearly dependent over $F$. This will prove that $[E:F]$ cannot exceed $n$.
Let $V$ be the $E$-vector space of solutions $(a_{1}, a_{2}, \dots, a_{m})$ of the $n$ simultaneous linear equations
\[e(x_{1})a_{1} + e(x_{2})a_{2} + \dots + e(x_{m})a_{m} = 0\]
for each element $e$ of $G$. Since $m > n$, this space $V$ has positive dimension. We shall show that it contains a nonzero vector all of whose coordinates are in $F$. The $e=1$ equation will then yield the $F$-linear dependence $a_{1}x_{1} + a_{2}x_{2} + \dots + a_{m}x_{m} = 0$ on the $x_{j}$.
In fact, we shall show that a minimal-weight nonzero vector in $V$ is necessarily proportional to one with all coordinates in $F$. Here the weight of a vector $(a_{1}, a_{2}, \dots, a_{m})$ is its number of nonzero coordinates, $\#\{j: a_j\ne0\}$. To do this, we shall use the fact that if $(a_{1}, a_{2}, \dots, a_{m})$ is in $V$, then so is $(e(a_{1}), e(a_{2}), \dots, e(a_{m}))$ for each $e$ in $G$. In fact, we show that in every nonzero $E$-vector subspace of $E^{m}$ that is invariant under $G$, a minimal-weight vector is proportional to one in $F^{m}$.
Suppose $b = (b_{1}, b_{2}, \dots, b_{m})$ is a nonzero vector of minimal weight. Then some $b_{j}$ is nonzero. Without loss of generality, we may assume that $b_{j} = 1$ (replace $b$ by $b/b_{j}$). For each $e$ in $G$, the vector $b' = b - e(b)$ then has weight strictly less than the weight of $b$, because $b'$ has zero coordinates wherever $b$ does, and its $j$-th coordinate vanishes even though that of $b$ does not. Since $b'$ again lies in the ($G$-invariant) subspace and has smaller weight than the minimal weight of a nonzero vector, $b'$ must be the zero vector. Thus $b_{k} = e(b_{k})$ for each $k$ in $[1, m]$ and each $e$ in $G$. Therefore, each $b_{k}$ is in $E^{G} = F$, and we are done.
QED
Once we find that in fact $[E:F] = |G|$, we also obtain “independence of characters”: the elements of $G$ are $E$-linearly independent as functions from $E$ to $E$. To see this, suppose on the contrary that we had elements $c_{e}$ of $E$, not all zero, such that the sum of $c_{e}e$ over $e$ in $G$ was the zero function on $E$. Again let $F = E^{G}$, and set $n = [E:F] = |G|$. Let $(x_{1}, x_{2}, \dots, x_{n})$ be a basis for $E$ over $F$. Consider the square matrix $M$ over $E$ with $n$ rows $[e(x_{1}), e(x_{2}), \dots, e(x_{n})]$ (one row for each element $e$ of $G$). Then $cM = 0$, where $c$ is the row vector $(c_{e})$. Hence $M$ is degenerate, and thus has a nontrivial column kernel. But the column kernel is exactly the solution space considered in the proof above (with $m = n$), so that argument gives us a nonzero element $(a_{1}, \dots, a_{n})$ of the column kernel all of whose entries are in $F$. Taking $e=1$, we again obtain an $F$-linear dependence on our basis vectors $x_{i}$, reaching the desired contradiction.
QED
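A small sanity check (not from the text): take $E=ℂ$ and $G=\{\mathrm{id},\,z\mapsto\bar z\}$, so $F=E^{G}=ℝ$ and indeed $[E:F]=2=|G|$. Independence of characters then says that $\mathrm{id}$ and conjugation are $ℂ$-linearly independent as functions $ℂ→ℂ$: if $c_1z+c_2\bar z=0$ for all $z$, then $z=1$ and $z=𝗂$ give $c_1+c_2=0$ and $c_1-c_2=0$, so $c_1=c_2=0$.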
X Banach, Y Closed Subspace ⟹ X/Y Banach
Use "if absolute convergence implies convergence in a normed space, then the space is a Banach space."
If $\sum_n‖x_n+Y‖<∞$:
Since $‖x_n+Y‖=\inf_{y∈Y}‖x_n+y‖$, $∃y_n∈Y$ s.t. $‖x_n+y_n‖≤2^{-n}+‖x_n+Y‖$, so
$\sum_n‖x_n+y_n‖≤1+\sum_n‖x_n+Y‖<∞.$
Since $X$ is Banach, the series $\sum_n(x_n+y_n)$ converges in $X$ to some $s$, and then
\[\Bigl\|\sum_{n=1}^N(x_n+Y)-(s+Y)\Bigr\|=\Bigl\|\sum_{n=1}^N(x_n+y_n)-s+Y\Bigr\|≤\Bigl\|\sum_{n=1}^N(x_n+y_n)-s\Bigr\|→0,\]
so $\sum_n(x_n+Y)$ converges in $X/Y$. Hence absolute convergence implies convergence in $X/Y$, and $X/Y$ is Banach.