Every Finite Group Is A Galois Group
That every finite group is a Galois group is pretty well known (and in fact this post is basically just a transcription of the proof in Lang’s Algebra).
The starting point here is the following theorem of Artin, telling us that we can carve out Galois extensions with any group of field automorphisms we like.
Theorem (Artin)
Let \(K\) be a field and \(G\) a finite group of field automorphisms of \(K\), then \(K\) is a Galois extension of the fixed field \(K^G\) with Galois group \(G\), moreover \([K:K^G] = \#G\).
Proof
Pick any \(\alpha \in K\) and consider a maximal subset \(\{\sigma_1, \ldots, \sigma_n\}\subseteq G\) for which the \(\sigma_i \alpha\) are all distinct (we may as well take \(\sigma_1\) to be the identity, so that \(\alpha\) itself appears among the \(\sigma_i\alpha\)). Now any \(\tau \in G\) must permute the \(\sigma_i \alpha\): it is injective as an automorphism, and if some \(\tau\sigma_i \alpha\) were different from every \(\sigma_j\alpha\) then we could enlarge our set of \(\sigma\)s by adding this \(\tau\sigma_i\), contradicting maximality.
So \(\alpha\) is a root of
\begin{equation*}
f_\alpha(X) = \prod_{i=1}^n (X- \sigma_i\alpha)\text{,}
\end{equation*}
note that, by the above, \(f_\alpha\) is fixed by every \(\tau \in G\), since \(\tau\) merely permutes its factors. So all the coefficients of \(f_\alpha\) are in \(K^G\). By construction \(f_\alpha\) is a separable polynomial, as the \(\sigma_i\alpha\) were chosen distinct; note that \(f_\alpha\) also splits into linear factors in \(K\).
The above was for arbitrary \(\alpha \in K\), so we have just shown directly that \(K\) is a separable and normal extension of \(K^G\), which is the definition of a Galois extension. Every element of \(K\) is a root of a separable polynomial over \(K^G\) of degree at most \(\#G\), so every finitely generated subextension of \(K/K^G\) is simple (by the primitive element theorem) of degree at most \(\#G\), and hence \([K:K^G] \le \#G\). But we also have a group of \(\#G\) automorphisms of \(K\) that fix \(K^G\), so \([K : K^G] \ge \#G\) and hence \([K : K^G] = \#G\); comparing orders, the Galois group is exactly \(G\).
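For instance, with \(K = \mathbb{Q}(\sqrt 2)\) and \(G = \{\operatorname{id}, \sigma\}\) where \(\sigma(\sqrt 2) = -\sqrt 2\), taking \(\alpha = \sqrt 2\) the construction above produces
\begin{equation*}
f_\alpha(X) = (X - \sqrt 2)(X + \sqrt 2) = X^2 - 2\text{,}
\end{equation*}
whose coefficients indeed lie in \(K^G = \mathbb{Q}\), and \([K : K^G] = 2 = \#G\).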
So now with this in hand we just have to realise our group as a group of field automorphisms of some field.
Corollary
Every finite group is a Galois group.
Proof
Let \(k\) be an arbitrary field and \(G\) any finite group. Take \(K = k(\overline g:g\in G)\), the field of rational functions over \(k\) in indeterminates \(\overline g\) indexed by the elements of \(G\). There is a natural action of \(G\) on \(K\): the element \(h\) sends \(\overline g\) to \(\overline {hg}\), and this permutation of the indeterminates extends to a field automorphism of \(K\) fixing \(k\). The action is faithful (since \(h\cdot \overline e = \overline h\)), so \(K\) together with this finite group of automorphisms satisfies the hypotheses of Artin's theorem, and hence \(K/K^G\) is a Galois extension with Galois group \(G\).
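For instance, take \(k = \mathbb{Q}\) and \(G = \mathbb{Z}/2 = \{e, g\}\). Then \(K = \mathbb{Q}(\overline e, \overline g)\) with \(g\) swapping the two indeterminates, the fixed field is the field of symmetric rational functions
\begin{equation*}
K^G = \mathbb{Q}(\overline e + \overline g,\ \overline e\,\overline g)\text{,}
\end{equation*}
and \(K/K^G\) is a degree \(2\) Galois extension with Galois group \(\mathbb{Z}/2\).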
It is interesting to note that we could have started with any field we liked, and both fields in the resulting extension contain the base field we picked.
They won’t necessarily share much more with it, but the characteristic will have to be the same, so we can run this construction in whatever our favourite characteristic is.
§3.5. Proof of Bezout's Theorem, weak form.
Recall: Theorem 3.1. Let $C,D$ be curves in $ℂ ℙ^2$ of degrees $m,n$, with no common component. Then $C,D$ intersect in at most $mn$ points.
Proof. Let $C, D$ be curves in $ℂ ℙ^2$ of degrees $m, n$, defined by $P(x, y, z), Q(x, y,z)$ of degrees $m, n$. Suppose $p_1, …,p_{m n+1}$ are distinct points in $C∩D$. We will show that $P, Q$ have a common factor, so $C, D$ have a common component.
Choose a point $q ∈ ℂ ℙ^2$ with $q ∉ C$, $q ∉ D$, and such that $q$ does not lie on any line through two of $p_1, …, p_{mn+1}$. After applying a projective transformation, we can assume $q=[1,0,0]$. Let $p_j=[a_j, b_j, c_j]$, with $b_j,c_j$ not both zero as $p_j ≠[1,0,0]$. Consider the resultant $R(y, z)=R_{P, Q}(y,z)$ of $P$ and $Q$ with respect to $x$.
Now: $x-a_j$ divides $P(x, b_j, c_j)$ and $Q(x, b_j, c_j)$
as $p_j ∈ C ∩ D$
$⇒ R(b_j, c_j)=0$ by Cor 3.8.
$⇒(c_j y-b_j z) ∣ R(y, z)$, as $R$ homogeneous.
As $[1,0,0]$ does not lie on the line through $(a_i, b_i, c_i)$ and $(a_j, b_j, c_j)$, the linear forms $c_i y-b_i z$ and $c_j y-b_j z$ are not proportional, hence coprime, for $i ≠ j$.
Hence $(c_1 y-b_1z) ⋯(c_{mn+1} y-b_{mn+1} z) ∣ R(y, z)$.
But $R$ is homogeneous of degree $mn$, by Lemma 3.10, and L.H.S. has degree $m n+1$, so $R=0$.
Hence $P(x, y, z), Q(x, y, z)$ have a common factor by Lemma 3.11, say $P=S ⋅ P_1$ and $Q=S ⋅ Q_1$ with $S$ nonconstant. Then any component of the curve $S=0$ is a component of both $C$ and $D$, so $C,D$ have a common component.
Contrapositively, if $C, D$ have no common component, there are at most $mn$ distinct points in $C∩D$. ∎
Define the intersection multiplicity $I_p(C, D)$ of $C, D$ at a point $p=[a, b, c] ∈ C∩D$ (with $q=[1,0,0]$ and coordinates chosen as above) to be the largest positive integer $k$ such that $(cy-b z)^k ∣ R(y, z)$.
If $p ∉ C ∩ D$, define $I_p(C,D)=0$.
The following two propositions are proved in Kirwan, §3.1.
Proposition 3.12. The definition of $I_p(C, D)$ is independent of the choice of $q$ and projective transformation.
Proposition 3.13. Let $p∈C∩D$. Then $I_p(C, D)=1$ if $p$ is a nonsingular point of $C$ and $D$, and the tangent lines to $C$ and $D$ at $p$ are distinct.
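For example (an illustrative computation, with $R$ the resultant of $P$ and $Q$ with respect to $x$ as above): let $C$ be the conic $P=x^2+y^2-z^2$ and $D$ the line $Q=3x+4y-5z$, which is tangent to $C$ at $p=[3,4,5]$; note that $q=[1,0,0]$ lies on neither curve. Then
\[
R(y,z)=\det\begin{pmatrix}1&0&y^2-z^2\\3&4y-5z&0\\0&3&4y-5z\end{pmatrix}=(4y-5z)^2+9(y^2-z^2)=(5y-4z)^2,
\]
which is homogeneous of degree $mn=2$, and the largest power of $cy-bz=5y-4z$ dividing $R$ is $2$, so $I_p(C,D)=2$. This is consistent with Proposition 3.13, since the tangent lines of $C$ and $D$ at $p$ coincide.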
Sheet 3 Q6: A Theorem Of Paley And Wiener
(a) Let $f ∈ \mathrm{L}^2(ℝ)$ and assume that $f(x)=0$ for a.e. $x>0$. Prove that the function
\[
F(ζ) ≝ ∫_{-∞}^0 f(x) \mathrm{e}^{-\mathrm{i} ζ x} \mathrm{d} x
\]
is well-defined and holomorphic in the upper half-plane $ℍ=\{ζ ∈ ℂ: \operatorname{Im}(ζ)>0\}$. Show that $F$ satisfies
\[
\frac{1}{2 π} ∫_{ℝ}|F(ξ+\mathrm{i} η)|^2 \mathrm{d} ξ=∫_{-∞}^0|f(x)|^2 \mathrm{e}^{2 η x} \mathrm{d} x ≤\|f\|_2^2
\]
for all $η>0$. Next, show that
\[
F(⋅+\mathrm{i} η) → \widehat{f} \text { in } \mathrm{L}^2(ℝ) \text { as } η \searrow 0
\]
Solution.
For $ζ ∈ ℍ$ write $η=\operatorname{Im}(ζ)>0$. First, $F(ζ)$ is well-defined since $|f(x)\mathrm{e}^{-\mathrm{i}ζx}|=|f(x)|\mathrm{e}^{ηx}≤\frac12|f(x)|^2+\frac12\mathrm{e}^{2ηx}$, which is integrable on $(-∞,0]$. For $z∈ℂ$, $z≠0$,
\[
\frac{F(ζ+z)-F(ζ)}{z}=∫_{-∞}^0 f(x)\,\mathrm{e}^{-\mathrm{i} ζ x}\,\frac{\mathrm{e}^{-\mathrm{i} z x}-1}{z}\,\mathrm{d}x .
\]
Let $z=α+\mathrm{i}β$ with $|β|<η$. Since $|\mathrm{e}^w-1|≤|w|\max(1,\mathrm{e}^{\operatorname{Re} w})$ and $x≤0$, we get $\bigl|\frac{\mathrm{e}^{-\mathrm{i}zx}-1}{z}\bigr|≤|x|\,\mathrm{e}^{-|β|x}$, so
\[
\Bigl|f(x)\mathrm{e}^{-\mathrm{i}ζ x}\frac{\mathrm{e}^{-\mathrm{i}zx}-1}{z}\Bigr|≤\frac12|f(x)|^2+\frac12|x|^2\mathrm{e}^{2(η-|β|)x},
\]
which is integrable on $(-∞,0]$ because $|β|<η$. By dominated convergence the difference quotient converges as $z→0$ (to $∫_{-∞}^0(-\mathrm{i}x)f(x)\mathrm{e}^{-\mathrm{i}ζx}\,\mathrm{d}x$),
so $F$ is holomorphic in $ℍ$.
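The remaining two claims follow from Plancherel's theorem; here is a sketch, assuming the convention $\widehat g(ξ)=∫_ℝ g(x)\mathrm{e}^{-\mathrm{i}ξx}\,\mathrm{d}x$ with $\|\widehat g\|_2^2=2π\|g\|_2^2$ (which matches the factor $\frac1{2π}$ in the statement). For $η>0$ set $g_η(x)=f(x)\mathrm{e}^{ηx}\mathbf{1}_{(-∞,0)}(x)∈\mathrm{L}^1∩\mathrm{L}^2$, so that $F(ξ+\mathrm{i}η)=\widehat{g_η}(ξ)$. Then
\[
\frac1{2π}∫_ℝ|F(ξ+\mathrm{i}η)|^2\,\mathrm{d}ξ=\|g_η\|_2^2=∫_{-∞}^0|f(x)|^2\mathrm{e}^{2ηx}\,\mathrm{d}x≤\|f\|_2^2,
\]
since $\mathrm{e}^{2ηx}≤1$ for $x≤0$. Moreover $F(⋅+\mathrm{i}η)-\widehat f=\widehat{g_η-f}$ and
\[
\frac1{2π}\|F(⋅+\mathrm{i}η)-\widehat f\|_2^2=\|g_η-f\|_2^2=∫_{-∞}^0|f(x)|^2(1-\mathrm{e}^{ηx})^2\,\mathrm{d}x→0
\]
as $η↘0$ by dominated convergence (the integrand is bounded by $|f|^2$), giving $F(⋅+\mathrm{i}η)→\widehat f$ in $\mathrm{L}^2(ℝ)$.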
(b) Optional. Assume $F: ℍ → ℂ$ is a holomorphic function satisfying the bound
\[
\sup_{η>0} ∫_{ℝ}|F(ξ+\mathrm{i} η)|^2 \mathrm{d} ξ<+∞ .
\]
Prove that $F(⋅+\mathrm{i} η)$ converges in $\mathrm{L}^2(ℝ)$ as $η ↘ 0$ to the Fourier transform $\widehat{f}$ of a function $f ∈ \mathrm{L}^2(ℝ)$ vanishing a.e. on $(0, ∞)$.
Sheet 2 Q6
This question deals with how to define tangent lines at singular points. For simplicity we work in $ℂ^2$ rather than $ℂℙ^2$.
Let $C$ be a curve in $ℂ^2$ defined by $Q(x, y)=0$ for $Q(x, y)$ a complex polynomial without repeated factors. Define the multiplicity $m$ of $C$ at a point $(a, b) ∈ C$ to be the smallest positive integer $m$ such that some $m^{\text{th}}$ partial derivative of $Q$ at $(a, b)$ is nonzero (so $(a, b)$ is a singularity of $C$ if and only if $m>1$). Consider the polynomial
\[
\sum_{i+j=m} \frac{∂^m Q}{∂ x^i ∂ y^j}(a, b) \frac{(x-a)^i(y-b)^j}{i ! j !} .
\]
(i) Show this factorizes as a product of $m$ linear polynomials of the form\[α(x-a)+β(y-b).\]The lines defined by the vanishing of these linear polynomials are called the $m$ tangent lines to $C$ at $(a, b)$.
(ii) Show that if $m=1$ this definition agrees with the definition given in lectures for the tangent line at a nonsingular point.
(iii) For the nodal cubic $y^2=x^3+x^2$ and the cuspidal cubic $y^2=x^3$, find all the singular points in $ℂ^2$, and for each singular point, find the multiplicities and tangent lines.
Solution.
(i) This is a homogeneous polynomial in two variables $x-a,y-b$ of degree $m$.
By Q1 it factorizes as a product of linear polynomials over $ℂ$.
(ii) If $m=1$, at a nonsingular point, the polynomial
\[
\frac{∂Q}{∂x}(a,b)(x-a)+\frac{∂Q}{∂y}(a,b)(y-b)
\]
agrees with the definition of the tangent line given on page 18.
(iii) For $Q=y^2-x^3-x^2,\frac{∂Q}{∂y}=2y=0⇒y=0$
$\frac{∂Q}{∂x}=-3x^2-2x=0⇒x=0$ or $x=-\frac23$
$Q(0,0)=0,Q(-\frac23,0)≠0$.
So the curve has exactly one singular point, $(0,0)$.
$\frac{∂^2Q}{∂y^2}(0,0)=2≠0$, so the multiplicity is $m=2$. The polynomial is\[\sum_{i+j=2} \frac{∂^2 Q}{∂ x^i ∂ y^j}(0,0) \frac{x^iy^j}{i ! j !}=y^2-x^2=(y-x)(y+x),\]so there are 2 tangent lines $y±x=0$ at $(0,0)$.
For $Q=y^2-x^3,\frac{∂Q}{∂y}=2y=0⇒y=0$
$\frac{∂Q}{∂x}=-3x^2=0⇒x=0$
$\frac{∂^2Q}{∂y^2}(0,0)=2≠0$
$Q(0,0)=0$, so the curve has one singular point $(0,0)$ with multiplicity $m=2$; the polynomial is $y^2$, so there is a single repeated tangent line $y=0$ (counted twice) at $(0,0)$.
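As a check, the two branches of the nodal cubic through the origin can be written as $y=±x\sqrt{1+x}$ for small $x$, with tangent lines $y=±x$ at $(0,0)$, while the cuspidal cubic is the image of $t↦(t^2,t^3)$, which is tangent to the line $y=0$ at the origin; both agree with the computations above.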
$X^{q-1}+\cdots+X+1$ Irreducible Over $\Bbb F_p$ If $p$ Is A Primitive Root Mod $q$
Proof:
The hypothesis is that $p^{q-1}$ is the smallest power of $p$ that is congruent to $1$ modulo $q$.
Now, what are the orders of the (cyclic) groups $\Bbb F_{p^m}^\times$? They are, of course, $p^m-1$, and a field $\Bbb F_{p^m}$ contains a primitive $q$-th root of unity if and only if $q|(p^m-1)$, i.e. if and only if $p^m\equiv1\pmod q$. Thus our hypothesis says that $\Bbb F_{p^{q-1}}$ is the first extension of $\Bbb F_p$ that contains a primitive $q$-th root of unity $\zeta_q$. In other words, $\zeta_q$ generates an extension of $\Bbb F_p$ of degree $q-1$, so its minimal polynomial has degree $q-1$; since $\zeta_q$ is a root of $X^{q-1}+\cdots+X+1$, which is monic of degree $q-1$, that polynomial is the minimal polynomial and in particular is irreducible over $\Bbb F_p$.
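For example, take $q=5$ and $p=2$: the powers of $2$ mod $5$ are $2,4,3,1$, so $2$ has order $4=q-1$ and is a primitive root mod $5$; correspondingly $X^4+X^3+X^2+X+1$ is irreducible over $\Bbb F_2$ (it has no roots in $\Bbb F_2$ and is not divisible by the only irreducible quadratic $X^2+X+1$). By contrast, $11\equiv1\pmod 5$ has order $1$, and indeed $X^4+X^3+X^2+X+1$ splits into linear factors over $\Bbb F_{11}$ (e.g. $3^5=243\equiv1\pmod{11}$, so $3$ is a root).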
Elementary Proof Of The Uniform Boundedness Theorem
arXiv: Alan D. Sokal
Uniform boundedness theorem.
$X$ a Banach space
$Y$ a normed linear space
$ℱ$ a family of bounded linear operators from $X$ to $Y$.
If $ℱ$ is pointwise bounded (i.e., $\sup_{T ∈ ℱ} \|Tx\| < ∞$ for all $x ∈ X$),
then $ℱ$ is norm-bounded (i.e., $\sup_{T ∈ ℱ} \|T\| < ∞$).
Lemma. Let $T$ be a bounded linear operator from a normed linear space $X$ to a normed linear space $Y$.
Then for any $x ∈ X$ and $r > 0$, we have
\[
\sup\limits_{x' ∈ B(x,r)} \| Tx' \| ≥ \|T\| r ,
\]
where $B(x,r) = \{x' ∈ X \colon\: \|x'-x\| < r \}$.
Proof. For $ξ ∈ X$ we have
\[
\max\{ \| T(x+ξ) \| , \| T(x-ξ) \| \}
≥
\frac12 [ \| T(x+ξ) \| + \| T(x-ξ) \| ]
≥
\| T ξ \| ,
\]
where the second $≥$ uses the triangle inequality in the form $\| α-β \| ≤ \|α\| + \|β\|$.
Now take the supremum over $ξ ∈ B(0,r)$.
∎
Proof of the uniform boundedness theorem.
Suppose $ℱ$ is not norm-bounded, i.e. $\sup_{T ∈ ℱ} \|T\| = ∞$,
and choose $(T_n)_{n=1}^∞$ in $ℱ$ such that $\|T_n\| ≥ 4^n$.
Then set $x_0 = 0$, and for $n ≥ 1$ use the lemma to
choose inductively $x_n ∈ X$
such that $\| x_n - x_{n-1} \| ≤ 3^{-n}$
and $\| T_n x_n \| ≥ \frac{2}{3} 3^{-n} \| T_n \|$
(such an $x_n$ exists: by the lemma with $x=x_{n-1}$ and $r=3^{-n}$, $\sup_{x' ∈ B(x_{n-1},3^{-n})} \|T_n x'\| ≥ 3^{-n}\|T_n\| > \frac{2}{3} 3^{-n}\|T_n\|$).
The sequence $(x_n)$ is Cauchy, hence convergent to some $x ∈ X$;
and it is easy to see that
$\| x-x_n \| ≤\sum_{i=n+1}^{∞} 3^{-i}=\frac{1}{2} 3^{-n}$ and hence $\|T_nx-T_nx_n\|≤\|T_n\|\,\|x-x_n\|≤\frac{1}{2} 3^{-n}\|T_n\|$.
By the triangle inequality,
\[\| T_n x \| ≥\|T_nx_n\|-\|T_nx-T_nx_n\|≥\frac{1}{6} 3^{-n} \| T_n \| ≥ \frac{1}{6} (4/3)^n
→ ∞
\]
so $ℱ$ is not pointwise bounded.
∎
Sheet 3 Q5
Define for $ϕ ∈ 𝒮(ℝ)$ the function
\[
Φ(z)≝\frac{1}{π \mathrm{i}} ∫_{ℝ} \frac{ϕ(t)}{t-z} \mathrm{d} t, z ∈ ℍ,
\]
where $ℍ=\{z ∈ ℂ: \operatorname{Im}(z)>0\}$.
Prove that $Φ$ is holomorphic. Next, using for instance the formula
\[
Φ(x+\mathrm{i} y)=\frac{1}{π \mathrm{i}}⟨(t-\mathrm{i} y)^{-1}, τ_x ϕ⟩
\]
and an identity from question 3 on Problem Sheet 2, show that $Φ$ extends by continuity to the closed upper half plane $\overline{ℍ}$ and that
\[
Φ(x+\mathrm{i} 0)≝\lim_{y↘0} Φ(x+\mathrm{i} y)=ϕ(x)+\mathrm{i} ℋ(ϕ)(x),
\]
where $ℋ(ϕ)$ is the Hilbert transform of $ϕ$.
Solution.
To show $Φ$ is holomorphic, we just need to show it is complex differentiable. Fix $z∈ℍ$. For $h≠0$,
\[
\frac{Φ(z+h)-Φ(z)}{h}=\frac1{π\mathrm{i}}\int_{-∞}^{∞}\frac{ϕ(t)}{h}\Bigl(\frac1{t-(z+h)}-\frac1{t-z}\Bigr)\mathrm{d}t=\frac1{π\mathrm{i}}\int_{-∞}^{∞}\frac{ϕ(t)}{(t-(z+h))(t-z)}\mathrm{d}t\xrightarrow{\text{DCT}}\frac{1}{π\mathrm{i}}\int_{-∞}^{∞}\frac{ϕ(t)}{(t-z)^2}\mathrm{d}t\quad\text{as }|h|→0,
\]
so $Φ$ is complex differentiable at $z$.
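(For the dominated convergence step: if $|h|≤\frac12\operatorname{Im}(z)$ then $|t-z|≥\operatorname{Im}(z)$ and $|t-(z+h)|≥\frac12\operatorname{Im}(z)$ for all $t∈ℝ$, so the integrand is bounded by $2|ϕ(t)|/\operatorname{Im}(z)^2$, which is integrable since $ϕ∈𝒮(ℝ)$.)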
\[
\lim_{y↘0}Φ(x+\mathrm{i} y)=\lim_{y↘0}\frac{1}{π \mathrm{i}}⟨(t-\mathrm{i} y)^{-1}, τ_x ϕ⟩\\
=\frac{1}{π\mathrm{i}}⟨(t-\mathrm{i}0)^{-1},τ_xϕ⟩\\
=\frac{1}{π\mathrm{i}}⟨\operatorname{pv}(\frac{1}{t})+π\mathrm{i}δ_0,τ_xϕ⟩\\
=\mathrm{i}⟨\operatorname{pv}(\frac{1}{πt}),τ_xϕ⟩+ϕ(x)\\
=\mathrm{i}ℋ(ϕ)(x)+ϕ(x)
\]
Sheet 2 Q5
Let $A$ and $B$ be two symmetric $3 × 3$ complex matrices and suppose the equation $\det(x A-B)=0$ has three distinct solutions $λ, μ, ν$.
(i) Show that there is an invertible matrix $P$ such that $P^T A P=I$ and $P^T B P=\operatorname{diag}(λ, μ, ν)$.
(ii) Deduce that, after a projective transformation, the equations of the conics defined by $A$ and $B$ can be put in the form
\[
x^2+y^2+z^2=0, λ x^2+μ y^2+ν z^2=0 .
\]
(iii) Show that these two conics intersect in four distinct points.
Proof.
(i) Since $\det(x A-B)=(\det A)x^3+⋯$ has three distinct roots, we must have $\det A≠0$. By Theorem 6 there exists an invertible matrix $Q$ such that $Q^TAQ=I$. Write $B'=Q^TBQ$, so that $B'$ is also symmetric. Then
\[
\det Q^T \det(λ A-B) \det Q=\det(Q^T(λA-B) Q)=\det(λ I-B')
\]
and so the roots of $\det(λ A-B)=0$ are the eigenvalues of $B'$. By assumption these are distinct, so we have a basis of eigenvectors $v_1,v_2,v_3$ with eigenvalues $λ_1,λ_2,λ_3$.
Let $v_l,v_k$ be eigenvectors with eigenvalues $λ_l≠λ_k$. Since $B'$ is symmetric,
\[
λ_lv_l^Tv_k=(B'v_l)^Tv_k=v_l^T(B'v_k)=λ_kv_l^Tv_k
\]
and since $λ_l≠λ_k$, we have
\[
v_l^Tv_k=0
\]
Thus $v_1,v_2,v_3$ is an orthogonal basis.
First note that $v_i^Tv_i≠0$: if $v_i^Tv_i=0$ then, since also $v_i^Tv_j=0$ for $j≠i$, we would have $v_i^Tw=0$ for all $w∈ℂ^3$, contradicting $v_i^T\bar{v_i}=\sum_k|(v_i)_k|^2≠0$.
So we may replace $v_i$ by $\frac{v_i}{\sqrt{v_i^Tv_i}}$ [no complex conjugate here, so this is not the usual normalisation] and obtain an orthogonal basis with $v_i^Tv_i=1$.
If $R$ is the change of basis matrix, $R^TB'R=\operatorname{diag}(λ_1,λ_2,λ_3)$ and $R^TR = I$. Putting $P=QR$ we get the result.
A more rigorous approach is to invoke the standard theorem on the simultaneous diagonalization of two symmetric matrices by congruence, which applies here because $\det(xA-B)=0$ has distinct roots.
(ii) Since
\[v^T Av=(P^{-1}v)^T(P^TAP)(P^{-1}v)=(P^{-1}v)^TI(P^{-1}v)\]
after the projective transformation $[v]↦[P^{-1}v]$, the equation is $x^2+y^2+z^2=0$.
Since
\[v^T Bv=(P^{-1}v)^T(P^TBP)(P^{-1}v)=(P^{-1}v)^T\operatorname{diag}(λ_1,λ_2,λ_3)(P^{-1}v)\]
after the projective transformation $[v]↦[P^{-1}v]$, the equation is $λ x^2+μ y^2+ν z^2=0$.
(iii) Regarded as linear equations in $(x^2,y^2,z^2)$, the two equations define distinct projective lines, so they have a unique common solution
\[[x^2,y^2,z^2]=[μ-ν,ν-λ,λ-μ]\]
so the conics intersect in the four distinct points $[x,y,z]=[\sqrt{μ-ν}, ± \sqrt{ν-λ}, ± \sqrt{λ-μ}]$ (all three coordinates are nonzero because $λ,μ,ν$ are distinct, so these really are four distinct points).
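As a quick check, these points satisfy both equations:
\[
(μ-ν)+(ν-λ)+(λ-μ)=0, \qquad λ(μ-ν)+μ(ν-λ)+ν(λ-μ)=0 .
\]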
Classify conics $C⊆ℂℙ^2$ up to projective transformations: nonsingular $⇒$ $x^2+y^2+z^2=0$.
Classify pairs of conics $C,D$ up to projective transformations?
The question suggests there is a projective transformation such that $C:x^2+y^2+z^2=0$, $D:λx^2+μy^2+νz^2=0$. If $λ,μ,ν$ are distinct, $C,D$ intersect at 4 distinct points.
What if $λ,μ,ν$ are not distinct?
Say $μ=ν≠λ$.
Then $C,D$ are tangent at the two points $[0,1,±\mathrm{i}]$.
There are two ways to transform matrices: congruence $A↦P^TAP$ and conjugation $A↦P^{-1}AP$.
Changing $C,D$ by a projective transformation corresponds to $(A,B)↦(P^TAP,P^TBP)$ with $P$ invertible.
Then $A^{-1}↦P^{-1}A^{-1}(P^T)^{-1}$.
So $\underbrace{A^{-1}B}_{\text{no longer symmetric}}↦P^{-1}A^{-1}(P^T)^{-1}P^TBP=P^{-1}(A^{-1}B)P$
Fact: matrices up to conjugation are classified by Jordan canonical form.
\[
\begin{pmatrix}λ&&\\&μ&\\&&ν\end{pmatrix},\qquad
\begin{pmatrix}λ&&\\&μ&\\&&μ\end{pmatrix},\qquad
\begin{pmatrix}λ&&\\&μ&1\\&&μ\end{pmatrix},\qquad
\begin{pmatrix}λ&&\\&λ&\\&&λ\end{pmatrix}
\]
Case: three intersection points (one of them a tangency) corresponds to $A^{-1}B$ conjugate to $\begin{pmatrix}λ&&\\&μ&1\\&&μ\end{pmatrix}$.
In this case $A,B$ cannot both be mapped to diagonal matrices by a congruence.
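For instance, with $A=I$ and $B=\operatorname{diag}(λ,μ,ν)$ as in (ii), $A^{-1}B=\operatorname{diag}(λ,μ,ν)$: distinct $λ,μ,ν$ (four intersection points) gives the first Jordan type above, while $μ=ν≠λ$ (two tangency points) gives the second.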
Paper 2020 Q3
(a) Let $p:(\tilde{X}, \tilde{b}) \rightarrow(X, b)$ be a based covering map between path-connected spaces. Define the degree of $p$ and show that it is well-defined (that is, independent of the choice of $b$).
Assume that $X$ is triangulated such that each simplex is contained in an elementary open set of $p$ and that the covering is of finite degree $d$. Derive a formula relating the Euler characteristics of $\tilde{X}$ and $X$.
Explain clearly the correspondence between covering spaces and subgroups of the fundamental group of $(X, b)$.
Answers:
[bookwork]
For each $k$-simplex in $X$ there are exactly $d$ simplices of dimension $k$ in $\tilde{X}$ lying over it. Hence $\chi(\tilde{X})=d \chi(X)$. [bookwork]
(b) Find the fundamental group of the space of orbits of the additive group $\mathbb{Z}$ of integers acting on $\mathbb{R}^{n} \backslash\{0\}$ (for $n>2$) by $m \bullet x=2^{m} x$, for $m \in \mathbb{Z}$, carefully justifying your answer.
Answer:
Let $X$ be the space of orbits $[x]$ for $x \in \mathbb{R}^{n} \backslash\{0\}:=\tilde{X}$ and $\pi: \tilde{X} \rightarrow X$ be the quotient map.
Claim: $\pi_{1}(X)=\mathbb{Z}$.
Proof: for each point $x \in \tilde{X}$ there exists a small (contractible) neighbourhood $U$ such that all the translates $m \bullet U$ for $m \in \mathbb{Z}$ are disjoint. The images under $\pi$ are all the same, defining an elementary neighbourhood $\bar{U}$ of $[x]$. Thus $\pi$ is a regular covering map with deck group $\mathbb{Z}$. Note $\tilde{X}$ is homotopy equivalent to the sphere $S^{n-1}$, so $\pi_{1}(\tilde{X})=\pi_{1}(S^{n-1})=0$ for $n>2$ and $\tilde{X}$ is the universal cover. By the correspondence theorem, $\pi_{1}(X)$ is isomorphic to the deck group, i.e. $\pi_{1}(X)=\mathbb{Z}$.
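Alternatively, as a quick sanity check: $\mathbb{R}^{n} \backslash\{0\} \cong S^{n-1} \times \mathbb{R}$ via $x \mapsto (x/|x|, \log |x|)$, under which the action $m \bullet x=2^{m} x$ becomes translation by $m \log 2$ in the $\mathbb{R}$ factor, so the orbit space is homeomorphic to $S^{n-1} \times S^{1}$ and $\pi_{1}(X) \cong \pi_{1}(S^{n-1}) \times \pi_{1}(S^{1})=\mathbb{Z}$ for $n>2$.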
(c) Let $S_{g}$ denote the fundamental group of an oriented surface of genus $g$.
(i) Show that $S_{g}$ is isomorphic to a finite index normal subgroup of $S_{2}$ if $g \geqslant 2$.
(ii) Show that $S_{g}$ is isomorphic to an index $n$ normal subgroup of $S_{h}$ if $g=n h-n+1$.
(iii) Show that any finite index subgroup of $S_{k}$ is isomorphic to $S_{g}$ for some $g$. Derive a necessary condition on $g$ in terms of $k$ for this to happen, and in particular show that $g \geqslant k$.
Let $N_{g}$ denote the fundamental group of a non-orientable surface of genus $g$. Is it true that every finite index subgroup of $N_{g}$ is isomorphic to $N_{k}$ for some $k$? Justify your answer.
Answers:
(i) The oriented surface $F_{g}$ of genus $g$ is a torus with another $g-1$ tori glued onto it (connected sum) which we arrange so that they can be cyclically permuted via a free action of $C_{g-1}$. This defines a regular covering $\pi: F_{g} \rightarrow F_{2}$ of degree $g-1$, and hence $S_{g}$ is a normal subgroup of $S_{2}$ of index $g-1$.
(ii) If $g=n(h-1)+1$ then we can think of $F_{g}$ as a torus with $n$ copies of $F_{h-1}$ attached to it which can be freely cyclically permuted by $C_{n}$, defining a normal covering $\pi: F_{g} \rightarrow F_{h}$ of degree $n$; hence $S_{g}$ is an index $n$ normal subgroup of $S_{h}$.
(iii) A finite index subgroup, say of index $n$, corresponds to a degree $n$ covering $\pi: \tilde{X} \rightarrow F_{k}$. But a finite covering of a compact oriented surface is again a compact oriented surface, say $F_{g}$. Hence the subgroup is isomorphic to $S_{g}$. The Euler characteristic formula tells us that we must have $2 g-2=n(2 k-2)$; since $n \geqslant 1$ and $k \geqslant 1$ this gives $2g-2 \geqslant 2k-2$, so $g \geqslant k$.
No: a covering of a non-orientable surface might be orientable, e.g. $\pi: S^{2} \rightarrow \mathbb{R} P^{2}$. Concretely, $N_{1}=\pi_{1}(\mathbb{R} P^{2})=\mathbb{Z} / 2$ and the trivial subgroup $\{e\}$ has index $2$ in it, but $\{e\}$ is not isomorphic to any $N_{k}$ since the fundamental group of every non-orientable surface is non-trivial.