- Let $Z_n,n≥1$, be random variables and $c∈ℝ$. Show that $Z_n→c$ in probability if and only if $Z_n→c$ in distribution.
- Fix $λ∈(0,∞)$. For each $r∈(0,∞)$, consider a random variable $X_r$ with probability density function
$$
f_{r,λ}(x)=\begin{cases}\frac{1}{Γ(r)}x^{r-1}λ^r e^{-λ x},&x∈(0,∞),\\0,&\text { otherwise. }\end{cases}
$$
Recall that this means that $X_r$ has the Gamma distribution with shape parameter $r$ and rate parameter $λ$.
- Carefully derive the moment generating function of $X_r$.
- Show that $X_r/r$ converges in distribution as $r→∞$. Does this convergence hold in probability?
[You may use standard theorems about moment generating functions without proof.]
- Define the Poisson process $\left(N_t,t≥0\right)$ of rate $λ∈(0,∞)$ in terms of properties of its increments over disjoint time intervals.
- Show that the first arrival time $T_1=\inf\left\{t≥0:N_t=1\right\}$ is exponentially distributed with parameter $λ∈(0,∞)$.
- Show that $T_n=\inf\left\{t≥0:N_t=n\right\}$ has a Gamma distribution for all $n≥1$.
[If you use the inter-arrival time definition of the Poisson process, you are expected to prove that it is equivalent to the definition given in (i).]
- Let $R_n,n≥1$, be independent Gamma$(n,λ)$ random variables.
Let $Y_t=\#\left\{n≥1:R_n≤t\right\},t≥0$. Show that $Y_t$ is not a Poisson process with rate $λ$, but does satisfy $ℙ\left(Y_t<∞\text{ for all }t≥0\right)=1$.
[Hint: Let $B_n=1_{\left\{R_n≤t\right\}}$ and write $Y_t=\sum_{n≥1}B_n$.]
- State the Central Limit Theorem.
- Let $(R,S)$ be a pair of random variables with joint probability density function
$$
f(r,s)=\begin{cases}\frac{1}{4}e^{-{|s|}},&(r,s)∈[-1,1]×ℝ\\0,&\text { otherwise. }\end{cases}
$$
Also consider independent identically distributed random variables $\left(R_n,S_n\right),n≥1$, with the same joint distribution as $(R,S)$.
- Find the marginal probability density functions of $R$ and $S$.
- For any $s∈ℝ$, determine $$ \lim _{n→∞}ℙ\left(\frac{1}{\sqrt{n\operatorname{var}(S)}}\sum_{k=1}^n S_k≤s\right) $$
- For any $r,s∈ℝ$, show that $$ \lim _{n→∞}ℙ\left(\frac{1}{\sqrt{n\operatorname{var}(R)}}\sum_{k=1}^n R_k≤r,\frac{1}{\sqrt{n\operatorname{var}(S)}}\sum_{k=1}^n S_k≤s\right)=ℙ(W≤r,Z≤s) $$ for a pair of random variables $(W,Z)$ whose joint distribution you should determine.
- Consider the transformation $T:ℝ^2→ℝ^2$ given by $T(x,y)=(x-y,x+y)$. Let $(R,S)$ be as in (b) and $(X,Y)$ such that $(R,S)=T(X,Y)$.
- Derive the joint probability density function of $(X,Y)$.
- Find the marginal probability density functions of $X$ and $Y$.
- Find the correlation of $X$ and $Y$.
- Consider a Markov chain on a countable state space $S$ and let $i ∈ S$.
- Define the notions of recurrence and positive recurrence of $i$.
- Suppose that $i$ is positive recurrent. State, without proof, the ergodic theorem for the long-term proportion of time the Markov chain spends in state $i$.
- An urn contains a total of $N ≥ 2$ balls, some white and the others black. Each step consists of two parts. A first ball is chosen at random and removed. A second ball is then chosen at random from those remaining. It is returned to the urn along with a further ball of the same colour. Denote by $Y_n$ the number of white balls after $n ≥ 0$ steps.
- Explain briefly why $\left(Y_n, n ≥ 0\right)$ is a Markov chain and determine its state space and transition matrix.
- Determine the communicating classes of this Markov chain and say whether their states are recurrent, and whether they are aperiodic. Justify your answers.
- Find all stationary distributions of this Markov chain.
- Now consider a Markov chain $\left(Z_n, n ≥ 0\right)$, on $I=\{0,1,2, …, N\}$ with the transition matrix $P$ whose non-zero entries are
$$
p_{k, j}= \begin{cases}\frac{N-k}{N} \frac{k+1}{N+1} & \text { if } j=k+1, \\ \frac{N-k}{N} \frac{N-k}{N+1}+\frac{k}{N} \frac{k}{N+1} & \text { if } j=k, \\ \frac{k}{N} \frac{N-k+1}{N+1} & \text { if } j=k-1 .\end{cases}
$$
- Show that the uniform distribution is stationary for this Markov chain. Hence, or otherwise, determine all stationary distributions of this Markov chain.
- For a state $k ∈ I$, consider the successive visits $$ V_1^{(k)}=\inf \left\{n ≥ 1: Z_n=k\right\} \text { and } V_{m+1}^{(k)}=\inf \left\{n ≥ V_m^{(k)}+1: Z_n=k\right\}, m ≥ 1 . $$ Explain why visits to $k$ occur in groups of independent geometrically distributed consecutive visits, and determine the parameter of this geometric distribution.
- Determine the expected time between two groups of visits to state $k$.
- Is the following statement true or false? ‘For any two states $k_1 ≠ k_2$, there is, on average, one visit to $k_2$ between the first and second visits to $k_1$.’ Provide a proof or counterexample.
Solution
- [Sheet 1 Q6] Suppose $Z_n\overset d→c$. Since $F_c$ is continuous on $ℝ∖\{c\}$, for any $ϵ>0$, \begin{align*} ℙ\left(\left|Z_n-c\right|≥ϵ\right) &=ℙ\left(Z_n≤c-ϵ\right) +ℙ\left(Z_n≥c+ϵ\right)\\ &≤ F_{Z_n}(c-ϵ)+1-F_{Z_n}(c+ϵ/2)\\&→0+1-1=0,\end{align*}so $Z_n\overset P→c$. Conversely, suppose $Z_n\overset P→c$. Since $F_c$ is continuous at every $x≠c$, it suffices to show that for any $ϵ>0$, $F_{Z_n}(c-ϵ)→0$ and $F_{Z_n}(c+ϵ)→1$. Indeed,\[F_{Z_n}(c-ϵ)=ℙ(Z_n≤c-ϵ)≤ℙ(\left|Z_n-c\right|≥ϵ)→0,\]and\[1-F_{Z_n}(c+ϵ)=ℙ(Z_n>c+ϵ)≤ℙ(\left|Z_n-c\right|≥ϵ)→0,\]so $F_{Z_n}(c+ϵ)→1$. Hence $Z_n\overset d→c$.
- [Sheet 2 Q2] For $t<λ$,\begin{align*}M_{X_r}(t)&=∫_0^∞e^{tx}\frac1{Γ(r)} λ^r x^{r-1} e^{-λ x}dx\\ &=\frac{λ^r}{(λ-t)^r}∫_0^∞\underbrace{\frac1{Γ(r)}(λ-t)^r x^{r-1} e^{-(λ-t)x}}_{\text{p.d.f. of }\operatorname{Gamma}(r, λ-t)}dx\\ &=\left(1-\frac tλ\right)^{-r}.\end{align*}For $t≥λ$ the integral diverges, so $M_{X_r}(t)=∞$ there.
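A quick numerical sanity check of this formula, not part of the written solution: a minimal sketch assuming NumPy is available, with arbitrary choices of $r$, $λ$ and $t<λ$.

```python
import numpy as np

rng = np.random.default_rng(0)
r, lam, t = 3.5, 2.0, 0.7                          # arbitrary shape, rate, and t < lam
x = rng.gamma(shape=r, scale=1/lam, size=10**6)     # Gamma(r, rate=lam) samples

mc_mgf = np.exp(t * x).mean()                       # Monte Carlo estimate of E[e^{tX_r}]
exact  = (1 - t/lam)**(-r)                          # derived formula (1 - t/lam)^{-r}
print(mc_mgf, exact)                                # the two values should agree closely
```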
- For $t<rλ$, $M_{X_r/r}(t)=M_{X_r}\left(\frac tr\right)=\left(1-\frac t{rλ}\right)^{-r}→e^{t/λ}$ as $r→∞$, which is the moment generating function of the constant $1/λ$.
By the continuity theorem for moment generating functions, $X_r/r\overset d→1/λ$.
Since the limit is a constant, part (a) gives $X_r/r\overset p→1/λ$ as well.
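The concentration of $X_r/r$ around $1/λ$ can be illustrated by sampling for increasing $r$; an illustrative sketch assuming NumPy, with arbitrary $λ$ and values of $r$.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 2.0
for r in (1, 10, 100, 1000):
    x = rng.gamma(shape=r, scale=1/lam, size=10**5) / r   # samples of X_r / r
    # fraction of samples within 0.05 of the limit 1/lam -> should approach 1
    print(r, np.mean(np.abs(x - 1/lam) < 0.05))
```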
- The counting process $N_t, t ≥ 0$ is a Poisson process of rate $λ$ if:
- $N_0=0$.
- If $\left(s_1, t_1\right),\left(s_2, t_2\right), …,\left(s_k, t_k\right)$ are disjoint intervals in $ℝ_{+}$, then the increments $N\left(s_1, t_1\right]$, $N\left(s_2, t_2\right], …, N\left(s_k, t_k\right]$ are independent, where $N\left(s_i, t_i\right]=N_{t_i}-N_{s_i}$.
- For any $s<t$, the increment $N(s, t]$ has Poisson distribution with mean $λ(t-s)$.
- Since $s↦N_s$ is non-decreasing with $N_0=0$, we have $T_1>t⇔N_s=0\text{ for all }s∈[0,t]⇔N_t=0$.
As $N_t∼\operatorname{Poisson}(λt)$, $ℙ(T_1>t)=ℙ(N_t=0)=e^{-λt}$, so $T_1∼\operatorname{Exp}(λ)$.
- Let $X_n∼\operatorname{Gamma}(n,λ)$. By Taylor's formula with integral remainder, \[e^{λt}=\sum_{k=0}^{n-1}\frac{(λt)^k}{k!}+∫_0^t\frac{1}{(n-1)!}x^{n-1}λ^n e^{λ(t-x)}dx.\]Dividing by $e^{λt}$, \begin{split}ℙ(X_n≤t)&=∫_0^t\frac{1}{(n-1)!}x^{n-1}λ^n e^{-λx}dx\\&=1-e^{-λt}\sum_{k=0}^{n-1}\frac{(λt)^k}{k!}.\end{split} On the other hand, $ℙ(T_n≤t)=ℙ(N_t≥n)=1-e^{-λt}\sum_{k=0}^{n-1}\frac{(λt)^k}{k!}$. So $T_n$ has the same distribution as $X_n$, i.e. $T_n∼\operatorname{Gamma}(n,λ)$.
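This identification can be checked numerically using the standard inter-arrival construction of the Poisson process (i.i.d. $\operatorname{Exp}(λ)$ gaps); a sketch assuming NumPy/SciPy, with arbitrary $n$, $λ$, $t$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
lam, n, trials = 1.5, 4, 10**5
# n-th arrival time = sum of n i.i.d. Exp(lam) inter-arrival times
T_n = rng.exponential(scale=1/lam, size=(trials, n)).sum(axis=1)

t = 3.0
emp = np.mean(T_n <= t)                          # empirical P(T_n <= t)
exact = stats.gamma(a=n, scale=1/lam).cdf(t)     # Gamma(n, rate=lam) cdf
print(emp, exact)                                # should agree closely
```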
- If $(Y_t,t≥0)$ were a Poisson process of rate $λ$, then $Y_t∼\operatorname{Poisson}(λt)$ and hence $ℙ(Y_t=0)=e^{-λt}$. But for $t>0$, using the independence of $R_1$ and $R_2$,\[ℙ(Y_t=0)=ℙ(R_n>t\text{ for all }n≥1)≤ℙ(R_1>t)ℙ(R_2>t)=e^{-λt}\bigl(1-ℙ(R_2≤t)\bigr)<e^{-λt},\]since $ℙ(R_2≤t)>0$. So $(Y_t)$ is not a Poisson process of rate $λ$.
Let $B_n=1_{\left\{R_n≤t\right\}}$ and write $Y_t=\sum_{n≥1}B_n$. By Tonelli's theorem,\begin{split}𝔼[Y_t]&=\sum_{n≥1}𝔼[B_n]\\&=\sum_{n≥1}ℙ(R_n≤t)\\&=\sum_{n≥1}∫_0^t\frac{1}{(n-1)!}x^{n-1}λ^n e^{-λx}dx\\&=∫_0^t\sum_{n≥1}\frac{1}{(n-1)!}x^{n-1}λ^n e^{-λx}dx\\&=∫_0^tλ\,dx=λt<∞.\end{split}So $ℙ(Y_t<∞)=1$ for each fixed $t$, and since $t↦Y_t$ is non-decreasing, $ℙ\left(Y_t<∞\text{ for all }t≥0\right)=ℙ\left(Y_n<∞\text{ for all }n∈ℕ\right)=1$, as a countable intersection of events of probability one.
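A small simulation supports both claims: $𝔼[Y_t]≈λt$, while $ℙ(Y_t=0)$ is visibly smaller than the Poisson value $e^{-λt}$. This is a sketch assuming NumPy; the sum is truncated at an arbitrary $n_{\max}$, which is harmless since $ℙ(R_n≤t)$ decays rapidly in $n$.

```python
import numpy as np

rng = np.random.default_rng(3)
lam, t, trials, n_max = 1.0, 2.0, 10**5, 50
# R_n ~ Gamma(n, rate=lam) independent; count how many fall in [0, t]
shapes = np.arange(1, n_max + 1)
R = rng.gamma(shape=shapes, scale=1/lam, size=(trials, n_max))
Y_t = (R <= t).sum(axis=1)

print(Y_t.mean(), lam * t)                  # E[Y_t] ~= lam * t
print((Y_t == 0).mean(), np.exp(-lam*t))    # P(Y_t=0) is strictly smaller than e^{-lam t}
```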
- Let $X_1,X_2,…$ be i.i.d. random variables with mean $μ$ and variance $σ^2∈(0,∞)$.
Let $S_n=X_1+X_2+⋯+X_n$. Then ${S_n-nμ\over\sqrt nσ}\stackrel d→N(0,1)$ as $n→∞$.
- $f_R(r)=∫ f(r,s)\mathrm{~d}s=2∫_0^∞\frac14e^{-s}\mathrm{~d}s=\frac12$ for $r∈[-1,1]$; otherwise $f_R(r)=0$.
$f_S(s)=∫f(r,s)\mathrm{~d}r=∫_{-1}^1\frac14e^{-{|s|}}\mathrm{~d}r=\frac12e^{-{|s|}}$ for $s∈ℝ$.
- $𝔼[S]=∫_{-∞}^∞s\frac12e^{-{|s|}}\mathrm{~d}s=0$ by symmetry, and$$\operatorname{var}(S)=𝔼\left[S^2\right]=2∫_0^∞s^2\frac{1}{2}e^{-s}d s=2<∞.$$The $S_k$ are i.i.d. with mean $0$ and variance $2$, so by the CLT $\frac1{\sqrt{2n}}\sum_{k=1}^n S_k\stackrel d→Z$, where $Z∼N(0,1)$. Hence for all $s∈ℝ$,$$ℙ\left(\frac1{\sqrt{2n}}\sum_{k=1}^n S_k≤s\right)→ℙ(Z≤s).$$
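The variance and the normal limit can be checked numerically: $\frac12e^{-|s|}$ is the standard Laplace density, which NumPy samples directly. An illustrative sketch with arbitrary $n$ and test point $s$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, trials = 1000, 5000
S = rng.laplace(loc=0.0, scale=1.0, size=(trials, n))   # density (1/2) e^{-|s|}

print(S.var())                                           # ~= 2 = var(S)
sums = S.sum(axis=1) / np.sqrt(2 * n)                    # (1/sqrt(2n)) * sum_k S_k
s0 = 0.8
print(np.mean(sums <= s0), stats.norm.cdf(s0))           # CLT: close to Phi(s0)
```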
- $R∼U[-1,1]⇒𝔼(R)=0$, $\operatorname{var}(R)=\frac13<∞$. The $R_k$ are i.i.d., so by the CLT\[ℙ\left(\sqrt{\frac3n}\sum_{k=1}^n R_k≤r\right)→ℙ(W≤r),\text{ where $W∼N(0,1)$.}\]Moreover $f(r,s)=\frac12⋅\frac12e^{-{|s|}}=f_R(r)f_S(s)$, so $R$ and $S$ are independent (in particular $\operatorname{cov}(R,S)=0$); hence $\sum_{k=1}^n R_k$ and $\sum_{k=1}^n S_k$ are independent, and\begin{multline*} ℙ\left(\frac{1}{\sqrt{n\operatorname{var}(R)}}\sum_{k=1}^n R_k≤r,\frac{1}{\sqrt{n\operatorname{var}(S)}}\sum_{k=1}^n S_k≤s\right)\\=ℙ\left(\frac{1}{\sqrt{n\operatorname{var}(R)}}\sum_{k=1}^n R_k≤r\right)ℙ\left(\frac{1}{\sqrt{n\operatorname{var}(S)}}\sum_{k=1}^n S_k≤s\right)\\→ℙ(W≤r)ℙ(Z≤s)=ℙ(W≤r,Z≤s)\text{, where }W,Z\stackrel{\text{iid}}∼N(0,1). \end{multline*}
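The factorisation of the joint limit can also be seen in a simulation; a sketch assuming NumPy/SciPy, with arbitrary test points $r$ and $s$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, trials = 1000, 5000
R = rng.uniform(-1, 1, size=(trials, n))              # R_k ~ U[-1,1]
S = rng.laplace(0.0, 1.0, size=(trials, n))           # S_k, independent of R_k

W_n = R.sum(axis=1) / np.sqrt(n / 3)                  # normalised by sqrt(n var R)
Z_n = S.sum(axis=1) / np.sqrt(2 * n)                  # normalised by sqrt(n var S)

r0, s0 = 0.5, -0.3
emp = np.mean((W_n <= r0) & (Z_n <= s0))              # empirical joint probability
print(emp, stats.norm.cdf(r0) * stats.norm.cdf(s0))   # ~= Phi(r0) * Phi(s0)
```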
- $\left|\frac{∂(R,S)}{∂(X,Y)}\right|=\begin{vmatrix}1&-1\\1&1\end{vmatrix}=2$. By the transformation formula, $(X,Y)$ has joint p.d.f.\[f_{X,Y}(x,y)=2f_{R,S}(T(x,y))=2f_{R,S}(x-y,x+y)=\begin{cases}\frac{1}{2} e^{-{|x+y|}} & \text { if }{|x-y|}≤1, \\ 0 & \text { otherwise. }\end{cases}\]
- Since $f_{X,Y}(x,y)=f_{X,Y}(y,x)$, $X$ and $Y$ have the same marginal density, so it suffices to compute $f_Y$. Fix $y$; then $f_{X,Y}(x,y)>0⇔x∈[y-1, y+1]$ (a horizontal section of the region).
If $y≥\frac12$, then $x+y≥0$ for all $x∈[y-1, y+1]$, so\begin{aligned} f_Y(y) & =\int_{y-1}^{y+1} \frac{1}{2} e^{-(x+y)} d x\\ & =\frac{1}{2}\left(e-e^{-1}\right) e^{-2 y}, \end{aligned}and by the symmetry $f_Y(-y)=f_Y(y)$ we get $f_Y(y)=\frac{1}{2}\left(e-e^{-1}\right) e^{-2{|y|}}$ for ${|y|}≥\frac12$. If ${|y|}≤\frac12$, $x+y$ changes sign at $x=-y∈[y-1,y+1]$, so\begin{aligned} f_Y(y) &=\int_{y-1}^{-y}\frac{1}{2} e^{x+y} d x+\int_{-y}^{y+1}\frac{1}{2} e^{-x-y} d x\\ & =1-e^{-1}\cosh (2 y). \end{aligned}The same formulas give $f_X$.
- In b(iii) we found $\operatorname{cov}(R,S)=0$, so$$\operatorname{cov}(X, Y)=\operatorname{cov}\left(\frac{R+S}2, \frac{S-R}2\right)=\frac{1}{4}(\operatorname{var}S-\operatorname{var}R)=\frac14\left(2-\frac13\right)=\frac{5}{12},$$while $\operatorname{var}(X)=\operatorname{var}(Y)=\frac14(\operatorname{var}R+\operatorname{var}S)=\frac7{12}$. Hence the correlation coefficient is\[\frac{\operatorname{cov}(X, Y)}{\sqrt{\operatorname{var}(X)\operatorname{var}(Y)}}=\frac{\frac5{12}}{\sqrt{\frac7{12}\cdot\frac7{12}}}=\frac57.\]
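The correlation $5/7$ can be confirmed by sampling $(R,S)$ and applying $X=\frac{R+S}2$, $Y=\frac{S-R}2$; a minimal sketch assuming NumPy.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 10**6
R = rng.uniform(-1, 1, size=n)          # R ~ U[-1,1]
S = rng.laplace(0.0, 1.0, size=n)       # S with density (1/2) e^{-|s|}, independent of R
X, Y = (R + S) / 2, (S - R) / 2         # inverse of T(x,y) = (x-y, x+y)

print(X.var(), Y.var())                 # both ~= 7/12
print(np.corrcoef(X, Y)[0, 1], 5/7)     # correlation ~= 5/7
```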
- $i$ is recurrent$⇔ℙ_i(X_n=i\text{ for some }n≥1)=1⇔ℙ_i(\inf\{n≥1:X_n=i\}<∞)=1$
$i$ is positive recurrent$⇔m_i=𝔼_i(\inf\{n≥1:X_n=i\})<∞$
- Let $V_i(n)$ be the number of visits to state $i$ before time $n$. Then for any initial distribution, ${V_i(n)\over n}→\frac1{m_i}$ almost surely.
- $(Y_n,n≥0)$ satisfies the Markov property: each step depends only on the current composition of the urn, so for any $n_0$, conditionally on $Y_{n_0}$, the distribution of $(Y_n, n>n_0)$ is independent of $(Y_n, n<n_0)$ and depends only on the value of $Y_{n_0}$.
The total number of balls remains $N$, so the state space is $\{0,1,…,N\}$. If $Y_n=k$, there are four cases for (first ball removed, second ball chosen):

| removed | chosen | $Y_{n+1}$ | probability |
|---|---|---|---|
| W | W | $k$ | $\frac kN\frac{k-1}{N-1}$ |
| W | B | $k-1$ | $\frac kN\frac{N-k}{N-1}$ |
| B | W | $k+1$ | $\frac{N-k}N\frac{k}{N-1}$ |
| B | B | $k$ | $\frac{N-k}N\frac{N-k-1}{N-1}$ |

So the non-zero entries of the transition matrix are $p_{k,k+1}=p_{k,k-1}=\frac{k(N-k)}{N(N-1)}$ and $p_{k,k}=\frac{k(k-1)+(N-k)(N-k-1)}{N(N-1)}$.
- $0,N$ are absorbing states, hence recurrent and aperiodic.
The remaining states $\{1,…,N-1\}$ form a communicating class, since $p_{i, i-1}=p_{i, i+1}>0$ for $1 ≤ i ≤ N-1$. This class is transient: from any $1≤i≤N-1$ there is positive probability of being absorbed in $\{0,N\}$ before returning to $i$, so the return probability is strictly less than $1$. For $N≥3$ these states are aperiodic since $p_{i,i}>0$ (for $N=2$ the single interior state $1$ is never revisited, so the question of periodicity does not arise).
- For $π$ to be stationary, we need\begin{eqnarray*}π_0&=&π_0+\tfrac1Nπ_1⇒π_1=0\\π_N&=&π_N+\tfrac1Nπ_{N-1}⇒π_{N-1}=0\\π_i&=&p_{i-1,i}π_{i-1}+p_{i,i}π_i+p_{i+1,i}π_{i+1}\text{ for }0<i<N\end{eqnarray*}Starting from $π_1=0$ and using $p_{i+1,i}>0$, induction on $i$ gives $π_i=0$ for all $0<i<N$. So the stationary distributions are exactly $(λ, 0,0, …, 0,1-λ)$, $0 ≤ λ ≤ 1$.
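These conclusions about the urn chain can be checked numerically by writing out its transition matrix; a sketch assuming NumPy, with an arbitrary small $N$.

```python
import numpy as np

N = 6
P = np.zeros((N + 1, N + 1))
for k in range(N + 1):
    if 0 < k < N:
        P[k, k + 1] = (N - k) * k / (N * (N - 1))
        P[k, k - 1] = k * (N - k) / (N * (N - 1))
        P[k, k] = (k * (k - 1) + (N - k) * (N - k - 1)) / (N * (N - 1))
    else:
        P[k, k] = 1.0                    # 0 and N are absorbing

print(np.allclose(P.sum(axis=1), 1))     # rows sum to 1
for lam_ in (0.0, 0.3, 1.0):             # pi = (lam, 0, ..., 0, 1 - lam)
    pi = np.zeros(N + 1); pi[0], pi[-1] = lam_, 1 - lam_
    print(np.allclose(pi @ P, pi))       # all True: these are stationary
```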
- For π to be stationary, we need\begin{split}
π_0&=π_0p_{0,0}+π_1p_{1,0}\\
π_N&=π_{N-1}p_{N-1,N}+π_Np_{N,N}\\
π_j&=π_{j-1} p_{j-1, j}+π_j p_{j, j}+π_{j+1} p_{j+1, j}\text{ for }0 < j < N
\end{split}
We check that the uniform distribution $π_i=1 /(N+1)$, $0 ≤ i ≤ N$, is a solution:\begin{array}{l}
\frac1{N+1}=\frac1{N+1}\frac{N}{N+1}+\frac1{N+1}\frac1{N+1}\\
\frac1{N+1}=\frac1{N+1}\frac1{N+1}+\frac1{N+1}\frac{N}{N+1}\\
\frac1{N+1}=\frac1{N+1}\frac{k}{N} \frac{N-k+1}{N+1}+\frac1{N+1}\left(\frac{N-k}{N} \frac{N-k}{N+1}+\frac{k}{N} \frac{k}{N+1}\right)+\frac1{N+1} \frac{N-k}{N} \frac{k+1}{N+1}\text{ for }0 < k < N\end{array}
The first two identities are immediate, and in the third the numerators sum to $k(N-k+1)+(N-k)^2+k^2+(N-k)(k+1)=N(N+1)$, so it also holds.
This chain is irreducible: $p_{0,1}>0$, $p_{N,N-1}>0$, and $p_{i, i-1}>0$, $p_{i, i+1}>0$ for $1 ≤ i ≤ N-1$. Since the state space is finite and the chain is irreducible, the stationary distribution is unique, so the uniform distribution is the only stationary distribution.
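A quick numerical check that the uniform distribution satisfies $πP=π$; a sketch assuming NumPy, with an arbitrary $N$.

```python
import numpy as np

N = 7
P = np.zeros((N + 1, N + 1))
for k in range(N + 1):
    if k < N:
        P[k, k + 1] = (N - k) / N * (k + 1) / (N + 1)
    if k > 0:
        P[k, k - 1] = k / N * (N - k + 1) / (N + 1)
    P[k, k] = (N - k) / N * (N - k) / (N + 1) + k / N * k / (N + 1)

pi = np.full(N + 1, 1 / (N + 1))          # uniform distribution
print(np.allclose(P.sum(axis=1), 1))      # rows sum to 1
print(np.allclose(pi @ P, pi))            # uniform is stationary
```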
- Let $G$ be the number of consecutive visits to $k$ in a group, i.e. the number of successive time steps the chain stays at $k$ after arriving there. By the Markov property,\begin{split}&ℙ(Z_{n+m}≠k,Z_{n+m-1}=k,…,Z_{n+1}=k|Z_n=k)\\&=ℙ(Z_{n+m}≠k|Z_{n+m-1}=k)⋯ℙ(Z_{n+1}=k|Z_n=k)\\&=(1-p_{k,k})p_{k,k}^{m-1},\end{split}so $G$ has the geometric distribution with success probability $1-p_{k,k}$, where $p_{k,k}=\frac{(N-k)^2+k^2}{N(N+1)}$. By the strong Markov property applied at the end of each group, successive group lengths are independent.
- By Theorem 6.3(b), $π_k=1/m_k$, where $m_k$ is the mean return time to state $k$; since $π$ is uniform, $m_k=N+1$. Let $x=𝔼_k\left[V_1^{(k)} ∣ Z_1≠k\right]$; by the Markov property applied at the last visit of a group, $x$ is the expected time between two groups of visits to $k$. By the law of total expectation, \begin{split} N+1=m_k&=𝔼_k\left[V_1^{(k)} ∣ Z_1≠k\right] ℙ_k(Z_1≠k)+𝔼_k\left[V_1^{(k)} ∣ Z_1=k\right] ℙ_k\left(Z_1=k\right)\\&=x (1-p_{k,k})+p_{k,k},\end{split}so $x=1+\frac N{1-p_{k,k}}$.
- True. For an irreducible positive recurrent chain with stationary distribution $π$, the expected number of visits to a state $j$ strictly between successive visits to a state $i$ equals $π_j/π_i$. By the strong Markov property, the chain starts afresh from $k_1$ at the time of its first visit to $k_1$, so the expected number of visits to $k_2$ between the first and second visits to $k_1$ is $π_{k_2}/π_{k_1}=1$, since $π$ is uniform.
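A short simulation of $(Z_n)$ supports both $m_k=N+1$ and the final claim; a sketch assuming NumPy, with arbitrary $N$, $k_1$, $k_2$.

```python
import numpy as np

rng = np.random.default_rng(7)
N, steps = 5, 10**5
# transition matrix of (Z_n) as above
P = np.zeros((N + 1, N + 1))
for k in range(N + 1):
    if k < N: P[k, k + 1] = (N - k) / N * (k + 1) / (N + 1)
    if k > 0: P[k, k - 1] = k / N * (N - k + 1) / (N + 1)
    P[k, k] = ((N - k)**2 + k**2) / (N * (N + 1))

Z = np.empty(steps, dtype=int); Z[0] = 0
for n in range(1, steps):
    Z[n] = rng.choice(N + 1, p=P[Z[n - 1]])

k1, k2 = 2, 4
visits_k1 = np.flatnonzero(Z == k1)
print(np.diff(visits_k1).mean(), N + 1)          # mean return time to k1 ~= N+1
# average number of visits to k2 strictly between consecutive visits to k1
counts = [np.sum(Z[a + 1:b] == k2) for a, b in zip(visits_k1[:-1], visits_k1[1:])]
print(np.mean(counts))                           # ~= 1
```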