(a) We compare the truth table with the indicators.
$$
\begin{array}{ccc}
E_1 & E_2 & A \\
\hline
\text{T} & \text{T} & \text{T} \\
\text{T} & \text{F} & \text{T} \\
\text{F} & \text{T} & \text{T} \\
\text{F} & \text{F} & \text{F}
\end{array}
\qquad\qquad
\begin{array}{ccc}
I_1 & I_2 & I_A \\
\hline
1 & 1 & 1 \\
1 & 0 & 1 \\
0 & 1 & 1 \\
0 & 0 & 0
\end{array}
$$
Hence, $I_A$ is the indicator for $A$.
So,
$$
\begin{aligned}
\operatorname{Pr}(A) &= \operatorname{E}(I_A) \\
&= \operatorname{E}\{1 - (1 - I_1)(1 - I_2)\} \\
&= \operatorname{E}\{1 - 1 + I_1 + I_2 - I_1 I_2\} \\
&= \operatorname{E}(I_1) + \operatorname{E}(I_2) - \operatorname{E}(I_1 I_2) \\
&= \operatorname{Pr}(E_1) + \operatorname{Pr}(E_2) - \operatorname{Pr}(E_1 \land E_2)
\end{aligned}
$$
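A quick numerical illustration of part (a) in R (a sketch; the two events and the dependence between them are arbitrary choices for the example). Because $I_A = 1-(1-I_1)(1-I_2) = I_1+I_2-I_1I_2$ holds realization by realization, the two sides of the formula agree exactly on any simulated sample, not just in expectation.

set.seed(1)
n  <- 100000
u  <- runif(n); v <- runif(n)
I1 <- as.numeric(u < 0.5)            # indicator of E1
I2 <- as.numeric(u + v < 0.8)        # indicator of E2, dependent on E1
IA <- 1 - (1 - I1) * (1 - I2)        # indicator of (E1 or E2)
all.equal(mean(IA), mean(I1) + mean(I2) - mean(I1 * I2))   # TRUE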
(b) The indicator for $B$ is
$$
I_B = 1 - (1 - I_1)(1 - I_2)(1 - I_3)
$$
So,
$$
\begin{aligned}
\operatorname{Pr}(B) &= \operatorname{E}(I_B) \\
&= \operatorname{E}\{1 - (1 - I_1)(1 - I_2)(1 - I_3)\} \\
&= \operatorname{E}\{1 - 1 + I_1 + I_2 + I_3 - I_1 I_2 - I_1 I_3 - I_2 I_3 + I_1 I_2 I_3\} \\
&= \operatorname{E}(I_1) + \operatorname{E}(I_2) + \operatorname{E}(I_3) - \operatorname{E}(I_1 I_2) - \operatorname{E}(I_1 I_3) - \operatorname{E}(I_2 I_3) + \operatorname{E}(I_1 I_2 I_3) \\
&= \operatorname{Pr}(E_1) + \operatorname{Pr}(E_2) + \operatorname{Pr}(E_3) - \operatorname{Pr}(E_1 \land E_2) - \operatorname{Pr}(E_1 \land E_3) - \operatorname{Pr}(E_2 \land E_3) \\
&\quad + \operatorname{Pr}(E_1 \land E_2 \land E_3)
\end{aligned}
$$ | Let $E_1, E_2, E_3$ be events. Let $I_1, I_2, I_3$ be the corresponding indicators such that $I_1 = 1$ if $E_1$ occurs and $I_1 = 0$ otherwise.
(a) Let $I_A = 1 - (1 - I_1)(1 - I_2)$. Verify that $I_A$ is the indicator for the event $A$, where $A = (E_1 \lor E_2)$ (that is, "$E_1$ or $E_2$"), and show that
$$
\operatorname{Pr}(A) = \operatorname{Pr}(E_1) + \operatorname{Pr}(E_2) - \operatorname{Pr}(E_1 \land E_2)
$$
where $(E_1 \land E_2)$ means "$E_1$ and $E_2$".
(b) Find a formula, in terms of $I_1, I_2, I_3$, for $I_B$, the indicator for the event $B$, where $B = (E_1 \lor E_2 \lor E_3)$, and derive a formula for $\operatorname{Pr}(B)$ in terms of $\operatorname{Pr}(E_1), \operatorname{Pr}(E_2), \operatorname{Pr}(E_3), \operatorname{Pr}(E_1 \land E_2), \operatorname{Pr}(E_1 \land E_3), \operatorname{Pr}(E_2 \land E_3), \operatorname{Pr}(E_1 \land E_2 \land E_3)$. |
Let $R$ be “rain”, $\bar{R}$ be “dry”, $P$ be “rain predicted”
We require $\operatorname*{Pr}(R\mid P)$ . By Bayes’ theorem, this is
$$
\begin{aligned}
\operatorname{Pr}(R\mid P) &= \frac{\operatorname{Pr}(R)\operatorname{Pr}(P\mid R)}{\operatorname{Pr}(R)\operatorname{Pr}(P\mid R)+\operatorname{Pr}(\bar{R})\operatorname{Pr}(P\mid\bar{R})} \\
&= \frac{\frac{1}{3}\times\frac{3}{4}}{\frac{1}{3}\times\frac{3}{4}+\frac{2}{3}\times\frac{2}{5}} \\
&= \frac{15/60}{15/60+16/60}=\frac{15}{31} \\
&= 0.4839
\end{aligned}
$$ | In a certain place it rains on one third of the days. The local evening newspaper attempts to predict whether or not it will rain the following day. Three quarters of rainy days and three fifths of dry days are correctly predicted by the previous evening’s paper. Given that this evening’s paper predicts rain, what is the probability that it will actually rain tomorrow? |
Let $D$ be "1st defective item is the 13th to be made."
We require $\operatorname{Pr}(X = i \mid D)$ for $i = 0, \dots, 5$.
Now,
$$
\operatorname{Pr}(D \mid X = i) = \left(1 - \frac{i}{100}\right)^{12} \left(\frac{i}{100}\right)
$$
and
$$
\operatorname{Pr}(X = i) = \frac{1}{6}.
$$
By Bayes' theorem,
$$
\operatorname{Pr}(X = i \mid D) = \frac{\operatorname{Pr}(X = i) \operatorname{Pr}(D \mid X = i)}{\sum_{j=0}^{5} \operatorname{Pr}(X = j) \operatorname{Pr}(D \mid X = j)}
$$
and, since $\operatorname{Pr}(X = i) = \frac{1}{6}$ for all $i$,
$$
\operatorname{Pr}(X = i \mid D) = \frac{\operatorname{Pr}(D \mid X = i)}{\sum_{j=0}^{5} \operatorname{Pr}(D \mid X = j)}.
$$
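A minimal R sketch of this computation, using the likelihood $\operatorname{Pr}(D\mid X=i)=(1-i/100)^{12}(i/100)$ from above:

i <- 0:5
like <- (1 - i/100)^12 * (i/100)   # Pr(D | X = i)
post <- like / sum(like)           # Pr(X = i | D); the uniform prior cancels
round(cbind(i, like, post), 4)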
So we obtain the following table:
$$
\begin{array}{|c|c|c|}
\hline
i & \operatorname{Pr}(D \mid X = i) & \operatorname{Pr}(X = i \mid D) \\
\hline
0 & 0.0000 & 0.0000 \\
1 & 0.0089 & 0.0915 \\
2 & 0.0157 & 0.1620 \\
3 & 0.0208 & 0.2148 \\
4 & 0.0245 & 0.2529 \\
5 & 0.0270 & 0.2788 \\
\hline
\end{array}
$$
The sum of $\operatorname{Pr}(D \mid X = j)$ over $j = 0, \ldots, 5$ is approximately $0.0969$. | A machine is built to make mass-produced items. Each item made by the machine has a probability $p$ of being defective. Given the value of $p$ , the items are independent of each other. Because of the way in which the machines are made, $p$ could take one of several values. In fact $p=X/100$ where $X$ has a discrete uniform distribution on the interval [0, 5]. The machine is tested by counting the number of items made before a defective is produced. Find the conditional probability distribution of $X$ given that the first defective item is the thirteenth to be made. |
Let $D$ be “2 out of 5 imperfect.” Let $M$ be “machine defective” and let $\bar{M}$ be “machine not defective.”
We require $\operatorname*{Pr}(M\mid D)$ .
Now
$$
\mathrm{Pr}(D\mid M)=\left(\begin{array}{l}{{5}}\\ {{2}}\end{array}\right){0.2^{2}0.8^{3}}
$$
and
$$
\mathrm{Pr}(D\mid\bar{M})=\left(\begin{array}{l}{{5}}\\ {{2}}\end{array}\right)0.1^{2}0.9^{3}.
$$
By Bayes’ theorem
$$
\begin{aligned}
\operatorname{Pr}(M\mid D) &= \frac{\operatorname{Pr}(M)\operatorname{Pr}(D\mid M)}{\operatorname{Pr}(M)\operatorname{Pr}(D\mid M)+\operatorname{Pr}(\bar{M})\operatorname{Pr}(D\mid\bar{M})} \\
&= \frac{\frac{2}{5}\binom{5}{2}0.2^{2}0.8^{3}}{\frac{2}{5}\binom{5}{2}0.2^{2}0.8^{3}+\frac{3}{5}\binom{5}{2}0.1^{2}0.9^{3}} \\
&= \frac{2\times0.2^{2}\times0.8^{3}}{2\times0.2^{2}\times0.8^{3}+3\times0.1^{2}\times0.9^{3}} \\
&= \frac{0.04096}{0.06283} \\
&= 0.6519
\end{aligned}
$$ | There are five machines in a factory. Of these machines, three are working properly and two are defective. Machines which are working properly produce articles each of which has independently a probability of 0.1 of being imperfect. For the defective machines this probability is 0.2.
A machine is chosen at random and five articles produced by the machine are examined. What is the probability that the machine chosen is defective given that, of the five articles examined, two are imperfect and three are perfect? |
Prior probabilities: $\operatorname{Pr}(A)=0.6$, $\operatorname{Pr}(B)=0.2$, $\operatorname{Pr}(C)=0.2$.
Likelihoods: $\operatorname{Pr}(6\mid A)=1/6$, $\operatorname{Pr}(6\mid B)=0.8$, $\operatorname{Pr}(6\mid C)=0.04$.
Prior $\times$ likelihood:
$$
\begin{array}{r c l}{\operatorname{Pr}(A)\operatorname{Pr}(6\mid A)}&{=}&{0.6\times1/6=0.1}\\ {\operatorname{Pr}(B)\operatorname{Pr}(6\mid B)}&{=}&{0.2\times0.8=0.16}\\ {\operatorname{Pr}(C)\operatorname{Pr}(6\mid C)}&{=}&{0.2\times0.04=0.008}\end{array}
$$
$$
\operatorname*{Pr}(6)=0.1+0.16+0.008=0.268.
$$
$$
\operatorname*{Pr}(B\mid6)={\frac{0.16}{0.268}}=\underline{{0.597}}.
$$ | A dishonest gambler has a box containing 10 dice which all look the same. However there are actually three types of dice.
There are 6 dice of type $A$ which are fair dice with $\operatorname*{Pr}(6\mid A)=1/6$ (where $\operatorname*{Pr}(6\mid A)$ is the probability of getting a 6 in a throw of a type $A$ die). There are 2 dice of type $B$ which are biassed with $\operatorname*{Pr}(6\mid B)=0.8$ .There are 2 dice of type $C$ which are biassed with $\operatorname*{Pr}(6\mid C)=0.04$ .
The gambler takes a die from the box at random and rolls it. Find the conditional probability that it is of type $B$ given that it gives a 6. |
Prior $\times$ likelihood:
$$
\begin{aligned}
\operatorname{Pr}(x)\operatorname{Pr}(y\mid x) &= \binom{5}{x}0.6^{x}0.4^{5-x}\binom{x}{y}0.3^{y}0.7^{x-y} \\
&= \frac{5!}{(5-x)!\,y!\,(x-y)!}\,0.4^{5}\left(\frac{0.6\times0.7}{0.4}\right)^{x}\left(\frac{0.3}{0.7}\right)^{y} \\
&\propto \frac{1.05^{x}}{(5-x)!\,(x-y)!}\qquad\text{(for fixed $y$)}
\end{aligned}
$$
Evaluating at $y=2$:
$$
\begin{array}{r c l}
\operatorname{Pr}(x=0)\operatorname{Pr}(y=2\mid x=0) &=& 0 \\
\operatorname{Pr}(x=1)\operatorname{Pr}(y=2\mid x=1) &=& 0 \\
\operatorname{Pr}(x=2)\operatorname{Pr}(y=2\mid x=2) &\propto& 0.18375 \\
\operatorname{Pr}(x=3)\operatorname{Pr}(y=2\mid x=3) &\propto& 0.57881 \\
\operatorname{Pr}(x=4)\operatorname{Pr}(y=2\mid x=4) &\propto& 0.60775 \\
\operatorname{Pr}(x=5)\operatorname{Pr}(y=2\mid x=5) &\propto& 0.21271 \\
\operatorname{Pr}(y=2) &\propto& 1.58303
\end{array}
$$
$$
\operatorname*{Pr}(x=j\mid y=2)={\frac{\operatorname*{Pr}(x=j)\operatorname*{Pr}(y=2\mid x=j)}{\operatorname*{Pr}(y=2)}}
$$ | In a forest area of Northern Europe there may be wild lynx. At a particular time the number $X$ of lynx can be between 0 and 5 with
$$
\operatorname*{Pr}(X=x)={\left(\begin{array}{l}{5}\\ {x}\end{array}\right)}\,0.6^{x}0.4^{5-x}\quad(x=0,\ldots,5).
$$
A survey is made but the lynx is difficult to spot and, given that the number present is $x$ ,the number $Y$ observed has a probability distribution with
$$
\operatorname*{Pr}(Y=y\mid X=x)=\left\{\begin{array}{l l}{{\left(\begin{array}{l}{{x}}\\ {{y}}\end{array}\right)0.3^{y}0.7^{x-y}}}&{{(0\leq y\leq x)}}\\ {{0}}&{{(x<y)}}\end{array}\right..
$$
Find the conditional probability distribution of $X$ given that $Y=2$ .
(That is, find $\operatorname*{Pr}(X=0\mid Y=2),\ldots,\operatorname*{Pr}(X=5\mid Y=2))$ |
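As a numerical check on the lynx posterior above, a minimal R sketch using the binomial prior and observation model stated in the problem; it reproduces the proportional values and prints the final conditional probabilities:

x <- 0:5
prior <- dbinom(x, 5, 0.6)    # Pr(X = x)
like  <- dbinom(2, x, 0.3)    # Pr(Y = 2 | X = x); zero for x < 2
post  <- prior * like / sum(prior * like)
round(post, 5)   # approx. 0, 0, 0.11608, 0.36564, 0.38392, 0.13437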
Notation:
$M$: migration started; $\bar{M}$: migration not started; $W$: no fish seen in 60 minutes.
Prior: $\mathrm{Pr}(M)=0.4,\ \mathrm{Pr}(\bar{M})=0.6$
(a) Likelihood:
$$
{\begin{array}{l l l}{\operatorname*{Pr}(W\mid M)}&{=}&{e^{-60/20}=e^{-3}=0.04979}\\ {\operatorname*{Pr}(W\mid{\bar{M}})}&{=}&{1}\end{array}}
$$
Prior $\times$ likelihood:
$$
{\begin{array}{r l r l}{\operatorname*{Pr}(M)\operatorname*{Pr}(W\mid M)}&{=0.4\times0.04979=}&{0.01991}\\ {\operatorname*{Pr}({\bar{M}})\operatorname*{Pr}(W\mid{\bar{M}})}&{=0.6\times1=}&{\underline{{0.6}}}\\ {\operatorname*{Pr}(W)}&{=0.01991+0.6=}&{0.61991}\end{array}}
$$
$$
\operatorname*{Pr}({\bar{M}}\mid W)={\frac{0.6}{0.61991}}=\underline{{0.9679}}
$$
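A one-line R check of part (a), assuming the exponential(1/20) waiting-time model above:

prior <- c(M = 0.4, notM = 0.6)
like  <- c(exp(-60/20), 1)                   # Pr(W | M), Pr(W | not M)
(prior * like / sum(prior * like))["notM"]   # approx. 0.9679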
(b) We require
$$
\begin{array}{r c l}{{\frac{0.6}{0.4e^{-t/20}+0.6}}}&{{=}}&{{0.9}}\\ {{0.6}}&{{=}}&{{0.36e^{-t/20}+0.54}}\\ {{0.06}}&{{=}}&{{0.36e^{-t/20}}}\\ {{e^{-t/20}}}&{{=}}&{{1/6}}\\ {{t/20}}&{{=}}&{{\log6}}\\ {{t}}&{{=}}&{{20\log6=35.8\ \text{minutes}}}\end{array}
$$ | A particular species of fish makes an annual migration up a river. On a particular day there is a probability of 0.4 that the migration will start. If it does then an observer will have to wait $T$ minutes before seeing a fish, where $T$ has an exponential distribution with mean 20 (i.e. an exponential(0.05) distribution). If the migration has not started then no fish will be seen.
(a) Find the conditional probability that the migration has not started given that no fish has been seen after one hour.
(b) How long does the observer have to wait without seeing a fish to be 90% sure that the migration has not started? |
(a) i.
$$
\begin{array}{r c l}{{\displaystyle\int_{0}^{\infty}f^{(0)}(\lambda)~d\lambda}}&{{=}}&{{\displaystyle k_{0}\left\{\int_{0}^{\infty}e^{-\lambda}~d\lambda+\int_{0}^{\infty}\lambda e^{-\lambda}~d\lambda\right\}}}\\ {{\displaystyle}}&{{=}}&{{\displaystyle k_{0}\{1+1\}=2k_{0}}}\end{array}
$$
Hence $k_{0}=1/2$ .
ii.
$$
\begin{array}{l l l}{\displaystyle\mathrm{E}_{0}(\lambda)=\int_{0}^{\infty}\lambda f^{(0)}(\lambda)\ d\lambda}&{=}&{\displaystyle\frac{1}{2}\left\{\int_{0}^{\infty}\lambda e^{-\lambda}\ d\lambda+\int_{0}^{\infty}\lambda^{2}e^{-\lambda}\ d\lambda\right\}}\\ &{=}&{\displaystyle\frac{1}{2}\{1+2\}=\frac{3}{2}=1.5}\end{array}
$$
iii.
$$
\begin{array}{l l l}{\displaystyle\mathrm{E}_{0}(\lambda^{2})=\int_{0}^{\infty}\lambda^{2}f^{(0)}(\lambda)\ d\lambda}&{=}&{\displaystyle\frac{1}{2}\left\{\int_{0}^{\infty}\lambda^{2}e^{-\lambda}\ d\lambda+\int_{0}^{\infty}\lambda^{3}e^{-\lambda}\ d\lambda\right\}}\\ {\displaystyle}&{=}&{\displaystyle\frac{1}{2}\{2+6\}=4}\end{array}
$$
So
$$
\operatorname{var}_{0}(\lambda)=4-\left({\frac{3}{2}}\right)^{2}={\frac{16-9}{4}}={\frac{7}{4}}
$$
and
$$
\operatorname{std.dev}_{0}(\lambda)={\sqrt{\frac{7}{4}}}={\frac{\sqrt{7}}{2}}=1.323.
$$
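These prior quantities can be confirmed numerically in R (a sketch, integrating the density $f^{(0)}(\lambda)=\tfrac12(1+\lambda)e^{-\lambda}$ from above):

f0 <- function(l) 0.5 * (1 + l) * exp(-l)
integrate(f0, 0, Inf)$value                            # 1, so k0 = 1/2
m1 <- integrate(function(l) l   * f0(l), 0, Inf)$value # 1.5
m2 <- integrate(function(l) l^2 * f0(l), 0, Inf)$value # 4
c(mean = m1, sd = sqrt(m2 - m1^2))                     # 1.5 and 1.323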
(b) i. Likelihood
$$
L=\prod_{i=1}^{n}{\frac{e^{-\lambda}\lambda^{x_{i}}}{x_{i}!}}={\frac{e^{-n\lambda}\lambda^{\sum x_{i}}}{\prod x_{i}!}}={\frac{e^{-n\lambda}\lambda^{S}}{\prod x_{i}!}}
$$
where
$$
S=\sum_{i=1}^{n}x_{i}.
$$
ii. Posterior density proportional to
$$
\begin{array}{l l l}{{f^{(0)}(\lambda)L}}&{{\propto}}&{{(1+\lambda)e^{-\lambda}e^{-n\lambda}\lambda^{S}}}\\ {{}}&{{\propto}}&{{e^{-(n+1)\lambda}\lambda^{S}+e^{-(n+1)\lambda}\lambda^{S+1}}}\end{array}
$$
The posterior density is
$$
f^{(1)}(\lambda)=k_{1}\left\{e^{-(n+1)\lambda}\lambda^{S}+e^{-(n+1)\lambda}\lambda^{S+1}\right\}
$$
where
$$
\int_{0}^{\infty}f^{(1)}(\lambda)\ d\lambda=1=k_{1}\left\{\frac{\Gamma(S+1)}{(n+1)^{S+1}}+\frac{\Gamma(S+2)}{(n+1)^{S+2}}\right\}.
$$
Hence
$$
k_{1}=\left\{\frac{\Gamma(S+1)}{(n+1)^{S+1}}+\frac{\Gamma(S+2)}{(n+1)^{S+2}}\right\}^{-1}
$$
and
$$
f^{(1)}(\lambda)=\left\{\frac{\Gamma(S+1)}{(n+1)^{S+1}}+\frac{\Gamma(S+2)}{(n+1)^{S+2}}\right\}^{-1}\left\{e^{-(n+1)\lambda}\lambda^{S}+e^{-(n+1)\lambda}\lambda^{S+1}\right\}.
$$
iii. Posterior mean
$$
\begin{array}{r c l}{\operatorname{E}_{1}(\lambda)}&{=}&{\displaystyle\int_{0}^{\infty}\lambda f^{(1)}(\lambda)~d\lambda}\\ &{=}&{k_{1}\left\{\int_{0}^{\infty}\lambda^{S+1}e^{-(n+1)\lambda}~d\lambda+\int_{0}^{\infty}\lambda^{S+2}e^{-(n+1)\lambda}~d\lambda\right\}}\\ &{=}&{k_{1}\left\{\frac{\Gamma(S+2)}{(n+1)^{S+2}}+\frac{\Gamma(S+3)}{(n+1)^{S+3}}\right\}}\\ &{=}&{\displaystyle\frac{\frac{S+1}{n+1}+\frac{(S+1)(S+2)}{(n+1)^{2}}}{1+\frac{S+1}{n+1}}}\end{array}
$$ | We are interested in the mean, $\lambda$ , of a Poisson distribution. We have a prior distribution for $\lambda$ with density
$$
f^{(0)}(\lambda)=\left\{\begin{array}{c c}{{0}}&{{(\lambda\leq0)}}\\ {{k_{0}(1+\lambda)e^{-\lambda}}}&{{(\lambda>0)}}\end{array}\right..
$$
(a) i. Find the value of $k_{0}$ .
ii. Find the prior mean of $\lambda$ .
iii. Find the prior standard deviation of $\lambda$ .
(b) We observe data $x_{1},\ldots,x_{n}$ where, given $\lambda$ , these are independent observations from the Poisson $(\lambda)$ distribution.
i. Find the likelihood.
ii. Find the posterior density of $\lambda$ .
iii. Find the posterior mean of $\lambda$ . |
(a) i.
$$
\begin{array}{r c l}{{\displaystyle\int_{0}^{1}f^{(0)}(\theta)~d\theta}}&{{=}}&{{\displaystyle k_{0}\left\{\int_{0}^{1}\theta^{2}(1-\theta)~d\theta+\int_{0}^{1}\theta(1-\theta)^{2}~d\theta\right\}}}\\ {{\displaystyle}}&{{=}}&{{\displaystyle k_{0}\left\{\frac{\Gamma(3)\Gamma(2)}{\Gamma(5)}+\frac{\Gamma(2)\Gamma(3)}{\Gamma(5)}\right\}}}\\ {{\displaystyle}}&{{=}}&{{\displaystyle2k_{0}\frac{\Gamma(3)\Gamma(2)}{\Gamma(5)}=2k_{0}\frac{2!1!}{4!}=\frac{k_{0}}{6}}}\end{array}
$$
Hence $k_{0}=6$ .
ii. Prior mean
$$
{\begin{array}{r c l}{\operatorname{E}_{0}(\theta)}&{=}&{\displaystyle\int_{0}^{1}\theta f^{(0)}(\theta)\ d\theta=k_{0}\left\{\int_{0}^{1}\theta^{3}(1-\theta)\ d\theta+\int_{0}^{1}\theta^{2}(1-\theta)^{2}\ d\theta\right\}}\\ &{=}&{k_{0}\left\{{\frac{\Gamma(4)\Gamma(2)}{\Gamma(6)}}+{\frac{\Gamma(3)\Gamma(3)}{\Gamma(6)}}\right\}}\\ &{=}&{k_{0}\left\{{\frac{3!1!+2!2!}{5!}}\right\}=k_{0}\left\{{\frac{6+4}{120}}\right\}={\frac{1}{2}}}\end{array}}
$$
iii.
$$
\begin{array}{l l l}{\mathrm{E}_{0}(\theta^{2})}&{=}&{\displaystyle\int_{0}^{1}\theta^{2}f^{(0)}(\theta)~d\theta}\\ &{=}&{\displaystyle k_{0}\left\{\int_{0}^{1}\theta^{4}(1-\theta)~d\theta+\int_{0}^{1}\theta^{3}(1-\theta)^{2}~d\theta\right\}}\\ &{=}&{\displaystyle k_{0}\left\{\frac{\Gamma(5)\Gamma(2)}{\Gamma(7)}+\frac{\Gamma(4)\Gamma(3)}{\Gamma(7)}\right\}}\\ &{=}&{\displaystyle k_{0}\left\{\frac{4!\,1!+3!\,2!}{6!}\right\}=6\left\{\frac{24+12}{720}\right\}=\frac{3}{10}}\end{array}
$$
Hence
$$
\operatorname{var}_{0}(\theta)={\frac{3}{10}}-\left({\frac{1}{2}}\right)^{2}={\frac{6-5}{20}}={\frac{1}{20}}
$$
and
$$
\operatorname{std.dev}_{0}(\theta)={\frac{1}{\sqrt{20}}}=0.2236.
$$
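Again these prior quantities can be checked numerically in R (a sketch, using the prior density with $k_0=6$):

f0 <- function(t) 6 * (t^2 * (1 - t) + t * (1 - t)^2)
integrate(f0, 0, 1)$value                            # 1
m1 <- integrate(function(t) t   * f0(t), 0, 1)$value # 0.5
m2 <- integrate(function(t) t^2 * f0(t), 0, 1)$value # 0.3
c(mean = m1, sd = sqrt(m2 - m1^2))                   # 0.5 and 0.2236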
(b) i. Likelihood
$$
L=\left(\begin{array}{c}{{n}}\\ {{x}}\end{array}\right)\theta^{x}(1-\theta)^{n-x}.
$$
ii. Posterior density $f^{(1)}(\theta)$ proportional to $f^{(0)}(\theta)L$ . Hence
$$
\begin{array}{l c l}{{f^{(1)}(\theta)}}&{{=}}&{{k_{1}\left\{\theta^{2}(1-\theta)+\theta(1-\theta)^{2}\right\}\theta^{x}(1-\theta)^{n-x}}}\\ {{}}&{{=}}&{{k_{1}\left\{\theta^{x+2}(1-\theta)^{n-x+1}+\theta^{x+1}(1-\theta)^{n-x+2}\right\}}}\end{array}
$$
Now
$$
\begin{array}{r c l}{{\displaystyle\int_{0}^{1}f^{(1)}(\theta)~d\theta=1}}&{{=}}&{{\displaystyle k_{1}\left\{\int_{0}^{1}\theta^{x+2}(1-\theta)^{n-x+1}~d\theta+\int_{0}^{1}\theta^{x+1}(1-\theta)^{n-x+2}~d\theta\right\}}}\\ {{}}&{{=}}&{{\displaystyle k_{1}\left\{\frac{\Gamma(x+3)\Gamma(n-x+2)}{\Gamma(n+5)}+\frac{\Gamma(x+2)\Gamma(n-x+3)}{\Gamma(n+5)}\right\}.}}\end{array}
$$
Hence
$$
k_{1}=\left\{\frac{\Gamma(x+3)\Gamma(n-x+2)}{\Gamma(n+5)}+\frac{\Gamma(x+2)\Gamma(n-x+3)}{\Gamma(n+5)}\right\}^{-1}
$$
and
$$
\begin{array}{r c l}{{f^{(1)}(\theta)}}&{{=}}&{{\displaystyle\left\{\frac{\Gamma(x+3)\Gamma(n-x+2)}{\Gamma(n+5)}+\frac{\Gamma(x+2)\Gamma(n-x+3)}{\Gamma(n+5)}\right\}^{-1}}}\\ {{}}&{{}}&{{\times\left\{\theta^{x+2}(1-\theta)^{n-x+1}+\theta^{x+1}(1-\theta)^{n-x+2}\right\}.}}\end{array}
$$
iii. Posterior mean
$$
\begin{aligned}
\operatorname{E}_{1}(\theta) &= \int_{0}^{1}\theta f^{(1)}(\theta)\;d\theta \\
&= k_{1}\left\{\int_{0}^{1}\theta^{x+3}(1-\theta)^{n-x+1}\;d\theta+\int_{0}^{1}\theta^{x+2}(1-\theta)^{n-x+2}\;d\theta\right\} \\
&= k_{1}\left\{\frac{\Gamma(x+4)\Gamma(n-x+2)}{\Gamma(n+6)}+\frac{\Gamma(x+3)\Gamma(n-x+3)}{\Gamma(n+6)}\right\} \\
&= \frac{\Gamma(x+4)\Gamma(n-x+2)+\Gamma(x+3)\Gamma(n-x+3)}{\Gamma(x+3)\Gamma(n-x+2)+\Gamma(x+2)\Gamma(n-x+3)}\cdot\frac{\Gamma(n+5)}{\Gamma(n+6)} \\
&= \frac{(n+5)\,\Gamma(x+3)\Gamma(n-x+2)}{(n+4)\,\Gamma(x+2)\Gamma(n-x+2)}\cdot\frac{1}{n+5} \\
&= \frac{x+2}{n+4}
\end{aligned}
$$ | We are interested in the parameter, $\theta$ , of a binomial $(n,\theta)$ distribution. We have a prior distribution for $\theta$ with density
$$
f^{(0)}(\theta)=\left\{\begin{array}{c c}{k_{0}\{\theta^{2}(1-\theta)+\theta(1-\theta)^{2}\}}&{(0<\theta<1)}\\ {0}&{(\mathrm{otherwise})}\end{array}\right..
$$
(a) i. Find the value of $k_{0}$ .
ii. Find the prior mean of $\theta$ .
iii. Find the prior standard deviation of $\theta$ .
(b) We observe $x$ , an observation from the binomial $(n,\theta)$ distribution.
i. Find the likelihood.
ii. Find the posterior density of $\theta$ .
iii. Find the posterior mean of $\theta$ . |
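A numerical check of the posterior mean $(x+2)/(n+4)$ derived in the solution above, for illustrative values $n=10$, $x=4$ (these particular numbers are not from the problem):

n <- 10; x <- 4
f1 <- function(t) t^(x + 2) * (1 - t)^(n - x + 1) + t^(x + 1) * (1 - t)^(n - x + 2)
post.mean <- integrate(function(t) t * f1(t), 0, 1)$value /
             integrate(f1, 0, 1)$value
c(post.mean, (x + 2) / (n + 4))   # both approx. 0.42857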
(a) i.
$$
\int_{0}^{1}f^{(0)}(\theta)\ d\theta=k_{0}\int_{0}^{1}\theta^{2}(1-\theta)^{3}\ d\theta=k_{0}{\frac{\Gamma(3)\Gamma(4)}{\Gamma(7)}}.
$$
Hence
$$
k_{0}={\frac{\Gamma(7)}{\Gamma(3)\Gamma(4)}}={\frac{6!}{2!3!}}={\frac{6\times5\times4}{2}}={\underline{{60}}}.
$$
ii. Prior mean
$$
\begin{array}{l l l}{{\mathrm{E}_{0}(\theta)}}&{{=}}&{{\displaystyle\int_{0}^{1}\theta f^{(0)}(\theta)~d\theta=k_{0}\int_{0}^{1}\theta^{3}(1-\theta)^{3}~d\theta}}\\ {{}}&{{=}}&{{\displaystyle\frac{\Gamma(4)\Gamma(4)}{\Gamma(8)}\frac{\Gamma(7)}{\Gamma(3)\Gamma(4)}=\frac{3}{7}=\underline{{{0.4286}}}.}}\end{array}
$$
iii.
$$
\begin{array}{l l l}{{\mathrm{E}_{0}(\theta^{2})}}&{{=}}&{{\displaystyle\int_{0}^{1}\theta^{2}f^{(0)}(\theta)\ d\theta=k_{0}\int_{0}^{1}\theta^{4}(1-\theta)^{3}\ d\theta}}\\ {{}}&{{=}}&{{\displaystyle\frac{\Gamma(5)\Gamma(4)}{\Gamma(9)}\frac{\Gamma(7)}{\Gamma(3)\Gamma(4)}=\frac{4\times3}{8\times7}=\frac{3}{14}}}\end{array}
$$
Hence
$$
\operatorname{var}_{0}(\theta)={\frac{3}{14}}-\left({\frac{3}{7}}\right)^{2}={\frac{21-18}{98}}={\frac{3}{98}}
$$
and
$$
\mathrm{std.dev}_{0}(\theta)=\sqrt{\frac{3}{98}}=\underline{{0.1750}}.
$$
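Equivalently, the prior here is exactly the beta(3, 4) density, so a short R sketch confirms the moments:

# k0 * theta^2 * (1-theta)^3 with k0 = 60 is the beta(3, 4) density
all.equal(60 * 0.3^2 * (1 - 0.3)^3, dbeta(0.3, 3, 4))   # TRUE
a <- 3; b <- 4
c(mean = a / (a + b),
  sd = sqrt(a * b / ((a + b)^2 * (a + b + 1))))   # 0.4286 and 0.1750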
(b) i. Likelihood
$$
L(\theta)=\left(\begin{array}{l}{n}\\ {x}\end{array}\right)\theta^{x}(1-\theta)^{n-x}.
$$
ii. Posterior density
$$
f^{(1)}(\theta)\propto f^{(0)}(\theta)L(\theta)=k_{1}\theta^{x+2}(1-\theta)^{n-x+3}.
$$
Now
$$
\int_{0}^{1}f^{(1)}(\theta)\ d\theta=1=k_{1}\int_{0}^{1}\theta^{x+2}(1-\theta)^{n-x+3}\ d\theta=k_{1}{\frac{\Gamma(x+3)\Gamma(n-x+4)}{\Gamma(n+7)}}.
$$
Hence
$$
k_{1}=\frac{\Gamma(n+7)}{\Gamma(x+3)\Gamma(n-x+4)}
$$
and
$$
f^{(1)}(\theta)=\frac{\Gamma(n+7)}{\Gamma(x+3)\Gamma(n-x+4)}\theta^{x+2}(1-\theta)^{n-x+3}\qquad(0<\theta<1).
$$
iii. Posterior mean
$$
\begin{array}{l l l}{\displaystyle\mathrm{E}_{1}(\theta)}&{=}&{\displaystyle\int_{0}^{1}\theta f^{(1)}(\theta)\;d\theta=k_{1}\int_{0}^{1}\theta^{x+3}(1-\theta)^{n-x+3}\;d\theta}\\ {\displaystyle}&{=}&{\displaystyle\frac{\Gamma(n+7)}{\Gamma(x+3)\Gamma(n-x+4)}\frac{\Gamma(x+4)\Gamma(n-x+4)}{\Gamma(n+8)}=\frac{x+3}{n+7}}\end{array}
$$ | We are interested in the parameter $\theta$ , of a binomial $(n,\theta)$ distribution. We have a prior distribution for $\theta$ with density
$$
f^{(0)}(\theta)=\left\{\begin{array}{c c}{k_{0}\theta^{2}(1-\theta)^{3}}&{(0<\theta<1)}\\ {0}&{(\mathrm{otherwise})}\end{array}\right..
$$
(a) i. Find the value of $k_{0}$ .
ii. Find the prior mean of $\theta$ .
iii. Find the prior standard deviation of $\theta$ .
(b) We observe $x$ , an observation from the binomial $(n,\theta)$ distribution.
i. Find the likelihood.
ii. Find the posterior density of $\theta$ .
iii. Find the posterior mean of $\theta$ . |
(a) i. Value of $k_{0}$ :
$$
\int_{0}^{\infty}\lambda^{3}e^{-\lambda}\ d\lambda=\int_{0}^{\infty}\lambda^{4-1}e^{-\lambda}\ d\lambda=\Gamma(4)=3!=6
$$
Hence
$$
k_{0}=\frac{1}{6}.
$$
ii. Prior mean:
(1 mark)
$$
\begin{array}{l l l}{\displaystyle\mathrm{E}_{0}(\lambda)}&{=}&{\displaystyle\int_{0}^{\infty}\lambda k_{0}\lambda^{3}e^{-\lambda}\ d\lambda}\\ {\displaystyle}&{=}&{k_{0}\int_{0}^{\infty}\lambda^{5-1}e^{-\lambda}\ d\lambda}\\ {\displaystyle}&{=}&{k_{0}\Gamma(5)=\frac{4!}{3!}=4}\end{array}
$$
iii. Prior std.dev.:
(1 mark)
$$
\begin{array}{l l l}{\displaystyle\mathrm{E}_{0}(\lambda^{2})}&{=}&{\displaystyle\int_{0}^{\infty}\lambda^{2}k_{0}\lambda^{3}e^{-\lambda}\ d\lambda}\\ {\displaystyle}&{=}&{k_{0}\int_{0}^{\infty}\lambda^{6-1}e^{-\lambda}\ d\lambda}\\ {\displaystyle}&{=}&{k_{0}\Gamma(6)=\frac{5!}{3!}=20}\end{array}
$$
Hence $\mathrm{var}_{0}(\lambda)=\mathrm{E}_{0}(\lambda^{2})-[\mathrm{E}_{0}(\lambda)]^{2}=20-16=4$ and prior sd is
$$
{\sqrt{4}}=2.
$$
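As a check, this prior is the gamma(4, 1) density, so the moments follow directly from the gamma facts $a/b$ and $\sqrt{a}/b$ (a minimal R sketch):

# k0 * lambda^3 * exp(-lambda) with k0 = 1/6 is the gamma(4, 1) density
all.equal((1/6) * 2^3 * exp(-2), dgamma(2, shape = 4, rate = 1))   # TRUE
c(mean = 4 / 1, sd = sqrt(4) / 1)   # 4 and 2, as above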
(b) i. Likelihood:
$$
L=\prod_{i=1}^{n}{\frac{e^{-\lambda}\lambda^{x_{i}}}{x_{i}!}}={\frac{e^{-n\lambda}\lambda^{\sum x_{i}}}{\prod x_{i}!}}
$$
(2 marks)
ii. Posterior density proportional to
(1 mark)
$$
\lambda^{3}e^{-\lambda}\times e^{-n\lambda}\lambda^{\sum x_{i}}
$$
That is
$$
\lambda^{\sum x_{i}+4-1}e^{-(n+1)\lambda}
$$
Now
$$
\int_{0}^{\infty}\lambda^{a-1}e^{-b\lambda}~d\lambda=\frac{\Gamma(a)}{b^{a}}
$$
Hence the posterior density is
$$
\frac{(n+1)^{\sum x_{i}+4}}{\Gamma(\sum x_{i}+4)}\lambda^{\sum x_{i}+4-1}e^{-(n+1)\lambda}
$$
iii. To find the posterior mean, increase the power of $\lambda$ by 1 and integrate. The posterior mean is
$$
{\frac{(n+1)^{\sum x_{i}+4}}{\Gamma(\sum x_{i}+4)}}\cdot{\frac{\Gamma(\sum x_{i}+5)}{(n+1)^{\sum x_{i}+5}}}={\frac{\sum x_{i}+4}{n+1}}
$$ | We are interested in the parameter $\lambda$ of a Poisson(λ) distribution. We have a prior distribution for $\lambda$ with density
$$
f^{(0)}(\lambda)=\left\{\begin{array}{l l}{{0}}&{{(\lambda<0)}}\\ {{k_{0}\lambda^{3}e^{-\lambda}}}&{{(\lambda\geq0)}}\end{array}\right..
$$
(a) i. Find the value of $k_{0}$ .
ii. Find the prior mean of $\lambda$ .
iii. Find the prior standard deviation of $\lambda$ .
(b) We observe $x_{1},\ldots,x_{n}$ which are independent observations from the Poisson(λ) distribution.
i. Find the likelihood function.
ii. Find the posterior density of $\lambda$ .
iii. Find the posterior mean of $\lambda$ . |
(a) i. The expression given is proportional to the prior density since
$$
\begin{array}{r c l}{{\displaystyle\frac{\Gamma(6)}{\Gamma(2)\Gamma(4)}}}&{{=}}&{{\displaystyle\frac{5!}{1!\,3!}=20}}\\ {{\mathrm{and~}\;\displaystyle\frac{\Gamma(2)}{\Gamma(1)\Gamma(1)}}}&{{=}}&{{1}}\end{array}
$$
Now we only need to show that
$$
\int_{0}^{1}\frac{1}{2}\left\{\frac{\Gamma(6)}{\Gamma(2)\Gamma(4)}\theta^{2-1}(1-\theta)^{4-1}+\frac{\Gamma(2)}{\Gamma(1)\Gamma(1)}\theta^{1-1}(1-\theta)^{1-1}\right\}\;d\theta=1
$$
and this follows since
$$
\begin{array}{r c l}{{\displaystyle\int_{0}^{1}\theta^{2-1}(1-\theta)^{4-1}\ d\theta}}&{{=}}&{{\displaystyle\frac{\Gamma(2)\Gamma(4)}{\Gamma(6)}}}\\ {{\mathrm{and}}}&{{\displaystyle\int_{0}^{1}\theta^{1-1}(1-\theta)^{1-1}\ d\theta}}&{{=}}&{{\displaystyle\frac{\Gamma(1)\Gamma(1)}{\Gamma(2)}.}}\end{array}
$$
(1 mark)
ii. Prior mean:
$$
\begin{array}{r c l}{{\mathrm{E}_{0}(\theta)}}&{{=}}&{{\displaystyle\int_{0}^{1}\theta f^{(0)}(\theta)~d\theta}}\\ {{}}&{{=}}&{{\displaystyle\frac{1}{2}\int_{0}^{1}\left\{\frac{\Gamma(6)}{\Gamma(2)\Gamma(4)}\theta^{3-1}(1-\theta)^{4-1}+\frac{\Gamma(2)}{\Gamma(1)\Gamma(1)}\theta^{2-1}(1-\theta)^{1-1}\right\}~d\theta}}\\ {{}}&{{=}}&{{\displaystyle\frac{1}{2}\left\{\frac{\Gamma(6)}{\Gamma(2)\Gamma(4)}\frac{\Gamma(3)\Gamma(4)}{\Gamma(7)}+\frac{\Gamma(2)}{\Gamma(1)\Gamma(1)}\frac{\Gamma(2)\Gamma(1)}{\Gamma(3)}\right\}}}\\ {{}}&{{=}}&{{\displaystyle\frac{1}{2}\left\{\frac{2}{6}+\frac{1}{2}\right\}=\frac{5}{12}=\underline{0.4167}}}\end{array}
$$
(2 marks)
iii. Prior std. dev.:
$$
\begin{array}{r c l}{{\mathrm{E}_{0}(\theta^{2})}}&{{=}}&{{\displaystyle\int_{0}^{1}\theta^{2}f^{(0)}(\theta)~d\theta}}\\ {{}}&{{=}}&{{\displaystyle\frac{1}{2}\int_{0}^{1}\left\{\frac{\Gamma(6)}{\Gamma(2)\Gamma(4)}\theta^{4-1}(1-\theta)^{4-1}+\frac{\Gamma(2)}{\Gamma(1)\Gamma(1)}\theta^{3-1}(1-\theta)^{1-1}\right\}~d\theta}}\\ {{}}&{{=}}&{{\displaystyle\frac{1}{2}\left\{\frac{\Gamma(6)}{\Gamma(2)\Gamma(4)}\frac{\Gamma(4)\Gamma(4)}{\Gamma(8)}+\frac{\Gamma(2)}{\Gamma(1)\Gamma(1)}\frac{\Gamma(3)\Gamma(1)}{\Gamma(4)}\right\}}}\\ {{}}&{{=}}&{{\displaystyle\frac{1}{2}\left\{\frac{3\times2}{7\times6}+\frac{1}{3}\right\}=\frac{5}{21}}}\end{array}
$$
Hence
$$
\operatorname{var}_{0}(\theta)={\frac{5}{21}}-\left({\frac{5}{12}}\right)^{2}=0.06448
$$
so the prior std. dev. is
$$
{\sqrt{0.06448}}=\underline{{0.2539}}.
$$
(b) i. Likelihood:
$$
L=\left(\begin{array}{c}{{10}}\\ {{4}}\end{array}\right)\theta^{4}(1-\theta)^{6}
$$
(1 mark)
ii. Posterior density:
Posterior propto Prior $\times$ Likelihood
$$
f^{(1)}(\theta)=k_{1}\left\{\frac{\Gamma(6)}{\Gamma(2)\Gamma(4)}\theta^{6-1}(1-\theta)^{10-1}+\frac{\Gamma(2)}{\Gamma(1)\Gamma(1)}\theta^{5-1}(1-\theta)^{7-1}\right\}
$$
$$
\begin{array}{l c l}{\displaystyle\int_{0}^{1}{f^{(1)}(\theta)\;d\theta}}&{=}&{\displaystyle k_{1}\left\{\frac{\Gamma(6)}{\Gamma(2)\Gamma(4)}\frac{\Gamma(6)\Gamma(10)}{\Gamma(16)}+\frac{\Gamma(2)}{\Gamma(1)\Gamma(1)}\frac{\Gamma(5)\Gamma(7)}{\Gamma(12)}\right\}}\\ &{=}&{\displaystyle k_{1}\left\{\frac{5\times4\times5\times4\times3\times2}{15\times14\times13\times12\times11\times10}+\frac{4\times3\times2}{11\times10\times9\times8\times7}\right\}}\\ &{=}&{\displaystyle k_{1}\left\{\frac{2}{7\times13\times3\times11}+\frac{1}{11\times10\times3\times7}\right\}}\\ &{=}&{\displaystyle\frac{k_{1}}{3\times11\times7}\left\{\frac{2}{13}+\frac{1}{10}\right\}}\\ &{=}&{\displaystyle\frac{k_{1}}{3\times11\times7}\left\{\frac{33}{130}\right\}=\frac{k_{1}}{7\times130}}\end{array}
$$
Hence $k_{1}=7\times130=910$ .
Posterior density:
$$
f^{(1)}(\theta)=910\left\{20\theta^{6-1}(1-\theta)^{10-1}+\theta^{5-1}(1-\theta)^{7-1}\right\}
$$
(2 marks)
iii. Posterior mean:
$$
\begin{array}{l l l}{\mathrm{E}_{1}(\theta)}&{=}&{\displaystyle\int_{0}^{1}\theta\,f^{(1)}(\theta)\;d\theta}\\ &{=}&{\displaystyle910\int_{0}^{1}\left\{20\theta^{7-1}(1-\theta)^{10-1}+\theta^{6-1}(1-\theta)^{7-1}\right\}\;d\theta}\\ &{=}&{\displaystyle910\left\{20\frac{\Gamma(7)\Gamma(10)}{\Gamma(17)}+\frac{\Gamma(6)\Gamma(7)}{\Gamma(13)}\right\}}\\ &{=}&{\displaystyle910\left\{\frac{20\times6\times5\times4\times3\times2}{16\times15\times14\times13\times12\times11\times10}+\frac{5\times4\times3\times2}{12\times11\times10\times9\times8\times7}\right\}}\\ &{=}&{\displaystyle910\left\{\frac{1}{14\times13\times11\times2}+\frac{1}{11\times9\times8\times7}\right\}=0.3914}\end{array}
$$
(2 marks)
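A numeric check of the normalising constant and the posterior mean (a sketch integrating the posterior density found above):

f1 <- function(t) 910 * (20 * t^5 * (1 - t)^9 + t^4 * (1 - t)^6)
integrate(f1, 0, 1)$value                       # 1, confirming k1 = 910
integrate(function(t) t * f1(t), 0, 1)$value    # 0.3914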
iv. Plot: suitable R commands:
> theta<-seq(0,1,0.01)
> prior<-0.5*(20*theta*((1-theta)^3)+1)
> post<-910*(20*(theta^5)*((1-theta)^9)+(theta^4)*((1-theta)^6))
> plot(theta,post,type="l",xlab=expression(theta),ylab="Density")
> lines(theta,prior,lty=2)
Our prior density for $\theta$ is
$$
f^{(0)}(\theta)=\left\{\begin{array}{l l}{k_{0}(20\theta(1-\theta)^{3}+1)}&{(0\leq\theta\leq1)}\\ {0}&{\mathrm{(otherwise)}}\end{array}\right..
$$
(a) i. Show that, for $0\leq\theta\leq1$ , the prior density can be written as
$$
f^{(0)}(\theta)=\frac{1}{2}\left\{\frac{\Gamma(6)}{\Gamma(2)\Gamma(4)}\theta^{2-1}(1-\theta)^{4-1}+\frac{\Gamma(2)}{\Gamma(1)\Gamma(1)}\theta^{1-1}(1-\theta)^{1-1}\right\}.
$$
ii. Find the prior mean of $\theta$ .
iii. Find the prior standard deviation of $\theta$ .
(b) i. Find the likelihood function.
ii. Find the posterior density of $\theta$ .
iii. Find the posterior mean of $\theta$ .
iv. Use R to plot a graph showing both the prior and posterior densities of $\theta$ . (Hint: It is easier to get the vertical axis right if you plot the posterior density and then superimpose the prior density, rather than the other way round.) |
(a) In the prior $a=1.5$ and $b=1.5$ . So the mean is
$$
{\frac{a}{a+b}}={\frac{1.5}{3.0}}=\underline{{0.5}}.
$$
The variance is
$$
\frac{a b}{(a+b)^{2}(a+b+1)}=\frac{1.5\times1.5}{3^{2}\times4}=\frac{1}{16}
$$
so the standard deviation is
$$
\frac{1}{4}=\underline{{0.25}}.
$$
(b) Using R the prior probability that $\theta<0.6$ is 0.62647.
> pbeta(0.6,1.5,1.5) [1] 0.62647
(c) The likelihood is
$$
\left(\begin{array}{l}{50}\\ {37}\end{array}\right)\theta^{37}(1-\theta)^{13}.
$$
(d) The prior density is proportional to $\theta^{1.5-1}(1-\theta)^{1.5-1}$ and the likelihood is proportional to $\theta^{37}(1-\theta)^{13}$. Hence the posterior density is proportional to $\theta^{38.5-1}(1-\theta)^{14.5-1}$, and the posterior distribution is beta(38.5, 14.5).
(e) In the posterior $a=38.5$ and $b=14.5$ . So the mean is
$$
{\frac{a}{a+b}}={\frac{38.5}{53.0}}=\underline{{0.7264}}.
$$
The variance is
$$
\frac{a b}{(a+b)^{2}(a+b+1)}=\frac{38.5\times14.5}{53^{2}\times54}=3.6803\times10^{-3}
$$
so the standard deviation is 0.06067.
(f) See Figure 1.
> theta<-seq(0.01,0.99,0.01)
> prior<-dbeta(theta,1.5,1.5)
> posterior<-dbeta(theta,38.5,14.5)
> plot(theta,posterior,xlab=expression(theta),ylab="Density",type="l")
> lines(theta,prior,lty=2)
(g) Using R the posterior probability that $\theta<0.6$ is 0.02490528.
> pbeta(0.6,38.5,14.5) [1] 0.02490528 | In a small survey, a random sample of 50 people from a large population is selected. Each person is asked a question to which the answer is either “Yes” or “No.” Let the proportion in the population who would answer “Yes” be $\theta$ . Our prior distribution for $\theta$ is a beta(1.5, 1.5) distribution. In the survey, 37 people answer “Yes.”
(a) Find the prior mean and prior standard deviation of $\theta$ .
(b) Find the prior probability that $\theta<0.6$ .
(c) Find the likelihood.
(d) Find the posterior distribution of $\theta$ .
(e) Find the posterior mean and posterior standard deviation of $\theta$ .
(f) Plot a graph showing the prior and posterior probability density functions of $\theta$ on the same axes.
(g) Find the posterior probability that $\theta<0.6$ .
Notes:
The probability density function of a beta $(a,b)$ distribution is $f(x)=k x^{a-1}(1-x)^{b-1}$ where $k$ is a constant.
If $X\sim\operatorname{beta}(a,b)$ then the mean of $X$ is
$$
\operatorname{E}(X)={\frac{a}{a+b}}
$$
and the variance of $X$ is
$$
\operatorname{var}(X)={\frac{a b}{(a+b+1)(a+b)^{2}}}.
$$
If $X\sim\operatorname{beta}(a,b)$ then you can use a command such as the following in $\mathrm{R}$ to find $\mathrm{Pr}(X<c)$ .
pbeta(c,a,b)
To plot the prior and posterior probability densities you may use R commands such as the following.
theta<-seq(0.01,0.99,0.01)
prior<-dbeta(theta,a,b)
posterior<-dbeta(theta,c,d)
plot(theta,posterior,xlab=expression(theta),ylab="Density",type="l")
lines(theta,prior,lty=2) |
(a) The mean is $a/b=3$ and the variance is $a/b^{2}=4$ . So
$$
{\frac{9}{4}}={\frac{a^{2}/b^{2}}{a/b^{2}}}=a,
$$
giving $a=2.25$ and
$$
b={\frac{2.25}{3}}={\underline{{0.75}}}.
$$
Figure 1: Prior (dashes) and posterior (solid) pdfs for Question 1.
(b) Using R the prior probability that $\lambda<2.0$ is 0.3672305.
> pgamma(2,2.25,0.75)
[1] 0.3672305
(c) The likelihood is
$$
\begin{array}{r c l}{\displaystyle\prod_{i=1}^{6}\frac{e^{-\lambda_{i}}\lambda_{i}^{x_{i}}}{x_{i}!}}&{=}&{\displaystyle\frac{e^{-\sum\lambda_{i}}\prod\lambda_{i}^{x_{i}}}{\prod x_{i}!}}\\ &{=}&{e^{-\lambda n/100000}\lambda^{S}\displaystyle\frac{\prod(n_{i}/100000)^{x_{i}}}{\prod x_{i}!}}\end{array}
$$
where $\lambda_{i}=n_{i}\lambda/100000$, $n=\sum n_{i}=1231663$ and $S=\sum x_{i}=19$. This is proportional to
$$
e^{-12.31663\lambda}\lambda^{19}.
$$
(d) The prior density is proportional to $\lambda^{2.25-1}e^{-0.75\lambda}$ and the likelihood is proportional to $\lambda^{19}e^{-12.31663\lambda}$. Hence the posterior density is proportional to $\lambda^{21.25-1}e^{-13.06663\lambda}$, and the posterior distribution is gamma(21.25, 13.06663).
(e) In the posterior $a=21.25$ and $b=13.06663$ . So the mean is
$$
{\frac{a}{b}}={\frac{21.25}{13.06663}}={\underline{{1.6262}}}.
$$
The standard deviation is
$$
{\frac{\sqrt{a}}{b}}={\frac{\sqrt{21.25}}{13.06663}}=\underline{{0.3528}}.
$$
Figure 2: Prior (dashes) and posterior (solid) pdfs for Question 2.
(f) See Figure 2.
> lambda<-seq(0.05,8.0,0.05)
> prior<-dgamma(lambda,2.25,0.75)
> posterior<-dgamma(lambda,21.25,13.06663)
> plot(lambda,posterior,xlab=expression(lambda),ylab="Density",type="l")
> lines(lambda,prior,lty=2)
(g) Using R the posterior probability that $\lambda<2.0$ is 0.8551274.
> pgamma(2,21.25,13.06663) [1] 0.8551274 | The populations, $n_{i}$ , and the number of cases, $x_{i}$ , of a disease in a year in each of six districts are given in the table below.
| Population $n_{i}$ | Cases $x_{i}$ |
|---|---|
| 120342 | 2 |
| 235967 | 5 |
| 243745 | 3 |
| 197452 | 5 |
| 276935 | 3 |
| 157222 | 1 |
We suppose that the number $X_{i}$ in a district with population $n_{i}$ is a Poisson random variable with mean $n_{i}\lambda/100000$ . The number in each district is independent of the numbers in other districts, given the value of $\lambda$ . Our prior distribution for $\lambda$ is a gamma distribution with mean 3.0 and standard deviation 2.0.
(a) Find the parameters of the prior distribution.
(b) Find the prior probability that $\lambda<2.0$ .
(c) Find the likelihood.
(d) Find the posterior distribution of $\lambda$ .
(e) Find the posterior mean and posterior standard deviation of $\lambda$ .
(f) Plot a graph showing the prior and posterior probability density functions of $\lambda$ on the same axes.
(g) Find the posterior probability that $\lambda<2.0$ .
Notes:
The probability density function of a $\mathrm{gamma}(a,b)$ distribution is $f(x)\,=\,k x^{a-1}\exp(-b x)$ where $k$ is a constant.
If $X\sim\operatorname{gamma}(a,b)$ then the mean of $X$ is $\operatorname{E}(X)=a/b$ and the variance of $X$ is $\operatorname{var}(X)=a/b^{2}$ . If $X\sim\operatorname{gamma}(a,b)$ then you can use a command such as the following in $\mathrm{R}$ to find $\mathrm{Pr}(X<c)$ .
pgamma(c,a,b)
To plot the prior and posterior probability densities you may use R commands such as the following.
lambda<-seq(0.00,5.00,0.01)
prior<-dgamma(lambda,a,b)
posterior<-dgamma(lambda,c,d)
plot(lambda,posterior,xlab=expression(lambda),ylab="Density",type="l")
lines(lambda,prior,lty=2) |
Since the prior distribution is uniform, the prior density is constant and the posterior density is proportional to the likelihood. The likelihood is
$$
L=\prod_{i=1}^{4}\prod_{j=1}^{4}p_{i j}^{n_{i j}}
$$
where $n_{i j}$ is the observed number of transitions from rock $i$ to rock $j$ .
The posterior density is therefore
$$
\prod_{i=1}^{4}f_{i}^{(1)}(p_{i1},p_{i2},p_{i3},p_{i4})
$$
where
$$
f_{i}^{(1)}(p_{i1},p_{i2},p_{i3},p_{i4})=k_{1i}p_{i1}^{n_{i1}}p_{i2}^{n_{i2}}p_{i3}^{n_{i3}}p_{i4}^{n_{i4}}
$$
is the posterior density of $p_{i1},p_{i2},p_{i3},p_{i4}$ .
Since
$$
\int\int\int_{R}f_{i}^{(1)}(p_{i1},p_{i2},p_{i3},p_{i4})~d p_{i1}~d p_{i2}~d p_{i3}=1,
$$
we must have
$$
\begin{array}{l c l}{{k_{1i}^{-1}}}&{{=}}&{{\displaystyle\int\displaystyle\int\int_{R}p_{i1}^{n_{i1}}p_{i2}^{n_{i2}}p_{i3}^{n_{i3}}p_{i4}^{n_{i4}}~d p_{i1}~d p_{i2}~d p_{i3}}}\\ {{}}&{{=}}&{{\displaystyle\frac{\Gamma(n_{i1}+1)\Gamma(n_{i2}+1)\Gamma(n_{i3}+1)\Gamma(n_{i4}+1)}{\Gamma(N_{i}+4)}}}\end{array}
$$
where $N_{i}\,=\,n_{i1}+n_{i2}+n_{i3}+n_{i4}$ and the integrals are taken over the region $R$ in which $\left(p_{i1},p_{i2},p_{i3},p_{i4}\right)$ must lie and $p_{i4}=1-p_{i1}-p_{i2}-p_{i3}$ .
Now, the posterior mean of $p_{i1}$ , for example, is
$$
\begin{array}{l l l}{{\mathrm{E}^{(1)}(p_{i1})}}&{{=}}&{{\displaystyle\int\int\int_{R}p_{i1}f_{i}^{(1)}(p_{i1},p_{i2},p_{i3},p_{i4})~d p_{i1}~d p_{i2}~d p_{i3}}}\\ {{}}&{{=}}&{{\displaystyle\int\int\int_{R}k_{1i}p_{i1}^{n_{i1}+1}p_{i2}^{n_{i2}}p_{i3}^{n_{i3}}p_{i4}^{n_{i4}}~d p_{i1}~d p_{i2}~d p_{i3}}}\\ {{}}&{{=}}&{{k_{1i}k_{2i}}}\end{array}
$$
where
$$
k_{2i}=\frac{\Gamma(n_{i1}+2)\Gamma(n_{i2}+1)\Gamma(n_{i3}+1)\Gamma(n_{i4}+1)}{\Gamma(N_{i}+5)}.
$$
So
$$
\begin{array}{r c l}{\mathrm{E}^{(1)}(p_{i1})}&{=}&{\displaystyle\frac{\Gamma(n_{i1}+2)\Gamma(n_{i2}+1)\Gamma(n_{i3}+1)\Gamma(n_{i4}+1)}{\Gamma(n_{i1}+1)\Gamma(n_{i2}+1)\Gamma(n_{i3}+1)\Gamma(n_{i4}+1)}\frac{\Gamma(N_{i}+4)}{\Gamma(N_{i}+5)}}\\ &{=}&{\displaystyle\frac{(n_{i1}+1)\Gamma(n_{i1}+1)}{\Gamma(n_{i1}+1)}\frac{\Gamma(N_{i}+4)}{(N_{i}+4)\Gamma(N_{i}+4)}}\\ &{=}&{\displaystyle\frac{n_{i1}+1}{N_{i}+4}}\end{array}
$$
In general
$$
\mathrm{E}^{(1)}(p_{i j})=\frac{n_{i j}+1}{N_{i}+4}.
$$
The table of posterior means follows by applying this formula to the observed frequencies (computed in the sketch below). | Geologists note the type of rock at fixed vertical intervals of six inches up a quarry face. At this quarry there are four types of rock. The following model is adopted.
The conditional probability that the next rock type is $j$ given that the present type is $i$ and given whatever has gone before is $p_{i j}$ . Clearly $\textstyle\sum_{j=1}^{4}p_{i j}=1$ for all $i$ .The following table gives the observed (upwards) transition frequencies.
| From rock | To rock 1 | To rock 2 | To rock 3 | To rock 4 |
|---|---|---|---|---|
| 1 | 56 | 13 | 24 | 4 |
| 2 | 15 | 93 | 22 | 35 |
| 3 | 20 | 25 | 153 | 11 |
| 4 | 6 | 35 | 11 | 44 |
Our prior distribution for the transition probabilities is as follows. For each $i$ we have a uniform distribution over the space of possible values of $p_{i1},\ldots,p_{i4}$ . The prior distribution of $p_{i1},\ldots,p_{i4}$ is independent of that for $\,\!p_{k1},\ldots,p_{k4}$ for $i\neq k$ .
Find the matrix of posterior expectations of the transition probabilities.
Note that the integral of $x_{1}^{n_{1}}x_{2}^{n_{2}}x_{3}^{n_{3}}x_{4}^{n_{4}}$ over the region such that $x_{j}>0$ for $j=1,\dots,4$ and $\textstyle\sum_{j=1}^{4}x_{j}=1$ , where $n_{1},\ldots,n_{4}$ are positive, is
$$
\begin{array}{c}{{\displaystyle\int_{0}^{1}x_{1}^{n_{1}}\int_{0}^{1-x_{1}}x_{2}^{n_{2}}\int_{0}^{1-x_{1}-x_{2}}x_{3}^{n_{3}}(1-x_{1}-x_{2}-x_{3})^{n_{4}}.d x_{3}.d x_{2}.d x_{1}}}\\ {{\displaystyle=\frac{\Gamma(n_{1}+1)\Gamma(n_{2}+1)\Gamma(n_{3}+1)\Gamma(n_{4}+1)}{\Gamma(n_{1}+n_{2}+n_{3}+n_{4}+4)}}}\end{array}
$$ |
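A minimal R sketch computing the promised matrix of posterior means from the observed transition frequencies:

n <- matrix(c(56, 13,  24,  4,
              15, 93,  22, 35,
              20, 25, 153, 11,
               6, 35,  11, 44), nrow = 4, byrow = TRUE)
# E(p_ij | data) = (n_ij + 1)/(N_i + 4); rowSums(n) + 4 recycles down rows
round((n + 1) / (rowSums(n) + 4), 4)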
(a) Prior mean:
$$
{\frac{a}{b}}=16,
$$
Prior variance:
$$
{\frac{a}{b^{2}}}=64.
$$
Hence $\underline{{a=4}}$ and $b=0.25$ .
(1 mark)
(b) From the data $\textstyle s=\sum_{i=1}^{20}x_{i}=400$ Prior density proportional to
$$
\lambda^{4-1}e^{-0.25\lambda}
$$
Likelihood proportional to
$$
\prod_{i=1}^{20}e^{-\lambda}\lambda^{x_{i}}=e^{-20\lambda}\lambda^{s}=\lambda^{400}e^{-20\lambda}
$$
Hence posterior density proportional to
$$
\lambda^{404-1}e^{-20.25\lambda}
$$
This is a gamma(404, 20.25) distribution.
(1 mark)
(c) R commands:
> sales<-c(14,19,14,21,22,33,15,13,16,19,27,22,27,21,16,25,14,23,22,17)
> lambda<-seq(10,25,0.01)
> prior<-dgamma(lambda,4,0.25)
> post<-dgamma(lambda,404,20.25)
> pdf("probs309q7.pdf",height ${}^{,=5}$ )
> plot(lambda,post,type $=$ "l",xlab $\lvert=$ expression(lambda),ylab $=$ "Density")
> lines(lambda,prior,lty $^{=2}$ )
> abline(0,0)
> dev.off()
The graph is shown in Figure 3.
(2 marks)
(d) R commands and result:
> hpdgamma(0.95,404,20.25)
II Lower Upper Difference"
1 9.17326 21.6109 1
2 13.7599 21.6109 1
3 16.0532 21.611 0.998601
4 17.1999 21.6305 0.86782
5 17.7732 21.7447 0.402608
6 18.0599 21.951 -0.0807799
7 17.9165 21.8224 0.189252
8 17.9882 21.878 0.062461
9 18.024 21.9119 -0.00689586
10 18.0061 21.8944 0.0283184
11 18.0151 21.903 0.0108487
12 18.0195 21.9074 0.00201123
13 18.0218 21.9096 -0.00243356
14 18.0207 21.9085 -0.00020898
15 18.0201 21.908 0.000901669
16 18.0204 21.9082 0.000346481
17 18.0205 21.9084 6.87842e-05
[1] 18.02052 21.90838
>
The $95\%$ hpd interval is $\underline{{18.02<\lambda<21.91}}$ | The numbers of sales of a particular item from an Internet retail site in each of 20 weeks are recorded. Assume that, given the value of a parameter $\lambda$ , these numbers are independent observations from the Poisson(λ) distribution.
Our prior distribution for $\lambda$ is a gamma $(a,b)$ distribution.
(a) Our prior mean and standard deviation for $\lambda$ are 16 and 8 respectively. Find the values of $a$ and $b$ .
(b) The observed numbers of sales are as follows. 14 19 14 21 22 33 15 13 16 19 27 22 27 21 16 25 14 23 22 17 Find the posterior distribution of $\lambda$ .
(c) Using R or otherwise, plot a graph showing both the prior and posterior probability density functions of $\lambda$ .
(d) Using R or otherwise, find a 95% posterior hpd interval for $\lambda$ . (Note: The R function hpdgamma is available from the Module Web Page). |
(a) Variance of beta $(a,b)$ :
$$
\frac{a b}{(a+b+1)(a+b)^{2}}
$$
Variance of $\mathrm{{beta}}(a,a)$ :
$$
{\frac{a^{2}}{(2a+1)(2a)^{2}}}={\frac{1}{4(2a+1)}}
$$
$$
{\frac{1}{4(2a+1)}}={\frac{1}{4^{2}}}\;\;\;\Rightarrow\;\;\;(2a+1)=4\;\;\Rightarrow\;\;\;{\underline{{a=1.5}}}
$$
(1 mark)
(b) Prior: beta(1.5, 1.5). Likelihood: $\theta^{21}(1-\theta)^{9}$. Posterior: beta(22.5, 10.5).
(1 mark)
(c) Posterior mean:
$$
{\frac{22.5}{22.5+10.5}}={\frac{22.5}{33}}={\underline{{0.6818}}}
$$
Posterior variance:
$$
\frac{22.5\times10.5}{34\times33^{2}}=0.006381
$$
Posterior std.dev.:
$$
\sqrt{0.006381}=\underline{{0.0799}}
$$
(1 mark)
(d) R commands:
> theta<-seq(0,1,0.005)
> prior<-dbeta(theta,1.5,1.5)
> post<-dbeta(theta,22.5,10.5)
> pdf("probs309q8.pdf",height $=5.$ )
> plot(theta,post,type $=$ "l",xlab $=$ expression(theta),ylab $=$ "Density")
> lines(theta,prior,lty $^{=2}$ )
> abline(0,0)
> dev.off()
The graph is shown in Figure 4.
(2 marks)
(e) R commands and results:
Figure 4: Prior (dashes) and posterior (solid line) density functions for $\theta$ (Question 8).
> qbeta(0.025,22.5,10.5)
[1] 0.5161281
> qbeta(0.975,22.5,10.5)
[1] 0.8266448
The $95\%$ symmetric interval is $0.516<\theta<0.827$ . | In a medical experiment, patients with a chronic condition are asked to say which of two treatments, A, B, they prefer. (You may assume for the purpose of this question that every patient will express a preference one way or the other). Let the population proportion who prefer A be $\theta$ . We observe a sample of $n$ patients. Given $\theta$ , the $n$ responses are independent and the probability that a particular patient prefers A is $\theta$ .
Our prior distribution for $\theta$ is a beta $(a,a)$ distribution with a standard deviation of 0.25.
(a) Find the value of $a$ .
(b) We observe $n=30$ patients of whom 21 prefer treatment A. Find the posterior distribution of $\theta$ .
(c) Find the posterior mean and standard deviation of $\theta$ .
(d) Using R or otherwise, plot a graph showing both the prior and posterior probability density functions of $\theta$ .
(e) Using R or otherwise, find a symmetric $95\%$ posterior probability interval for $\theta$ . (Hint: The $R$ command qbeta(0.025,a,b) will give the $2.5\%$ point of a $b e t a(a,b)$ distribution). |
(a) Median
$$
e^{-\lambda m}=\frac12\quad\mathrm{so}\quad\lambda m=\log2\quad\mathrm{so}\quad m=\frac{\log2}{\lambda}.
$$
(1 mark)
(b) We have $\lambda=(\log2)/m$ so
$$
\begin{array}{l c l}{{k_{1}=\displaystyle\frac{\log2}{46.2}}}&{{=}}&{{\underline{{{0.0150}}}}}\\ {{k_{2}=\displaystyle\frac{\log2}{6.0}}}&{{=}}&{{\underline{{{0.1155}}}}}\end{array}
$$
(1 mark)
(c) Find $r$ :
$$
r=\frac{k_{2}}{k_{1}}=7.7.
$$
This is satisfied by $\nu=6$ . See R:
> nu=5
> qchisq(0.95,nu)/qchisq(0.05,nu)
[1] 9.664537
> nu=6
> qchisq(0.95,nu)/qchisq(0.05,nu)
[1] 7.699473
Hence $a=\nu/2=\underline{{{3}}}$ .
(1 mark)
(d) Lower $5\%$ point of $\chi_{6}^{2}$ (i.e. gamma(3, 1/2)) is 1.635383.
> qchisq(0.05,6)
[1] 1.635383
So
$$
{\frac{b}{1/2}}={\frac{1.635383}{0.0150}}=109.00\quad{\mathrm{so}}\quad\underline{b=54.5}.
$$
Prior distribution is gamma(3, 54.5).
(2 marks)
(e) Prior density proportional to
$$
\lambda^{3-1}e^{-54.5\lambda}
$$
Likelihood:
$$
\prod_{i=1}^{25}\lambda e^{-\lambda t_{i}}=\lambda^{25}e^{-\lambda\sum t_{i}}=\lambda^{25}e^{-502\lambda}
$$
Posterior density proportional to
$$
\lambda^{28-1}e^{-556.5\lambda}
$$
This is a gamma(28, 556.5) distribution.
(1 mark)
(f) Using the relationship with the $\chi^{2}$ distribution:
$$
2\times556.5\lambda\sim\chi_{56}^{2}
$$
95% interval:
$37.21159<\chi_{56}^{2}<78.56716$
$$
\frac{37.21159}{2\times556.5}<\lambda<\frac{78.56716}{2\times556.5}\qquad\text{i.e.}\quad 0.0334<\lambda<0.0706
$$
(2 marks)
> qchisq(0.025,56)
[1] 37.21159
> qchisq(0.975,56)
[1] 78.56716 | The survival times, in months, of patients diagnosed with a severe form of a terminal illness are thought to be well modelled by an exponential $(\lambda)$ distribution. We observe the survival times of $n$ such patients. Our prior distribution for $\lambda$ is a $\mathrm{gamma}(a,b)$ distribution.
(a) Prior beliefs are expressed in terms of the median lifetime, $m$ . Find an expression for $m$ in terms of $\lambda$ .
(b) In the prior distribution, the lower 5% point for $m$ is 6.0 and the upper 5% point is 46.2. Find the corresponding lower and upper $5\%$ points for $\lambda$ . Let these be $k_{1},\ k_{2}$ respectively.
(c) Let $k_{2}/k_{1}=r$ . Find, to the nearest integer, the value of $\nu$ such that, in a $\chi_{\nu}^{2}$ distribution, the $95\%$ point divided by the 5% point is $r$ and hence deduce the value of $a$ .
(d) Using your value of $a$ and one of the percentage points for $\lambda$ , find the value of $b$ .
(e) We observe $n\,=\,25$ patients and the sum of the lifetimes is 502. Find the posterior distribution of $\lambda$ .
(f) Using the relationship of the gamma distribution to the $\chi^{2}$ distribution, or otherwise, find a symmetric $95\%$ posterior interval for $\lambda$ .
Note: The R command qchisq(0.025,nu) will give the lower 2.5% point of a $\chi^{2}$ distribution on nu degrees of freedom. |
(a) Prior distribution is Dirichlet(4,2,2,3).
So $A_{0}=4+2+2+3=11$ .The prior means are
$$
{\frac{a_{0,i}}{A_{0}}}.
$$
The prior variances are
$$
\frac{a_{0,i}}{(A_{0}+1)A_{0}}-\frac{a_{0,i}^{2}}{A_{0}^{2}(A_{0}+1)}.
$$
Prior means:
$$
\begin{array}{r c l c r}{{\theta_{11}:}}&{{\quad}}&{{\frac{4}{11}}}&{{=}}&{{\underline{{0.3636}}}}\\ {{\theta_{10}:}}&{{\quad}}&{{\frac{2}{11}}}&{{=}}&{{\underline{{0.1818}}}}\\ {{\theta_{01}:}}&{{\quad}}&{{\frac{2}{11}}}&{{=}}&{{\underline{{0.1818}}}}\\ {{\theta_{00}:}}&{{\quad}}&{{\frac{3}{11}}}&{{=}}&{{\underline{{0.2727}}}}\end{array}
$$
Prior variances:
$$
\begin{array}{r c l c r c l}{{\theta_{11}:}}&{{}}&{{\displaystyle\frac{4}{12\times11}-\frac{4^{2}}{11^{2}\times12}}}&{{=}}&{{\displaystyle\underline{{0.019284}}}}\\ {{\theta_{10}:}}&{{}}&{{\displaystyle\frac{2}{12\times11}-\frac{2^{2}}{11^{2}\times12}}}&{{=}}&{{\displaystyle\underline{{0.012397}}}}\\ {{\theta_{01}:}}&{{}}&{{\displaystyle\frac{2}{12\times11}-\frac{2^{2}}{11^{2}\times12}}}&{{=}}&{{\displaystyle\underline{{0.012397}}}}\\ {{}}&{{}}&{{}}&{{}}&{{}}\\ {{\theta_{00}:}}&{{}}&{{\displaystyle\frac{3}{12\times11}-\frac{3^{2}}{11^{2}\times12}}}&{{=}}&{{\displaystyle\underline{{0.016529}}}}\end{array}
$$
(b) Posterior distribution is Dirichlet(4+25, 2+7, 2+6, 3+13). That is Dirichlet(29,9,8,16).
(c) Now $A_{1}=29+9+8+16=62$ .
The posterior means are
$$
{\frac{a_{1,i}}{A_{1}}}.
$$
The posterior variances are
$$
{\frac{a_{1,i}}{(A_{1}+1)A_{1}}}-{\frac{a_{1,i}^{2}}{A_{1}^{2}(A_{1}+1)}}.
$$
Posterior means:
$$
\begin{array}{l l l l}{{\theta_{11}:}}&{{\qquad\frac{29}{62}}}&{{=}}&{{\underline{{{0.4677}}}}}\\ {{\theta_{10}:}}&{{\qquad\frac{9}{62}}}&{{=}}&{{\underline{{{0.1452}}}}}\\ {{\theta_{01}:}}&{{\qquad\frac{8}{62}}}&{{=}}&{{\underline{{{0.1290}}}}}\\ {{\theta_{00}:}}&{{\qquad\frac{16}{62}}}&{{=}}&{{\underline{{{0.2581}}}}}\end{array}
$$
Posterior variances:
$$
\begin{array}{r c l c r}{{\theta_{11}:}}&{{\quad}}&{{\displaystyle\frac{29}{63\times62}-\frac{29^{2}}{62^{2}\times63}}}&{{=}}&{{\displaystyle\underline{{0.003952}}}}\\ {{\theta_{10}:}}&{{\null}}&{{\displaystyle\frac{9}{63\times62}-\frac{9^{2}}{62^{2}\times63}}}&{{=}}&{{\displaystyle\underline{{0.001970}}}}\\ {{\theta_{01}:}}&{{\null}}&{{\displaystyle\frac{8}{63\times62}-\frac{8^{2}}{62^{2}\times63}}}&{{=}}&{{\displaystyle\underline{{0.001784}}}}\\ {{\theta_{00}:}}&{{\null}}&{{\displaystyle\frac{16}{63\times62}-\frac{16^{2}}{62^{2}\times63}}}&{{=}}&{{\displaystyle\underline{{0.003039}}}}\end{array}
$$
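A minimal R sketch reproducing these Dirichlet moments (prior parameters (4, 2, 2, 3); posterior after adding the observed counts):

mom <- function(a) {
  A <- sum(a)
  cbind(mean = a / A, var = a * (A - a) / (A^2 * (A + 1)))
}
round(mom(c(4, 2, 2, 3)), 6)                    # prior means and variances
round(mom(c(4, 2, 2, 3) + c(25, 7, 6, 13)), 6)  # posterior: Dirichlet(29, 9, 8, 16)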
(d) Posterior distribution for $\theta_{00}$ is beta$(16,\ 62-16)$, that is beta$(16,\ 46)$.
Using the R command hpdbeta(0.95,16,46) gives $0.15325<\theta_{00}<0.36724$ | I recorded the attendance of students at tutorials for a module. Suppose that we can, in some sense, regard the students as a sample from some population of students so that, for example, we can learn about the likely behaviour of next year’s students by observing this year’s. At the time I recorded the data we had had tutorials in Week 2 and Week 4. Let the probability that a student attends in both weeks be $\theta_{11}$ , the probability that a student attends in week 2 but not Week 4 be $\theta_{10}$ and so on. The data are as follows.
| Attendance | Probability | Observed frequency |
|---|---|---|
| Week 2 and Week 4 | $\theta_{11}$ | $n_{11}=25$ |
| Week 2 but not Week 4 | $\theta_{10}$ | $n_{10}=7$ |
| Week 4 but not Week 2 | $\theta_{01}$ | $n_{01}=6$ |
| Neither week | $\theta_{00}$ | $n_{00}=13$ |
Suppose that the prior distribution for $(\theta_{11},\theta_{10},\theta_{01},\theta_{00})$ is a Dirichlet distribution with density proportional to
$$
\theta_{11}^{3}\theta_{10}\theta_{01}\theta_{00}^{2}
$$
(a) Find the prior means and prior variances of $\theta_{11},\ \theta_{10},\ \theta_{01},\ \theta_{00}$ .
(b) Find the posterior distribution.
(c) Find the posterior means and posterior variances of $\theta_{11},\ \theta_{10},\ \theta_{01},\ \theta_{00}.$ .
(d) Using the R function hpdbeta which may be obtained from the Web page (or otherwise), find a $95\%$ posterior hpd interval, based on the exact posterior distribution, for $\theta_{00}$ . |
Let $N=\sum_{j=1}^{J}n_{j}$ . The likelihood is
$$
\begin{aligned}
L &= \prod_{j=1}^{J}\prod_{i=1}^{n_{j}}(2\pi)^{-1/2}\tau^{1/2}\exp\left\{-\frac{\tau}{2}(y_{i,j}-\mu_{j})^{2}\right\} \\
&= (2\pi)^{-N/2}\tau^{N/2}\exp\left\{-\frac{\tau}{2}\sum_{j=1}^{J}\sum_{i=1}^{n_{j}}(y_{i,j}-\mu_{j})^{2}\right\} \\
&= (2\pi)^{-N/2}\tau^{N/2}\exp\left\{-\frac{\tau}{2}\sum_{j=1}^{J}\sum_{i=1}^{n_{j}}(y_{i,j}-\bar{y}_{j}+\bar{y}_{j}-\mu_{j})^{2}\right\} \\
&= (2\pi)^{-N/2}\tau^{N/2}\exp\left\{-\frac{\tau}{2}\left[\sum_{j=1}^{J}\sum_{i=1}^{n_{j}}(y_{i,j}-\bar{y}_{j})^{2}+\sum_{j=1}^{J}n_{j}(\bar{y}_{j}-\mu_{j})^{2}+2\sum_{j=1}^{J}\sum_{i=1}^{n_{j}}(y_{i,j}-\bar{y}_{j})(\bar{y}_{j}-\mu_{j})\right]\right\} \\
&= (2\pi)^{-N/2}\tau^{N/2}\exp\left\{-\frac{\tau}{2}\left[\sum_{j=1}^{J}\sum_{i=1}^{n_{j}}(y_{i,j}-\bar{y}_{j})^{2}+\sum_{j=1}^{J}n_{j}(\bar{y}_{j}-\mu_{j})^{2}\right]\right\}
\end{aligned}
$$
since
$$
\sum_{j=1}^{J}\sum_{i=1}^{n_{j}}(y_{i,j}-{\bar{y}}_{j})({\bar{y}}_{j}-\mu_{j})=\sum_{j=1}^{J}\left\{({\bar{y}}_{j}-\mu_{j})\sum_{i=1}^{n_{j}}(y_{i,j}-{\bar{y}}_{j})\right\}=0.
$$
Hence
$$
{\cal L}=(2\pi)^{-N/2}\tau^{N/2}\exp\left\{-\frac{\tau}{2}\left[S+\sum_{j=1}^{J}n_{j}(\bar{y}_{j}-\mu_{j})^{2}\right]\right\}
$$
in which the data only appear through $S$ and $\underline{{\bar{y}}}$ . Hence $S$ and $\underline{{\bar{y}}}$ are sufficient for $\tau$ and $\underline{{\boldsymbol\mu}}$ . | Suppose that we have $J$ samples and, given the parameters, observation $i$ in sample $j$ is
$$
y_{i,j}\sim N(\mu_{j},\ \tau^{-1})
$$
for $i=1,\dots,n_{j}$ and $j=1,\dots,J$ .
Let $\underline{{\boldsymbol{\mu}}}=(\mu_{1},\ldots,\mu_{J})^{T}$ , let $\bar{\underline{{y}}}=(\bar{y}_{1},\dots,\bar{y}_{J})^{T}$ , and let
$$
S=\sum_{j=1}^{J}\sum_{i=1}^{n_{j}}(y_{i,j}-\Bar{y}_{j})^{2},
$$
where
$$
{\bar{y}}_{j}={\frac{1}{n_{j}}}\sum_{i=1}^{n_{j}}y_{i,j}.
$$
Show that $\underline{y}$ and $S$ are sufficient for $\underline{{\boldsymbol\mu}}$ and $\tau$ . |
Likelihood:
$$
\begin{array}{r c l}{{L}}&{{=}}&{{\displaystyle\prod_{i=1}^{n}\frac{\beta^{\alpha}y_{i}^{\alpha-1}e^{-\beta y_{i}}}{\Gamma(\alpha)}}}\\ {{}}&{{=}}&{{\displaystyle\frac{\beta^{n\alpha}}{[\Gamma(\alpha)]^{n}}T_{2}^{\alpha-1}e^{-\beta T_{1}}}}\\ {{}}&{{=}}&{{g(\alpha,\beta,T_{1},T_{2})h(y)}}\end{array}
$$
where $h(\underline{{y}})=1$
So, by the factorisation theorem, $T_{1},T_{2}$ are sufficient for $\alpha,\beta$ . | We make $n$ observations $y_{1},\ldots,y_{n}$ , which, given that values of parameters $\alpha,~\beta$ , are independent observations from a $\mathrm{gamma}(\alpha,\beta)$ distribution. Show that the statistics $T_{1},\ T_{2}$ are sufficient for $\alpha,~\beta$ where
$$
T_{1}=\sum_{i=1}^{n}y_{i}\qquad\qquad\qquad\mathrm{and}\qquad\qquad\qquad T_{2}=\prod_{i=1}^{n}y_{i}.
$$ |
(a) Prior mean: $M_{0}=2.5$
Prior precision:
$$
P_{0}=\frac{1}{0.5^{2}}=4
$$
Data precision:
$$
n\tau={\frac{10}{0.05^{2}}}=4000
$$
Posterior precision: $P_{1}=4+4000=4004$
Sample mean: $\bar{y}=3.035$
Posterior mean:
$$
M_{1}={\frac{4\times2.5+4000\times3.035}{4004}}=3.0345
$$
Posterior variance:
$$
{\frac{1}{4004}}=0.000250
$$
Posterior distribution:
$$
\log\theta\sim N(3.0345,\ 0.000250)
$$
(2 marks)
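A short R check of part (a), using the ten logged measurements listed in the problem:

y  <- c(2.99, 3.03, 3.04, 3.01, 3.12, 2.98, 3.03, 2.98, 3.07, 3.10)
P0 <- 1 / 0.5^2                     # prior precision
P1 <- P0 + length(y) / 0.05^2       # posterior precision = 4004
M1 <- (P0 * 2.5 + (length(y) / 0.05^2) * mean(y)) / P1
c(M1 = M1, var = 1 / P1)            # 3.0345 and 0.000250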
(b) Posterior interval for $\log\theta$: $M_{1}\pm1.96\sqrt{1/P_{1}}$
$3.0035<\log\theta<3.0655$
(1 mark)
(c) Posterior interval for $\theta$ :
$$
e^{3.0035}<\theta<e^{3.0655}
$$
(1 mark)
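As a numerical check (not part of the original mark scheme), parts (a)–(c) can be reproduced in R:

```r
y <- c(2.99, 3.03, 3.04, 3.01, 3.12, 2.98, 3.03, 2.98, 3.07, 3.10)
P0 <- 1/0.5^2                       # prior precision: 4
Pd <- 10/0.05^2                     # data precision: 4000
P1 <- P0 + Pd                       # posterior precision: 4004
M1 <- (P0*2.5 + Pd*mean(y))/P1      # posterior mean: 3.0345
M1 + c(-1, 1)*1.96/sqrt(P1)         # interval for log(theta): (3.0035, 3.0655)
exp(M1 + c(-1, 1)*1.96/sqrt(P1))    # interval for theta
```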
(d) Posterior probability:
$$
{\begin{array}{l l l}{\operatorname*{Pr}(\theta<20)}&{=}&{\operatorname*{Pr}(\log\theta<\log20=2.9957)}\\ &{=}&{\Phi\left({\frac{2.9957-3.0345}{\sqrt{0.000250}}}\right)}\\ &{=}&{\Phi(-2.45515)=0.0070}\end{array}}
$$ | Ten measurements are made using a scientific instrument. Given the unknown value of a quantity $\theta$ , the natural logarithms of the measurements are independent and normally distributed with mean $\log\theta$ and known standard deviation 0.05.
Our prior distribution is such that $\log\theta$ has a normal distribution with mean 2.5 and standard deviation 0.5.
The logarithms of the measurements are as follows.
2.99 3.03 3.04 3.01 3.12 2.98 3.03 2.98 3.07 3.10
(a) Find the posterior distribution of $\log\theta$ .
(b) Find a symmetric 95% posterior interval for $\log\theta$ .
(c) Find a symmetric $95\%$ posterior interval for $\theta$ .
(d) Find the posterior probability that $\theta<20.0$ . |
(a) Prior density proportional to
$$
\prod_{j=1}^{12}\theta_{j}^{2-1}
$$
Likelihood proportional to
$$
\prod_{j=1}^{12}\theta_{j}^{x_{j}}
$$
Posterior density proportional to
$$
\prod_{j=1}^{12}\theta_{j}^{x_{j}+2-1}
$$
i.e. $\mathrm{Dirichlet}(x_{1}+2,\ x_{2}+2,\ \dots,\ x_{12}+2)$. The posterior distribution is Dirichlet(68, 65, 66, 50, 66, 76, 72, 61, 56, 53, 47, 44).
(2 marks)
(b) Posterior mean for $\theta_{j}$ is
$$
\frac{x_{j}+2}{\sum(x_{i}+2)}=\frac{x_{j}+2}{\sum x_{i}+24}
$$
| January | February | March | April | May | June |
|---|---|---|---|---|---|
| 0.09392 | 0.08978 | 0.09116 | 0.06906 | 0.09116 | 0.10497 |

| July | August | September | October | November | December |
|---|---|---|---|---|---|
| 0.09945 | 0.08425 | 0.07735 | 0.07320 | 0.06492 | 0.06077 |
(1 mark)
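These can be reproduced in R (defining here the `births` vector which is also used in part (c) below):

```r
births <- c(66, 63, 64, 48, 64, 74, 70, 59, 54, 51, 45, 42)
(births + 2)/sum(births + 2)   # posterior means, January to December
```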
(c) $\sum(x_{j}+2)=724$. The marginal distribution for $\theta_{j}$ is $\mathrm{beta}(x_{j}+2,\ 722-x_{j})$.
```r
k <- 1/12
prob <- 1 - pbeta(k, births + 2, 722 - births)
```
Probability $\theta_{j}>1/12$ :
| January | February | March | April | May | June |
|---|---|---|---|---|---|
| 0.8357 | 0.7205 | 0.7630 | 0.0709 | 0.7630 | 0.9772 |

| July | August | September | October | November | December |
|---|---|---|---|---|---|
| 0.9322 | 0.5209 | 0.2650 | 0.1480 | 0.0286 | 0.0096 |
It seems very likely that some months have more than their “fair share” of births and some less. Of course there might have been something unusual about the period when the data were collected but, assuming there was not, then it seems very likely that the months are in fact different – even if we allowed for their different lengths. In particular, June and July seem to have high rates and April, November and December seem to have low rates.
Mark for reasonable comment.
(2 marks)
(d) If the posterior parameters are $a_{1,1},a_{1,2},\dots,a_{1,12}$ then the joint posterior distribution of $\theta_{1},\ \theta_{2},\ \tilde{\theta}_{2}$ is $\mathrm{Dirichlet}(a_{1,1},\ a_{1,2},\ A_{1}-a_{1,1}-a_{1,2})$ where $A_{1}=\sum a_{1,j}$. Therefore the distribution is $\mathrm{Dirichlet}(68,\,65,\,591)$. | Walser (1969) gave the following data on the month of giving birth for 700 women giving birth for the first time. The births took place at the University Hospital of Basel, Switzerland.
| Month | No. of births | Month | No. of births | Month | No. of births |
|---|---|---|---|---|---|
| 1 January | 66 | 5 May | 64 | 9 September | 54 |
| 2 February | 63 | 6 June | 74 | 10 October | 51 |
| 3 March | 64 | 7 July | 70 | 11 November | 45 |
| 4 April | 48 | 8 August | 59 | 12 December | 42 |
We have unknown parameters $\theta_{1},\dots,\theta_{12}$ where, given the values of these parameters, the probability that one of these births takes place in month $j$ is $\theta_{j}$ and January is month 1, February is month 2 and so on through to December which is month 12. Given the parameters, the birth dates are assumed to be independent.
Our prior distribution for $\theta_{1},\dots,\theta_{12}$ is a Dirichlet distribution with parameters $a_{1}=a_{2}=\dots=a_{12}=2$.
(a) Find the posterior distribution of $\theta_{1},\dots,\theta_{12}$ .
(b) For each of $j=1,\dots,12$ , find the posterior mean of $\theta_{j}$ .
(c) For each of $j=1,\dots,12$, find the posterior probability that $\theta_{j}>1/12$ and comment on the results.
(d) Find the joint posterior distribution of $\theta_{1},\theta_{2},\tilde{\theta}_{2}$, where $\tilde{\theta}_{2}=1-\theta_{1}-\theta_{2}$.
Note: You may use R for the calculations but give the commands which you use with your solution. |
(a) Likelihood:
$$
\begin{array}{r c l}{{{\cal L}}}&{{=}}&{{\displaystyle\prod_{i=1}^{m}\left(\begin{array}{c}{{n}}\\ {{x_{i}}}\end{array}\right)\theta^{x_{i}}(1-\theta)^{n-x_{i}}}}\\ {{}}&{{=}}&{{\displaystyle\left\{\prod_{i=1}^{m}\left(\begin{array}{c}{{n}}\\ {{x_{i}}}\end{array}\right)\right\}\theta^{s}(1-\theta)^{n m-s}}}\\ {{}}&{{=}}&{{g(\theta,s)h(\underline{{{x}}})}}\end{array}
$$
where $g(\theta,s)=\theta^{s}(1-\theta)^{n m-s}$ and $\begin{array}{r}{h(\underline{{x}})=\prod_{i=1}^{m}\left(\begin{array}{c}{n}\\ {x_{i}}\end{array}\right).}\end{array}$
Hence, by the factorisation theorem, $\boldsymbol{s}$ is sufficient for $\theta$ .
(b) Likelihood:
$$
\begin{array}{r c l}{{L}}&{{=}}&{{\displaystyle\prod_{i=1}^{m}\left(\begin{array}{c}{{y_{i}-1}}\\ {{r-1}}\end{array}\right)\theta^{r}(1-\theta)^{y_{i}-r}}}\\ {{}}&{{=}}&{{\displaystyle\left\{\prod_{i=1}^{m}\left(\begin{array}{c}{{y_{i}-1}}\\ {{r-1}}\end{array}\right)\right\}\theta^{m r}(1-\theta)^{t-m r}}}\\ {{}}&{{=}}&{{g(\theta,t)h(y)}}\end{array}
$$
where $g(\theta,t)=\theta^{m r}(1-\theta)^{t-m r}$ and $\begin{array}{r}{h(\underline{{y}})=\prod_{i=1}^{m}\left(\begin{array}{c}{y_{i}-1}\\ {r-1}\end{array}\right).}\end{array}$
Hence, by the factorisation theorem, $t$ is sufficient for $\theta$ .
(c) Prior density proportional to $\theta^{a-1}(1-\theta)^{b-1}$
i. Likelihood 1 proportional to $\theta^{r}(1-\theta)^{y-r}$. Hence posterior 1 proportional to $\theta^{a+r-1}(1-\theta)^{b+y-r-1}$. That is, we have a $\operatorname{beta}(a+r,\ b+y-r)$ distribution.
ii. Likelihood 2 proportional to $\theta^{x}(1-\theta)^{n-x}$. Hence posterior 2 proportional to $\theta^{a+r+x-1}(1-\theta)^{b+y+n-r-x-1}$. That is, we have a $\operatorname{beta}(a+r+x,\ b+y+n-r-x)$ distribution. | Potatoes arrive at a crisp factory in large batches. Samples are taken from each batch for quality checking. Assume that each potato can be classified as “good” or “bad” and that, given the value of a parameter $\theta$, potatoes are independent and each has probability $\theta$ of being “bad.”
(a) Suppose that $m$ samples, each of fixed size $n$ , are chosen and that the numbers of bad potatoes found are $x_{1},\ldots,x_{m}$ . Show that
$$
s=\sum_{i=1}^{m}x_{i}
$$
is sufficient for $\theta$ .
(b) Suppose that potatoes are examined one at a time until a fixed number $r$ of bad potatoes is found. Let the number of potatoes examined when the $r^{\mathrm{th}}$ bad potato is found be $_y$ .This process is repeated $m$ times and the values of $_y$ are $y_{1},\ldots,y_{m}$ . Show that
$$
t=\sum_{i=1}^{m}y_{i}
$$
is sufficient for $\theta$ .
(c) Suppose that we have a prior distribution for $\theta$ which is a $\mathrm{beta}(a,b)$ distribution. A two-stage inspection procedure is adopted. In Stage 1 potatoes are examined one at a time until a fixed number $r$ of bad potatoes is found. The $r^{\mathrm{th}}$ bad potato found is the $y^{\mathrm{th}}$ to be examined. In Stage 2 a further $n$ potatoes are examined and $x$ of these are found to be bad.
i. Find the posterior distribution of $\theta$ after Stage 1.
ii. Find the posterior distribution of $\theta$ after Stage 1 and Stage 2. |
(a) Prior distribution is Dirichlet(4,2,2,3). So $A_{0}=4+2+2+3=11$ .The prior means are
$$
{\frac{a_{0,i}}{A_{0}}}.
$$
The prior variances are
$$
\frac{a_{0,i}}{(A_{0}+1)A_{0}}-\frac{a_{0,i}^{2}}{A_{0}^{2}(A_{0}+1)}.
$$
Prior means:
$$
\begin{array}{r c l c r}{{\theta_{11}:}}&{{\quad}}&{{\frac{4}{11}}}&{{=}}&{{\underline{{0.3636}}}}\\ {{\theta_{10}:}}&{{\quad}}&{{\frac{2}{11}}}&{{=}}&{{\underline{{0.1818}}}}\\ {{\theta_{01}:}}&{{\quad}}&{{\frac{2}{11}}}&{{=}}&{{\underline{{0.1818}}}}\\ {{\theta_{00}:}}&{{\quad}}&{{\frac{3}{11}}}&{{=}}&{{\underline{{0.2727}}}}\end{array}
$$
Prior variances:
$$
\begin{array}{r c l c r c l}{{\theta_{11}:}}&{{}}&{{\displaystyle\frac{4}{12\times11}-\frac{4^{2}}{11^{2}\times12}}}&{{=}}&{{\displaystyle\underline{{0.019284}}}}\\ {{\theta_{10}:}}&{{}}&{{\displaystyle\frac{2}{12\times11}-\frac{2^{2}}{11^{2}\times12}}}&{{=}}&{{\displaystyle\underline{{0.012397}}}}\\ {{\theta_{01}:}}&{{}}&{{\displaystyle\frac{2}{12\times11}-\frac{2^{2}}{11^{2}\times12}}}&{{=}}&{{\displaystyle\underline{{0.012397}}}}\\ {{}}&{{}}&{{}}&{{}}&{{}}\\ {{\theta_{00}:}}&{{}}&{{\displaystyle\frac{3}{12\times11}-\frac{3^{2}}{11^{2}\times12}}}&{{=}}&{{\displaystyle\underline{{0.016529}}}}\end{array}
$$
NOTE: Suppose that we are given prior means $m_{1},\ldots,m_{4}$ and one prior standard deviation $s_{1}$ . Then
$$
a_{0i}=m_{i}A_{0}
$$
and
$$
s_{1}^{2}=\frac{m_{1}A_{0}}{(A_{0}+1)A_{0}}-\frac{m_{1}^{2}A_{0}^{2}}{A_{0}^{2}(A_{0}+1)}=\frac{m_{1}(1-m_{1})}{A_{0}+1}.
$$
Hence
$$
A_{0}+1={\frac{m_{1}(1-m_{1})}{s_{1}^{2}}}={\frac{0.3636(1-0.3636)}{0.019284}}=12.
$$
Hence $A_{0}=11$ and $a_{0i}=11m_{i}$ . For example $a_{01}=11\times0.3636=4$ .
(b) Posterior distribution is Dirichlet(4+25, 2+7, 2+6, 3+13). That is Dirichlet(29,9,8,16).
(c) Now $A_{1}=29+9+8+16=62$ .
The posterior means are
$$
{\frac{a_{1,i}}{A_{1}}}.
$$
The posterior variances are
$$
\frac{a_{1,i}}{(A_{1}+1)A_{1}}-\frac{a_{1,i}^{2}}{A_{1}^{2}(A_{1}+1)}.
$$
Posterior means:
$$
\begin{array}{l l l l}{{\theta_{11}:}}&{{\qquad\frac{29}{62}}}&{{=}}&{{\underline{{{0.4677}}}}}\\ {{\theta_{10}:}}&{{\qquad\frac{9}{62}}}&{{=}}&{{\underline{{{0.1452}}}}}\\ {{\theta_{01}:}}&{{\qquad\frac{8}{62}}}&{{=}}&{{\underline{{{0.1290}}}}}\\ {{\theta_{00}:}}&{{\qquad\frac{16}{62}}}&{{=}}&{{\underline{{{0.2581}}}}}\end{array}
$$
Posterior variances:
$$
\begin{array}{r c l c r}{{\theta_{11}:}}&{{\quad}}&{{\displaystyle\frac{29}{63\times62}-\frac{29^{2}}{62^{2}\times63}}}&{{=}}&{{\displaystyle\underline{{0.003952}}}}\\ {{\theta_{10}:}}&{{\null}}&{{\displaystyle\frac{9}{63\times62}-\frac{9^{2}}{62^{2}\times63}}}&{{=}}&{{\displaystyle\underline{{0.001970}}}}\\ {{\theta_{01}:}}&{{\null}}&{{\displaystyle\frac{8}{63\times62}-\frac{8^{2}}{62^{2}\times63}}}&{{=}}&{{\displaystyle\underline{{0.001784}}}}\\ {{\theta_{00}:}}&{{\null}}&{{\displaystyle\frac{16}{63\times62}-\frac{16^{2}}{62^{2}\times63}}}&{{=}}&{{\displaystyle\underline{{0.003039}}}}\end{array}
$$
(d) Posterior distribution for $\theta_{00}$ is beta(16, $62-16$). That is beta(16, 46).
Using the R command hpdbeta(0.95,16,46) gives $0.15325<\theta_{00}<0.36724$ .
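As a check on parts (c) and (d), the posterior means and variances follow directly from the Dirichlet parameters, and `qbeta` gives an equal-tailed (not hpd) interval which should be close to the hpd interval here:

```r
a1 <- c(29, 9, 8, 16)
A1 <- sum(a1)
a1/A1                            # posterior means
a1*(A1 - a1)/(A1^2*(A1 + 1))     # posterior variances
qbeta(c(0.025, 0.975), 16, 46)   # equal-tailed interval for theta_00
```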
(e) The log posterior density is (apart from a constant)
$$
\sum_{j=1}^{4}(a_{1,j}-1)\log\theta_{j}.
$$
Add $\lambda(\sum_{j=1}^{4}\theta_{j}-1)$ to this and differentiate with respect to $\theta_{j}$, then set the derivative equal to zero. This gives
$$
\frac{a_{1,j}-1}{\hat{\theta}_{j}}+\lambda=0
$$
which leads to
$$
{\widehat{\theta}}_{j}=-{\frac{(a_{1,j}-1)}{\lambda}}.
$$
However $\textstyle\sum_{j=1}^{4}\theta_{j}=1$ so
$$
-\sum_{j=1}^{4}{\frac{(a_{1,j}-1)}{\lambda}}=1
$$
so
$$
\lambda=-\sum_{j=1}^{4}(a_{1,j}-1)
$$
and
$$
\hat{\theta}_{j}=\frac{a_{1,j}-1}{\sum a_{1,k}-4}.
$$
Hence the posterior mode for $\theta_{00}$ is
$$
\hat{\theta}_{00}=\frac{15}{58}=\underline{{{0.2586}}}.
$$
The second derivatives of the log likelihood are
$$
\frac{\partial^{2}l}{\partial\theta_{j}^{2}}=-\frac{a_{1,j}-1}{\theta_{j}^{2}}\mathrm{~\\\\\\\~and~\\\\\\}\frac{\partial^{2}l}{\partial\theta_{j}\partial\theta_{k}}=0.
$$
Since the mixed partial second derivatives are zero, the information matrix is diagonal and the posterior variance of $\theta_{j}$ is approximately
$$
\frac{\hat{\theta}_{j}^{2}}{a_{1,j}-1}=\frac{(a_{1,j}-1)^{2}}{(a_{1,j}-1)(\sum a_{1,k}-4)^{2}}=\frac{a_{1,j}-1}{(\sum a_{1,k}-4)^{2}}.
$$
The posterior variance of $\theta_{00}$ is approximately
$$
\frac{15}{58^{2}}=\underline{{{0.00445898}}}.
$$
The approximate 95% hpd interval is $0.2586\pm1.96\sqrt{0.00445898}$. That is
$$
0.12772<\theta_{00}<0.38948
$$
This is a little wider than the exact interval.
(f) Approximation based on posterior mode and curvature: Posterior modes:
$$
\theta_{11}:\ \frac{28}{58}\qquad\quad\theta_{10}:\ \frac{8}{58}\qquad\quad\theta_{01}:\ \frac{7}{58}
$$
So, approx. posterior mean of $\mu$ is
$$
2\times{\frac{28}{58}}+{\frac{8}{58}}+{\frac{7}{58}}={\frac{71}{58}}={\underline{{1.22414}}}.
$$
Approx. posterior variances:
$$
\theta_{11}:\ \frac{28}{58^{2}}\qquad\quad\theta_{10}:\ \frac{8}{58^{2}}\qquad\quad\theta_{01}:\ \frac{7}{58^{2}}
$$
Since the (approx.) covariances are all zero, the approx. posterior variance of $\mu$ is
$$
4\times{\frac{28}{58^{2}}}+{\frac{8}{58^{2}}}+{\frac{7}{58^{2}}}={\frac{127}{58^{2}}}=0.0377527
$$
so approx. standard deviation is
$$
{\sqrt{0.0377527}}=\underline{{0.1943}}.
$$
N.B. There is an alternative exact calculation, as follows, which is also acceptable. Posterior mean:
$$
2\times{\frac{29}{62}}+{\frac{9}{62}}+{\frac{8}{62}}=\underline{{{1.20968}}}.
$$
Posterior covariances:
$$
-\frac{a_{1,j}a_{1,k}}{A_{1}^{2}(A_{1}+1)}
$$
$$
{\begin{array}{r c l}{\operatorname{var}(\mu)}&{=}&{4\operatorname{var}(\theta_{11})+\operatorname{var}(\theta_{10})+\operatorname{var}(\theta_{01})}\\ &&{+4\operatorname{covar}(\theta_{11},\theta_{10})+4\operatorname{covar}(\theta_{11},\theta_{01})+2\operatorname{covar}(\theta_{10},\theta_{01})}\\ &{=}&{4\left({\frac{29}{63\times62}}-{\frac{29^{2}}{63\times62^{2}}}\right)+\left({\frac{9}{63\times62}}-{\frac{9^{2}}{63\times62^{2}}}\right)+\left({\frac{8}{63\times62}}-{\frac{8^{2}}{63\times62^{2}}}\right)}\\ &&{-4\left({\frac{29\times9}{63\times62^{2}}}\right)-4\left({\frac{29\times8}{63\times62^{2}}}\right)-2\left({\frac{9\times8}{63\times62^{2}}}\right)}\\ &{=}&{{\frac{133}{63\times62}}-{\frac{5625}{63\times62^{2}}}=0.0108229.}\end{array}}
$$
So the standard deviation is 0.1040. The difference is quite big! | (Some of this question is also in Problems 4). I recorded the attendance of students at tutorials for a module. Suppose that we can, in some sense, regard the students as a sample from some population of students so that, for example, we can learn about the likely behaviour of next year’s students by observing this year’s. At the time I recorded the data we had had tutorials in Week 2 and Week 4. Let the probability that a student attends in both weeks be $\theta_{11}$ , the probability that a student attends in week 2 but not Week 4 be $\theta_{10}$ and so on. The data are as follows.
| Attendance | Probability | Observed frequency |
|---|---|---|
| Week 2 and Week 4 | $\theta_{11}$ | $n_{11}=25$ |
| Week 2 but not Week 4 | $\theta_{10}$ | $n_{10}=7$ |
| Week 4 but not Week 2 | $\theta_{01}$ | $n_{01}=6$ |
| Neither week | $\theta_{00}$ | $n_{00}=13$ |
Suppose that the prior distribution for $(\theta_{11},\theta_{10},\theta_{01},\theta_{00})$ is a Dirichlet distribution with density proportional to
$$
\theta_{11}^{3}\theta_{10}\theta_{01}\theta_{00}^{2}
$$
(a) Find the prior means and prior variances of $\theta_{11}$ ,$\theta_{10}$ ,$\theta_{01}$ ,$\theta_{00}$ .
(b) Find the posterior distribution.
(c) Find the posterior means and posterior variances of $\theta_{11},\ \theta_{10}$ ,$\theta_{01}$ ,$\theta_{00}$ .
(d) Using the R function hpdbeta which may be obtained from the Web page (or otherwise), find a $95\%$ posterior hpd interval, based on the exact posterior distribution, for θ00.
(e) Find an approximate $95\%$ hpd interval for $\theta_{00}$ using a normal approximation based on the posterior mode and the partial second derivatives of the log posterior density. Compare this with the exact hpd interval. Hint: To find the posterior mode you will need to introduce a Lagrange multiplier.
(f) The population mean number of attendances out of two is $\mu=2\theta_{11}+\theta_{10}+\theta_{01}$ . Find the posterior mean of $\mu$ and an approximation to the posterior standard deviation of $\mu$ . |
From the data
$$
\sum_{i=1}^{n}y_{i}=1028.9\qquad\qquad\qquad\sum_{i=1}^{n}y_{i}^{2}=53113.73
$$
$$
{\bar{y}}=51.445\qquad\quad s_{n}^{2}={\frac{1}{n}}\sum_{i=1}^{n}(y_{i}-{\bar{y}})^{2}={\frac{1}{20}}\left\{53113.73-{\frac{1028.9^{2}}{20}}\right\}=9.09848
$$
(a) Prior mean: $M_{0}=60.0$
Prior precision: $P_{0}=1/20^{2}=0.0025$
Data precision: $P_{d}=n\tau=20\times0.1=2$
Posterior precision: $P_{1}=P_{0}+P_{d}=2.0025$
Posterior mean:
$$
M_{1}={\frac{0.0025\times60.0+2\times51.445}{2.0025}}=51.4557
$$
Posterior std. dev.:
$$
{\sqrt{\frac{1}{2.0025}}}=0.706665
$$
$95\%$ hpd interval: $51.4557\pm1.96\times0.706665$ . That is
$$
50.0706<\mu<52.8408
$$
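A sketch of the part (a) calculation in R:

```r
P0 <- 1/20^2                    # prior precision: 0.0025
Pd <- 20*0.1                    # data precision: 2
P1 <- P0 + Pd                   # posterior precision: 2.0025
M1 <- (P0*60 + Pd*51.445)/P1    # posterior mean: 51.4557
M1 + c(-1, 1)*1.96/sqrt(P1)     # 95% hpd interval: (50.07, 52.84)
```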
(b) Prior $\tau\sim\mathrm{gamma}(d/2,\ d v/2)$ where
$$
\frac{d/2}{d v/2}=\frac{1}{v}=0.1
$$
so $v=10$ and
$$
\sqrt{\frac{d/2}{(d v/2)^{2}}}=\frac{1}{v}\sqrt{\frac{2}{d}}=0.05
$$
so $\sqrt{2/d}=0.5$ so $2/d=0.25$ so $d=8$ .
Hence
$$
\begin{array}{rcl}
d_{0}&=&8\\
v_{0}&=&10\\
c_{0}&=&0.025\\
m_{0}&=&60.0\\
m_{1}&=&\displaystyle\frac{c_{0}m_{0}+n\bar{y}}{c_{0}+n}=\frac{0.025\times60.0+1028.9}{20.025}=51.4557\\
c_{1}&=&c_{0}+n=20.025\\
d_{1}&=&d_{0}+n=28\\
r^{2}&=&(\bar{y}-m_{0})^{2}+s_{n}^{2}=(51.445-60.0)^{2}+9.09848=82.2865\\
v_{d}&=&\displaystyle\frac{c_{0}r^{2}+ns_{n}^{2}}{c_{0}+n}=9.18985\\
v_{1}&=&\displaystyle\frac{d_{0}v_{0}+nv_{d}}{d_{0}+n}=9.42132
\end{array}
$$
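These updates can be checked in R:

```r
n <- 20; ybar <- 51.445; sn2 <- 9.09848
c0 <- 0.025; m0 <- 60; d0 <- 8; v0 <- 10
c1 <- c0 + n; d1 <- d0 + n
m1 <- (c0*m0 + n*ybar)/c1                  # 51.4557
r2 <- (ybar - m0)^2 + sn2                  # 82.2865
vd <- (c0*r2 + n*sn2)/c1                   # 9.18985
v1 <- (d0*v0 + n*vd)/d1                    # 9.42132
m1 + c(-1, 1)*qt(0.975, d1)*sqrt(v1/c1)    # (50.05, 52.86)
```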
95% hpd interval:
$$
M_{1}\pm t_{28}\sqrt{\frac{v_{1}}{c_{1}}}
$$
That is
$$
M_{1}\pm2.048\times0.68591
$$
That is
$$50.051 < \mu < 52.860$$ | Samples are taken from twenty wagonloads of an industrial mineral and analysed. The amounts in ppm (parts per million) of an impurity are found to be as follows.
We regard these as independent samples from a normal distribution with mean $\mu$ and variance $\sigma^{2}=\tau^{-1}$ .
Find a 95% posterior hpd interval for $\mu$ under each of the following two conditions.
(a) The value of $\tau$ is known to be 0.1 and our prior distribution for $\mu$ is normal with mean 60.0 and standard deviation 20.0.
(b) The value of $\tau$ is unknown. Our prior distribution for $\tau$ is a gamma distribution with mean 0.1 and standard deviation 0.05. Our conditional prior distribution for $\mu$ given $\tau$ is normal with mean 60.0 and precision $0.025\tau$ (that is, standard deviation $\sqrt{40}\tau^{-1/2}$ ). |
(a) We have
$$
\begin{array}{r c l}{{P_{0}}}&{{=}}&{{0.01}}\\ {{P_{d}}}&{{=}}&{{n\tau=30\times0.04=1.2}}\\ {{P_{1}}}&{{=}}&{{0.01+1.2=1.21}}\\ {{M_{0}}}&{{=}}&{{20}}\\ {{\bar{y}}}&{{=}}&{{22.4}}\\ {{M_{1}}}&{{=}}&{{\frac{P_{0}M_{0}+P_{d}\bar{y}}{P_{1}}=\frac{0.01\times20+1.2\times22.4}{1.21}=22.380}}\end{array}
$$
Posterior:
$$
\mu\sim N(22.380,\ 1.21^{-1}).\qquad\mathrm{That~is}\qquad\mu\sim N(22.380,\ 0.8264).
$$
$95\%$ hpd interval: $22.380\pm1.96\sqrt{0.8264}$ . That is
$$
\underline{{20.60<\mu<24.16}}
$$
(b) We have
$$
\begin{array}{rcl}
d_{0}&=&2\\
v_{0}&=&10\\
c_{0}&=&0.1\\
m_{0}&=&20\\
s_{n}^{2}&=&\displaystyle\frac{1}{30}\left\{16193-\frac{672^{2}}{30}\right\}=38.0067\\
c_{1}&=&c_{0}+n=30.1\\
m_{1}&=&\displaystyle\frac{c_{0}m_{0}+n\bar{y}}{c_{0}+n}=\frac{0.1\times20+672}{30.1}=22.392\\
d_{1}&=&d_{0}+n=32\\
r^{2}&=&(\bar{y}-m_{0})^{2}+s_{n}^{2}=2.4^{2}+38.0067=43.7667\\
v_{d}&=&\displaystyle\frac{c_{0}r^{2}+ns_{n}^{2}}{c_{0}+n}=38.0258\\
v_{1}&=&\displaystyle\frac{d_{0}v_{0}+nv_{d}}{d_{0}+n}=36.27419
\end{array}
$$
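The same updates in R:

```r
n <- 30; ybar <- 672/30; sn2 <- (16193 - 672^2/30)/30
c0 <- 0.1; m0 <- 20; d0 <- 2; v0 <- 10
c1 <- c0 + n; d1 <- d0 + n
m1 <- (c0*m0 + n*ybar)/c1                  # 22.392
r2 <- (ybar - m0)^2 + sn2                  # 43.767
vd <- (c0*r2 + n*sn2)/c1
v1 <- (d0*v0 + n*vd)/d1                    # 36.274
m1 + c(-1, 1)*qt(0.975, d1)*sqrt(v1/c1)    # (20.16, 24.63)
```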
Marginal posterior distribution for $\tau$: $d_{1}v_{1}\tau=1160.77\tau\sim\chi_{32}^{2}$. Marginal posterior distribution for $\mu$:
$$
\frac{\mu-m_{1}}{\sqrt{v_{1}/c_{1}}}=\frac{\mu-22.392}{\sqrt{36.27419/30.1}}\sim t_{32}
$$
95% hpd interval:
$$
m_{1}\pm2.037{\sqrt{\frac{v_{1}}{c_{1}}}}.
$$
That is
$$
22.392\pm2.037\sqrt{\frac{36.27419}{30.1}}.
$$
That is
$$
\begin{array}{r}{20.16<\mu<24.63.}\end{array}
$$ | We observe a sample of 30 observations from a normal distribution with mean $\mu$ and precision $\tau$ . The data, $y_{1},\dotsc,y_{30}$ , are such that
$$
\sum_{i=1}^{30}y_{i}=672\qquad\qquad\qquad{\mathrm{and}}\qquad\qquad\sum_{i=1}^{30}y_{i}^{2}=16193.
$$
(a) Suppose that the value of $\tau$ is known to be 0.04 and that our prior distribution for $\mu$ is normal with mean 20 and variance 100. Find the posterior distribution of $\mu$ and evaluate a posterior 95% hpd interval for $\mu$ .
(b) Suppose that we have a gamma $,(1,10)$ prior distribution for $\tau$ and our conditional prior distribution for $\mu$ given $\tau$ is normal with mean 20 and variance $(0.1\tau)^{-1}$ . Find the marginal posterior distribution for $\tau$ , the marginal posterior distribution for $\mu$ and the marginal posterior 95% hpd interval for $\mu$ . |
Data:
$$
n=15\qquad\sum y=-284\qquad\sum y^{2}=6518
$$
$$
{\bar{y}}=-18.9333\qquad\quad s_{n}^{2}={\frac{1}{15}}\left\{6518-{\frac{284^{2}}{15}}\right\}={\frac{1140.9333}{15}}=76.06222
$$
Calculate posterior:
$$
\begin{array}{rcl}
d_{0}&=&0.7\\
v_{0}&=&2.02/0.7=2.8857\\
c_{0}&=&0.003\\
m_{0}&=&0\\
d_{1}&=&d_{0}+15=15.7\\
(\bar{y}-m_{0})^{2}&=&\bar{y}^{2}=358.4711\\
r^{2}&=&(\bar{y}-m_{0})^{2}+s_{n}^{2}=434.5333\\
v_{d}&=&\displaystyle\frac{c_{0}r^{2}+ns_{n}^{2}}{c_{0}+n}=76.1339\\
v_{1}&=&\displaystyle\frac{d_{0}v_{0}+nv_{d}}{d_{0}+n}=72.8681\\
c_{1}&=&c_{0}+n=15.003\\
m_{1}&=&\displaystyle\frac{c_{0}m_{0}+n\bar{y}}{c_{0}+n}=\frac{-284}{15.003}=-18.9295
\end{array}
$$
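A check of these updates in R:

```r
n <- 15; ybar <- -284/15; sn2 <- (6518 - 284^2/15)/15
c0 <- 0.003; m0 <- 0; d0 <- 0.7; v0 <- 2.02/0.7
c1 <- c0 + n; d1 <- d0 + n                 # 15.003, 15.7
m1 <- (c0*m0 + n*ybar)/c1                  # -18.9295
r2 <- (ybar - m0)^2 + sn2                  # 434.53
vd <- (c0*r2 + n*sn2)/c1                   # 76.134
v1 <- (d0*v0 + n*vd)/d1                    # 72.868
m1 + c(-1, 1)*qt(0.975, d1)*sqrt(v1/c1)    # (-23.61, -14.25), as in part (c)
```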
(a) Marginal posterior distribution for $\tau$ is $\mathrm{{gamma}}(d_{1}/2,\ d_{1}\boldsymbol{v}_{1}/2)$ . That is
$$
\mathrm{gamma}(7.85,\ 572.014).
$$
(Alternatively $d_{1}v_{1}\tau\sim\chi_{d_{1}}^{2}$ . That is $1144.025\tau\sim\chi_{15.7}^{2})$ .
(b) Marginal posterior for $\mu$ :
$$
{\frac{\mu-m_{1}}{\sqrt{v_{1}/c_{1}}}}={\frac{\mu+18.9295}{\sqrt{4.8569}}}\sim t_{15.7}
$$
(c) We can use R for the critical point of $t_{15.7}$: qt(0.975, 15.7) gives 2.1232.
$95\%$ interval: $-18.9295\pm2.1232{\sqrt{4.8569}}$ . That is
$$
-23.61<\mu<-14.25.
$$
(d) Comment, e.g., since zero is well outside the $95\%$ interval it seems clear that the drug reduces the blood pressure. | The following data come from the experiment reported by MacGregor et al. (1979). They give the supine systolic blood pressures (mm Hg) for fifteen patients with moderate essential hypertension. The measurements were taken immediately before and two hours after taking a drug.
| Patient | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Before | 210 | 169 | 187 | 160 | 167 | 176 | 185 | 206 |
| After | 201 | 165 | 166 | 157 | 147 | 145 | 168 | 180 |

| Patient | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|
| Before | 173 | 146 | 174 | 201 | 198 | 148 | 154 |
| After | 147 | 136 | 151 | 168 | 179 | 129 | 131 |
We are interested in the effect of the drug on blood pressure. We assume that, given parameters $\mu,~\tau$, the changes in blood pressure, from before to after, in the $n$ patients are independent and normally distributed with unknown mean $\mu$ and unknown precision $\tau$. The fifteen differences (after minus before) are as follows.

-9 -4 -21 -3 -20 -31 -17 -26 -26 -10 -23 -33 -19 -19 -23
Our prior distribution for $\tau$ is a gamma(0.35, 1.01) distribution. Our conditional prior distribution for $\mu$ given $\tau$ is a normal $N(0,\ [0.003\tau]^{-1})$ distribution.
(a) Find the marginal posterior distribution of $\tau$ .
(b) Find the marginal posterior distribution of $\mu$ .
(c) Find the marginal posterior 95% hpd interval for $\mu$ .
(d) Comment on what you can conclude about the effect of the drug. |
(a) The likelihood is
$$
\begin{array}{r c l}{{{\cal L}}}&{{=}}&{{\displaystyle\prod_{i=1}^{n}2\rho^{2}t_{i}\exp[-(\rho t_{i})^{2}]}}\\ {{}}&{{=}}&{{\displaystyle2^{n}\rho^{2n}\left(\prod_{i=1}^{n}t_{i}\right)\exp[-\rho^{2}\sum_{i=1}^{n}t_{i}^{2}]}}\end{array}
$$
The log likelihood is
$$
l=n\log2+2n\log\rho+\sum_{i=1}^{n}\log(t_{i})-\rho^{2}\sum_{i=1}^{n}t_{i}^{2}.
$$
So
$$
\frac{\partial l}{\partial\rho}=\frac{2n}{\rho}-2\rho\sum_{i=1}^{n}t_{i}^{2}
$$
and, setting this equal to zero at the mode $\hat{\rho}$ , we find
$$
{\begin{array}{r c l}{n}&{=}&{{\hat{\rho}}^{2}\displaystyle\sum_{i=1}^{n}t_{i}^{2}}\\ {{\hat{\rho}}^{2}}&{=}&{\displaystyle\frac{n}{\sum t_{i}^{2}}}\\ {{\hat{\rho}}}&{=}&{\displaystyle{\sqrt{\frac{n}{\sum t_{i}^{2}}}}={\sqrt{\frac{300}{3161776}}}=0.0097408.}\end{array}}
$$
The second derivative is
$$
{\frac{\partial^{2}l}{\partial\rho^{2}}}=-{\frac{2n}{\rho^{2}}}-2\sum_{i=1}^{n}t_{i}^{2}
$$
so the posterior variance is approximately
$$
\frac{1}{2(n/\hat{\rho}^{2}+\sum t_{i}^{2})}=\frac{1}{2(\sum t_{i}^{2}+\sum t_{i}^{2})}=7.90695\times10^{-8}.
$$
Our $95\%$ hpd interval is therefore $0.0097408\pm1.96\sqrt{7.90695\times10^{-8}}$ . That is
$$
0.009190<\rho<0.010292.
$$
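A sketch of these numbers in R:

```r
n <- 300; st2 <- 3161776
rho_hat <- sqrt(n/st2)             # 0.0097408
v <- 1/(4*st2)                     # approx. posterior variance: 7.907e-08
rho_hat + c(-1, 1)*1.96*sqrt(v)    # (0.009190, 0.010292)
```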
(b) The prior density is proportional to $\rho^{1}e^{-100\rho}$ so the log prior density is $\log\rho-100\rho$ plus a constant. The log posterior is therefore $g(\rho)$ plus a constant where
$$
g(\rho)=(2n+1)\log\rho-100\rho-\rho^{2}\sum_{i=1}^{n}t_{i}^{2}.
$$
So
$$
\frac{\partial g}{\partial\rho}=\frac{2n+1}{\rho}-100-2\rho\sum_{i=1}^{n}t_{i}^{2}.
$$
Setting this equal to zero at the mode $\hat{\rho}$ we find
$$
2\sum_{i=1}^{n}t_{i}^{2}\hat{\rho}^{2}+100\hat{\rho}-(2n+1)=0.
$$
This quadratic equation has two solutions but one is negative and $\rho$ must be positive so
$$
\hat{\rho}=\frac{-100+\sqrt{100^{2}+8(\sum t_{i}^{2})(2n+1)}}{4\sum t_{i}^{2}}=0.0097410.
$$
The second derivative is
$$
{\frac{\partial^{2}g}{\partial\rho^{2}}}=-{\frac{2n+1}{\rho^{2}}}-2\sum_{i=1}^{n}t_{i}^{2}
$$
so the posterior variance is approximately
$$
{\frac{1}{2[(n+1/2)/\hat{\rho}^{2}+\sum t_{i}^{2}]}}=7.900519\times10^{-8}.
$$
Our $95\%$ hpd interval is therefore $0.0097410\pm1.96\sqrt{7.900519\times10^{-8}}$ . That is
$$
0.009190<\rho<0.010292.
$$
So the prior makes no noticeable difference in this case. | The lifetimes of certain components are supposed to follow a Weibull distribution with known shape parameter $\alpha=2$ . The probability density function of the lifetime distribution is
$$
f(t)=\alpha\rho^{2}t\exp[-(\rho t)^{2}]
$$
for $0<t<\infty$ .
We will observe a sample of $n$ such lifetimes where $n$ is large.
(a) Assuming that the prior density is nonzero and reasonably flat so that it may be disregarded, find an approximation to the posterior distribution of $\rho$ . Find an approximate 95% hpd interval for $\rho$ when $n=300$ ,$\sum\log(t)=1305.165$ and $\sum t^{2}=3161776$ .
(b) Assuming that the prior distribution is a $\mathrm{gamma}(a,b)$ distribution, find an approximate $95\%$ hpd interval for $\rho$ , taking into account this prior, when $a=2$ ,$b=100$ ,$n=300$ ,$\sum\log(t)=1305.165$ and $\sum t^{2}=3161776$ . |
(a) $\lambda\sim\mathrm{gamma}(5,1)$ so $2\lambda\sim\mathrm{gamma}(5,1/2)$ , i.e. gamma(10/2, 1/2), i.e. $\chi_{10}^{2}$ .From tables, $95\%$ interval, $3.247<2\lambda<20.48$ . That is
$$
\underline{{1.6235}}<\lambda<10.24
$$
(b) Prior density prop. to $\lambda^{5-1}e^{-\lambda}$ .
Likelihood
$$
L=\prod_{i=1}^{45}\frac{e^{-\lambda}\lambda^{x_{i}}}{x_{i}!}=\frac{e^{-45\lambda}\lambda^{\sum x_{i}}}{\prod x_{i}!}\propto e^{-45\lambda}\lambda^{182}.
$$
Posterior density prop. to $\lambda^{187-1}e^{-46\lambda}$. This is a gamma(187, 46) distribution.
(c) Posterior mean:
$$
\frac{187}{46}=4.0652
$$
Posterior variance:
$$
\frac{187}{46^{2}}=0.088774
$$
Posterior sd:
$$
\sqrt{\frac{187}{46^{2}}}=0.29728
$$
$95\%$ interval $4.0652\pm1.96\times0.29728$ . That is
$$
3.4826<\lambda<4.6479
$$
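Parts (a) and (c) can be checked in R:

```r
qchisq(c(0.025, 0.975), 10)/2            # prior interval for lambda: (1.6235, 10.24)
187/46 + c(-1, 1)*1.96*sqrt(187/46^2)    # part (c): (3.4826, 4.6479)
```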
(d) Joint prob. of $\lambda,\;X=m$ :
$$
{\frac{46^{187}}{\Gamma(187)}}\lambda^{187-1}e^{-46\lambda}{\frac{\lambda^{m}e^{-\lambda}}{m!}}={\frac{46^{187}}{\Gamma(187)}}{\frac{\Gamma(187+m)}{47^{187+m}}}{\frac{1}{m!}}{\frac{47^{187+m}}{\Gamma(187+m)}}\lambda^{187+m-1}e^{-47\lambda}
$$
Integrate out $\lambda$ :
$$
{\begin{array}{r c l}{\operatorname*{Pr}(X=m)}&{=}&{{\frac{46^{187}}{47^{187+m}}}{\frac{\Gamma(187+m)}{\Gamma(187)m!}}}\\ &{=}&{{\frac{\left(186+m\right)!}{186!m!}}\left({\frac{46}{47}}\right)^{187}\left({\frac{1}{47}}\right)^{m}}\\ &{=}&{\left({\begin{array}{c}{186+m}\\ {m}\end{array}}\right)\left({\frac{46}{47}}\right)^{187}\left({\frac{1}{47}}\right)^{m}}\end{array}}
$$
(e) Joint probability (density) of $\theta$ ,$X=m$ :
$$
0.05e^{-0.05\theta}\frac{\theta^{m}e^{-\theta}}{m!}=\frac{0.05}{m!}\frac{\Gamma(1+m)}{1.05^{m+1}}\frac{1.05^{m+1}}{\Gamma(1+m)}\theta^{m+1-1}e^{-1.05\theta}
$$
Integrate out $\theta$ :
$$
\operatorname*{Pr}(X=m)={\frac{0.05}{1.05^{m+1}}}{\frac{\Gamma(1+m)}{m!}}=\left({\frac{0.05}{1.05}}\right)\left({\frac{1}{1.05}}\right)^{m}
$$
Log posterior probs:
“Ordinary”:
$$
\begin{array}{r c l}{\log(P_{1})}&{=}&{\log[\Gamma(187+10)]-\log[\Gamma(187)]-\log[\Gamma(11)]+187\log(46/47)+10\log(1/47)}\\ &{=}&{\log[\Gamma(197)]-\log[\Gamma(187)]-\log[\Gamma(11)]+187\log(46)-197\log(47)}\\ &{=}&{-5.079796}\end{array}
$$
```r
> lgamma(197) - lgamma(187) - lgamma(11) + 187*log(46) - 197*log(47)
[1] -5.079796
```
“Type 2”:
$$
\begin{array}{r}{\begin{array}{r c l}{\log(P_{2})}&{=}&{\log(0.05/1.05)+10\log(1/1.05)}\\ &{=}&{\log(0.05)-11\log(1.05)}\\ &{=}&{-3.532424}\end{array}}\end{array}
$$
Hence the predictive probabilities are as follows.
$$
{\begin{array}{l l}{{\mathrm{^{\circ}O r d i n a r y^{\circ}}}\colon}&{P_{1}=\exp(-5.079796)=0.006221178}\\ {{\mathrm{^{\circ}T y p e~2^{\circ}}}\colon}&{P_{2}=\exp(-3.532424)=0.02923396}\end{array}}
$$
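As the hint in the question suggests, the same two probabilities come directly from `dnbinom`, since both predictive distributions are negative binomial (with `size` the gamma shape and `prob` = rate/(rate + 1)):

```r
p1 <- dnbinom(10, size = 187, prob = 46/47)      # "ordinary": 0.006221178
p2 <- dnbinom(10, size = 1, prob = 0.05/1.05)    # "type 2": 0.02923396
```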
Hence the posterior probability that this is an ordinary customer is
$$
\frac{9\times0.006221178}{9\times0.006221178+1\times0.02923396}=\underline{{0.65698}}
$$ | Given the value of $\lambda$ , the number $X_{i}$ of transactions made by customer $i$ at an online store in a year has a $\mathrm{Poisson}(\lambda)$ distribution, with $X_{i}$ independent of $X_{j}$ for $i\neq j$ . The value of $\lambda$ is unknown. Our prior distribution for $\lambda$ is a gamma(5,1) distribution.
We observe the numbers of transactions in a year for 45 customers and
$$
\sum_{i=1}^{45}x_{i}=182.
$$
(a) Using a $\chi^{2}$ table (i.e. without a computer) find the lower $2.5\%$ point and the upper $2.5\%$ point of the prior distribution of $\lambda$ .(These bound a $95\%$ symmetric prior credible interval).
(b) Find the posterior distribution of $\lambda$ .
(c) Using a normal approximation to the posterior distribution, based on the posterior mean and variance, find a $95\%$ symmetric posterior credible interval for $\lambda$.
(d) Find an expression for the posterior predictive probability that a customer makes $m$ transactions in a year.
(e) As well as these “ordinary customers,” we believe that there is a second group of individuals. The number of transactions in a year for a member of this second group has, given $\theta$ , a Poisson $(\theta)$ distribution and our beliefs about the value of $\theta$ are represented by a gamma(1,0.05) distribution. A new individual is observed who makes 10 transactions in a year. Given that our prior probability that this is an ordinary customer is 0.9, find our posterior probability that this is an ordinary customer. Hint: You may find it best to calculate the logarithms of the predictive probabilities before exponentiating these. For this you might find the R function lgamma useful. It calculates the log of the gamma function. Alternatively it is possible to do the calculation using the R function dnbinom.
(N.B. In reality a slightly more complicated model is used in this type of application). |
Prior:
$$
\tau\sim\mathrm{gamma}\left(\frac{4}{2},\ \frac{18}{2}\right)\quad\mathrm{so}\quad d_{0}=4,\ d_{0}v_{0}=18,\ v_{0}=4.5.
$$
$$
\mu\mid\tau\sim N(500,~(0.005\tau)^{-1})\quad\mathrm{so}\quad m_{0}=500,~c_{0}=0.005.
$$
Data:
$$
\sum y=9857,\;\;\;\;\;n=20,\;\;\;\;\bar{y}=\frac{9857}{20}=492.85
$$
$$
\sum y^{2}=4858467,\quad s_{n}^{2}={\frac{1}{n}}\left\{\sum y^{2}-n\bar{y}^{2}\right\}={\frac{444.55}{20}}=22.2275
$$
Posterior:
$$
{\begin{array}{r c l}{c_{1}}&{=}&{c_{0}+n=20.005}\\ {m_{1}}&{=}&{{\frac{c_{0}m_{0}+n{\bar{y}}}{c_{0}+n}}=492.8518}\\ {d_{1}}&{=}&{d_{0}+n=24}\\ {r^{2}}&{=}&{({\bar{y}}-m_{0})^{2}+s_{n}^{2}=73.35}\\ {v_{d}}&{=}&{{\frac{c_{0}r^{2}+n s_{n}^{2}}{c_{0}+n}}=22.2403}\\ {v_{1}}&{=}&{{\frac{d_{0}v_{0}+n v_{d}}{d_{0}+n}}=19.2836}\end{array}}
$$
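These updates can be verified in R:

```r
y <- c(496, 506, 495, 491, 488, 492, 482, 495, 493, 496,
       487, 490, 493, 495, 492, 498, 491, 493, 495, 489)
n <- 20; ybar <- mean(y); sn2 <- mean((y - ybar)^2)
c0 <- 0.005; m0 <- 500; d0 <- 4; v0 <- 4.5
c1 <- c0 + n; d1 <- d0 + n
m1 <- (c0*m0 + n*ybar)/c1                  # 492.8518
r2 <- (ybar - m0)^2 + sn2                  # 73.35
vd <- (c0*r2 + n*sn2)/c1                   # 22.2403
v1 <- (d0*v0 + n*vd)/d1                    # 19.2836
pt((495 - m1)/sqrt(v1/c1), d1)             # part (a): 0.9807
pt((500 - m1)/sqrt(v1*(c1 + 1)/c1), d1)    # part (b): 0.9374
```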
(a)
(2 marks)
$$
\frac{\mu-492.8518}{\sqrt{19.2836/20.005}}\sim t_{24}
$$
$$
{\begin{array}{l l l}{\operatorname*{Pr}(\mu<495)}&{=}&{\operatorname*{Pr}\left({\frac{\mu-492.8518}{\sqrt{19.2836/20.005}}}<{\frac{495-492.8518}{\sqrt{19.2836/20.005}}}\right)}\\ &{=}&{\operatorname*{Pr}(t_{24}<2.1880)=\underline{{0.9807}}}\end{array}}
$$
(E.g. use R: pt(2.1880, 24).)
(b)
(3 marks)
$$
c_{p}=\frac{c_{1}}{c_{1}+1}=\frac{20.005}{21.005}=0.9524
$$
$$
\frac{Y-492.8518}{\sqrt{19.2836/0.9524}}\sim t_{24}
$$
$$
\begin{array}{l l l}{\operatorname*{Pr}(Y<500)}&{=}&{\operatorname*{Pr}\left({\frac{Y-492.8518}{\sqrt{19.2836/0.9524}}}<{\frac{500-492.8518}{\sqrt{19.2836/0.9524}}}\right)}\\ &{=}&{\operatorname*{Pr}(t_{24}<1.5886)=\underline{{0.9374}}}\end{array}
$$
(E.g. use R: pt(1.5886, 24).)
496 506 495 491 488 492 482 495 493 496
487 490 493 495 492 498 491 493 495 489
Assume that, given the values of parameters $\mu,\ \tau$, the weights are independent and each has a normal $N(\mu,\,\tau^{-1})$ distribution.
The values of $\mu$ and $\tau$ are unknown. Our prior distribution is as follows. We have a gamma(2, 9) prior distribution for $\tau$ and a $N(500,\ (0.005\tau)^{-1})$ conditional prior distribution for $\mu$ given $\tau$ .
(a) Find the posterior probability that $\mu<495$ .
(b) Find the posterior predictive probability that a new packet of rice will contain less than 500g of rice |
(a) Likelihood:
$$
L=\prod_{i=1}^{8}\frac{e^{-\lambda_{i}}\lambda_{i}^{y_{i}}}{y_{i}!}
$$
Log likelihood:
$$
\begin{array}{r c l}{l}&{=}&{-\displaystyle\sum\lambda_{i}+\sum y_{i}\log\lambda_{i}-\sum\log(y_{i}!)}\\ &{=}&{\displaystyle-\sum\lambda_{i}+\sum y_{i}(\alpha+\beta t_{i})-\sum\log(y_{i}!)}\end{array}
$$
Derivatives:
$$
\begin{array}{r c l}{\displaystyle\frac{\partial\lambda_{i}}{\partial\alpha}}&{=}&{\displaystyle\frac{\partial}{\partial\alpha}e^{\alpha+\beta t_{i}}=\lambda_{i}}\\ {\displaystyle\frac{\partial\lambda_{i}}{\partial\beta}}&{=}&{\displaystyle\frac{\partial}{\partial\beta}e^{\alpha+\beta t_{i}}=\lambda_{i}t_{i}}\\ {\displaystyle\frac{\partial l}{\partial\alpha}}&{=}&{\displaystyle-\sum\frac{\partial\lambda_{i}}{\partial\alpha}+\sum y_{i}=-\sum\lambda_{i}+\sum y_{i}=-\sum(\lambda_{i}-y_{i})}\\ {\displaystyle\frac{\partial l}{\partial\beta}}&{=}&{\displaystyle-\sum\frac{\partial\lambda_{i}}{\partial\beta}+\sum y_{i}t_{i}=-\sum\lambda_{i}t_{i}+\sum y_{i}t_{i}=-\sum t_{i}(\lambda_{i}-y_{i})}\end{array}
$$
At the maximum
$$
\frac{\partial l}{\partial\alpha}=\frac{\partial l}{\partial\beta}=0.
$$
Hence $\hat{\alpha}$ and $\hat{\beta}$ satisfy the given equations. Calculations in R:
```r
> y <- c(10, 13, 24, 17, 20, 22, 20, 23)
> t <- seq(3, 24, 3)
> lambda <- exp(2.552 + 0.02638*t)
> sum(lambda - y)
[1] 0.001572513
> sum(t*(lambda - y))
[1] -0.003254096
```
These values seem close to zero but let us try a small change to the parameter values:
```r
> lambda <- exp(2.55 + 0.0264*t)
> sum(lambda - y)
[1] -0.2522928
> sum(t*(lambda - y))
[1] -3.606993
```
The results are now much further from zero, suggesting that the given values are very close to the solutions.
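As a further cross-check (not part of the original solution), the same estimates can be obtained from R's built-in glm, which fits exactly this Poisson log-linear model:

```r
> fit <- glm(y ~ t, family = poisson)   # log link is the default for poisson
> coef(fit)                             # should agree with 2.552 and 0.02638
> vcov(fit)                             # should approximate the matrix V in part (b)
```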
(b) Second derivatives:
$$
{\begin{array}{r c l}{{\displaystyle{\frac{\partial^{2}l}{\partial\alpha^{2}}}}}&{=}&{-\sum{\displaystyle{\frac{\partial\lambda_{i}}{\partial\alpha}}}=-\sum\lambda_{i}}\\ {{\displaystyle{\frac{\partial^{2}l}{\partial\beta^{2}}}}}&{=}&{-\sum t_{i}{\displaystyle{\frac{\partial\lambda_{i}}{\partial\beta}}}=-\sum t_{i}^{2}\lambda_{i}}\\ {{\displaystyle{\frac{\partial^{2}l}{\partial\alpha\partial\beta}}}}&{=}&{-\sum{\displaystyle{\frac{\partial\lambda_{i}}{\partial\beta}}}=-\sum t_{i}\lambda_{i}}\end{array}}
$$
Variance matrix:
$$
V=-\left(\begin{array}{c c}{{\frac{\partial^{2}l}{\partial\alpha^{2}}}}&{{\frac{\partial^{2}l}{\partial\alpha\partial\beta}}}\\ {{\frac{\partial^{2}l}{\partial\alpha\partial\beta}}}&{{\frac{\partial^{2}l}{\partial\beta^{2}}}}\end{array}\right)^{-1}
$$
Numerically using R:
```r
> lambda <- exp(2.552 + 0.02638*t)
> d2 <- matrix(nrow = 2, ncol = 2)
> d2[1,1] <- -sum(lambda)
> d2[1,2] <- -sum(t*lambda)
> d2[2,1] <- -sum(t*lambda)
> d2[2,2] <- -sum((t^2)*lambda)
> V <- -solve(d2)
> V
             [,1]          [,2]
[1,]  0.038194535 -0.0021361807
[2,] -0.002136181  0.0001449430
```
The mean of $\alpha+24\beta$ is $2.552+24\times0.02638=3.18512$.
The variance is $0.038194535+24^{2}\times0.0001449430+2\times24\times(-0.0021361807)=0.01914501$.
Alternative matrix-based calculation in R:
(Here $m$ is the coefficient vector $(1,\ 24)$.)
```r
> m <- c(1, 24)
> dim(m) <- c(1, 2)
> v <- m %*% V %*% t(m)
> v
           [,1]
[1,] 0.01914501
```
The approximate $95\%$ interval is
$$
3.18512\pm1.96{\sqrt{0.01914501}}
$$
That is
$2.9139<\alpha+24\beta<3.4563$
(5 marks)
(c) The interval for $\lambda_{24}$ is
$$
e^{2.9139}<e^{\alpha+24\beta}<e^{3.4563}.
$$
That is
$18.429<\lambda_{24}<31.700$ | A machine which is used in a manufacturing process jams from time to time. It is thought that the frequency of jams might change over time as the machine becomes older. Once every three months the number of jams in a day is counted. The results are as follows.
$$
{\begin{array}{l}{{\mathrm{Observation~}}i}\\ {{\mathrm{Age~of~machine~}}t_{i}\ {\mathrm{(months)}}}\\ {{\mathrm{Number~of~jams~}}y_{i}}\end{array}}\left|{\begin{array}{l l l l l l l l l}{{1}}&{{2}}&{{3}}&{{4}}&{{5}}&{{6}}&{{7}}&{{8}}\\ {{3}}&{{6}}&{{9}}&{{12}}&{{15}}&{{18}}&{{21}}&{{24}}\\ {{10}}&{{13}}&{{24}}&{{17}}&{{20}}&{{22}}&{{20}}&{{23}}\end{array}}\right|
$$
Our model is as follows. Given the values of two parameters $\alpha,~\beta$, the number of jams $y_{i}$ on a day when the machine has age $t_{i}$ months has a Poisson distribution
$$
y_{i}\sim\operatorname{Poisson}(\lambda_{i})
$$
where
$$
\log_{e}(\lambda_{i})=\alpha+\beta t_{i}.
$$
Assume that the effect of our prior distribution on the posterior distribution is negligible and that large-sample approximations may be used.
(a) Let the values of $\alpha$ and $\beta$ which maximise the likelihood be $\hat{\alpha}$ and $\hat{\beta}$ . Assuming that the likelihood is differentiable at its maximum, show that these satisfy the following two equations
$$
\begin{array}{r c l}{{\displaystyle\sum_{i=1}^{8}(\hat{\lambda}_{i}-y_{i})}}&{{=}}&{{0}}\\ {{\displaystyle\sum_{i=1}^{8}t_{i}(\hat{\lambda}_{i}-y_{i})}}&{{=}}&{{0}}\end{array}
$$
where
$$
\log_{e}(\hat{\lambda}_{i})=\hat{\alpha}+\hat{\beta}t_{i}
$$
and show that these equations are satisfied (to a good approximation) by
$$
\hat{\alpha}=2.552\;\;\;\;\;\mathrm{and}\;\;\;\;\;\hat{\beta}=0.02638.
$$
(You may use R to help with the calculations, but show your commands).
You may assume from now on that these values maximise the likelihood.
(b) Find an approximate symmetric 95% posterior interval for $\alpha+24\beta$.
(c) Find an approximate symmetric 95% posterior interval for $\exp(\alpha+24\beta)$, the mean jam-rate per day at age 24 months.
(You may use R to help with the calculations, but show your commands). |