Introduction
In this last part, we will continue looking at Brownian motions and Gaussian processes. We will cover some applications of what we have learned so far.
Example: Counting Zeroes
How many zeroes does $B_t$ have in the interval $(0, s)$?
If $L_s$ is the last zero in $(0, s)$, we saw that, $$ \frac{L_s}{s} \sim \mathrm{Beta}\left(\frac{1}{2}, \frac{1}{2}\right), $$ which means that, with probability one, there exists a zero in the interval $(0, s)$ and for the last zero $L_s$ we have $0 < L_s < s$.
Repeating the argument, we have that $0 < L_{L_s} < L_s$ with probability one, and thus there exists another zero in $(0, L_s) \subset (0, s)$.
The conclusion is that, with probability one, there is an infinite number of zeroes in $(0, s)$ for any $s > 0$.
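The infinitude of zeroes cannot be verified by simulation, but a discretized path already hints at how zeroes accumulate. The following R sketch (grid size, seed, and variable names are our own choices) counts sign changes of a simulated path on $(0, s)$:

```r
set.seed(1)
s <- 1                                              # right endpoint of the interval
n <- 1e5                                            # number of grid points
B <- cumsum(rnorm(n, mean = 0, sd = sqrt(s / n)))   # Brownian path on (0, s]
# Each sign change between consecutive grid points indicates at least one zero.
sum(diff(sign(B)) != 0)
```

Refining the grid (increasing n) typically increases this count, consistent with there being infinitely many zeroes in $(0, s)$.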
Stock Options
Intuition: Stock Options
A (European call) stock option is the right, but not the obligation, to buy a stock at a given time $t$ in the future for a given price $K$ (the strike price).
How much can you expect to earn from a stock option at that future time?
If the stock price is modeled as a geometric Brownian motion $G_t = G_0 e^{\mu t + \sigma B_t}$, we get that (we will derive this), $$ \mathbb{E}[\max(G_t - K, 0)] = G_0 e^{t(\mu + \frac{\sigma^2}{2})} P\left(B_1 > \frac{\beta - \sigma t}{\sqrt{t}}\right) - K P\left(B_1 > \frac{\beta}{\sqrt{t}}\right), $$ where $\beta = \frac{\log(K / G_0) - \mu t}{\sigma}$.
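As a sketch, the formula translates directly into R; the function name expected_payoff and its argument names are our own choices:

```r
# Expected payoff E[max(G_t - K, 0)] for G_t = G_0 * exp(mu * t + sigma * B_t).
expected_payoff <- function(G0, K, mu, sigma, t) {
  beta <- (log(K / G0) - mu * t) / sigma
  G0 * exp(t * (mu + sigma^2 / 2)) * pnorm((beta - sigma * t) / sqrt(t), lower.tail = FALSE) -
    K * pnorm(beta / sqrt(t), lower.tail = FALSE)
}
```

Here pnorm(x, lower.tail = FALSE) computes $P(B_1 > x)$ for a standard normal $B_1$.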
Derivation
Firstly, we need to prove the algebraic identity, $$ e^{\sigma x} \mathcal{N}(x; 0, t) = e^{\frac{\sigma^2 t}{2}} \mathcal{N}(x; \sigma t, t). $$
Proof
We have, $$ \begin{align*} e^{\sigma x} \mathcal{N}(x; 0, t) & = \frac{1}{\sqrt{2 \pi t}} \exp\left(-\frac{x^2}{2 t} + \sigma x\right) \newline & = \frac{1}{\sqrt{2 \pi t}} \exp\left(-\frac{(x - \sigma t)^2}{2 t} + \frac{\sigma^2 t}{2}\right) \newline & = e^{\frac{\sigma^2 t}{2}} \mathcal{N}(x; \sigma t, t). \ _\blacksquare \end{align*} $$ Then, defining $\beta = \frac{\log(K / G_0) - \mu t}{\sigma}$ we get, $$ \begin{align*} \mathbb{E}[\max(G_t - K, 0)] & \coloneqq \mathbb{E}[\max(G_0 e^{\mu t + \sigma B_t} - K, 0)] \newline & = \int_{-\infty}^{\infty} \max(G_0 e^{\mu t + \sigma x} - K, 0) \ \mathcal{N}(x; 0, t) \ dx \newline & = \int_{\beta}^{\infty} (G_0 e^{\mu t + \sigma x} - K) \ \mathcal{N}(x; 0, t) \ dx \newline & = G_0 e^{\mu t} \int_{\beta}^{\infty} e^{\sigma x} \ \mathcal{N}(x; 0, t) \ dx - K \int_{\beta}^{\infty} \mathcal{N}(x; 0, t) \ dx \newline & = G_0 e^{t(\mu + \frac{\sigma^2}{2})} \int_{\beta}^{\infty} \mathcal{N}(x; \sigma t, t) \ dx - K \int_{\beta}^{\infty} \mathcal{N}(x; 0, t) \ dx \newline & = G_0 e^{t(\mu + \frac{\sigma^2}{2})} P\left(B_1 > \frac{\beta - \sigma t}{\sqrt{t}}\right) - K P\left(B_1 > \frac{\beta}{\sqrt{t}}\right). \ _\blacksquare \end{align*} $$
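As a sanity check of the closed form, one can compare it with a Monte Carlo estimate of $\mathbb{E}[\max(G_t - K, 0)]$ obtained by sampling $B_t \sim \mathcal{N}(0, t)$ directly. The sketch below assumes the expected_payoff function from earlier and uses arbitrary parameter values:

```r
set.seed(1)
G0 <- 100; K <- 110; mu <- 0.05; sigma <- 0.2; t <- 2
Bt <- rnorm(1e6, mean = 0, sd = sqrt(t))          # samples of B_t ~ N(0, t)
mean(pmax(G0 * exp(mu * t + sigma * Bt) - K, 0))  # Monte Carlo estimate
expected_payoff(G0, K, mu, sigma, t)              # closed-form value
```

The two numbers should agree up to Monte Carlo error.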
Example
A stock price is modeled with $G_0 = 67.3$, $\mu = 0.08$, and $\sigma = 0.3$. What is the expected payoff from an option to buy the stock at 100 in 3 years?
Solution
We want to compute,
$$
\begin{align*}
\mathbb{E}[\max(G_3 - 100, 0)] & = 67.3 e^{3(0.08 + \frac{0.3^2}{2})} P\left(B_1 > \frac{\beta - 0.3 \cdot 3}{\sqrt{3}}\right) - 100 P\left(B_1 > \frac{\beta}{\sqrt{3}}\right) \newline
\end{align*}
$$
where,
$$
\beta = \frac{\log(100 / 67.3) - 0.08 \cdot 3}{0.3} \approx 0.520.
$$
Thus, we can compute this in R as 67.3 * exp(3 * (0.08 + 0.3^2 / 2)) * (1 - pnorm((0.520 - 0.3 * 3) / sqrt(3), mean = 0, sd = 1)) - 100 * (1 - pnorm(0.520 / sqrt(3), mean = 0, sd = 1)).
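Equivalently, using the expected_payoff sketch from earlier (which computes $\beta$ without rounding), the whole calculation is:

```r
expected_payoff(67.3, 100, 0.08, 0.3, 3)  # about 19.3
```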
Martingales
Definition: Martingale
A stochastic process $\{Y_t\}_{t \geq 0}$ is a martingale if for $t \geq 0$,
- $\mathbb{E}[Y_t \mid Y_r, 0 \leq r \leq s] = Y_s$ for $0 \leq s \leq t$.
- $\mathbb{E}[|Y_t|] < \infty$.

A Brownian motion $B_t$ is a martingale.
Proof
We have, $$ \begin{align*} \mathbb{E}[B_t \mid B_r, 0 \leq r \leq s] & = \mathbb{E}[B_t - B_s + B_s \mid B_r, 0 \leq r \leq s] \newline & = \mathbb{E}[B_t - B_s \mid B_r, 0 \leq r \leq s] + \mathbb{E}[B_s \mid B_r, 0 \leq r \leq s] \newline & = 0 + B_s \newline & = B_s, \end{align*} $$ where we used that the increment $B_t - B_s$ is independent of $\{B_r\}_{0 \leq r \leq s}$ and has mean zero. Further, $$ \begin{align*} \mathbb{E}[|B_t|] & = \int_{-\infty}^{\infty} |x| \ \mathcal{N}(x; 0, t) \ dx \newline & = 2 \int_{0}^{\infty} x \ \mathcal{N}(x; 0, t) \ dx \newline & = 2 \int_{0}^{\infty} x \ \frac{1}{\sqrt{2 \pi t}} \exp\left(-\frac{x^2}{2 t}\right) \ dx \newline & = 2 \cdot \frac{1}{\sqrt{2 \pi t}} \cdot t \newline & = \sqrt{\frac{2 t}{\pi}} < \infty. \ _\blacksquare \newline \end{align*} $$
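As a quick numerical check of the last computation (the time point and sample size below are arbitrary):

```r
set.seed(1)
t <- 4
mean(abs(rnorm(1e6, mean = 0, sd = sqrt(t))))  # Monte Carlo estimate of E|B_t|
sqrt(2 * t / pi)                               # closed-form value, about 1.596
```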
Further, one can define a martingale with respect to other stochastic processes.
$\{Y_t\}_{t \geq 0}$ is a martingale with respect to $\{X_t\}_{t \geq 0}$ if for $t \geq 0$,
- $\mathbb{E}[Y_t \mid X_r, 0 \leq r \leq s] = Y_s$ for $0 \leq s \leq t$.
- $\mathbb{E}[|Y_t|] < \infty$.
Example
Let $Y_t \coloneqq B_t^2 - t$ for $t \geq 0$. Then, $\{Y_t\}_{t \geq 0}$ is a martingale with respect to the Brownian motion $\{B_t\}_{t \geq 0}$.
Proof
We have, $$ \begin{align*} \mathbb{E}[Y_t \mid B_r, 0 \leq r \leq s] & = \mathbb{E}[B_t^2 - t \mid B_r, 0 \leq r \leq s] \newline & = \mathbb{E}[(B_t - B_s + B_s)^2 - t \mid B_r, 0 \leq r \leq s] \newline & = \mathbb{E}[(B_t - B_s)^2 \mid B_r, 0 \leq r \leq s] + 2 B_s \mathbb{E}[B_t - B_s \mid B_r, 0 \leq r \leq s] + B_s^2 - t \newline & = (t - s) + 0 + B_s^2 - t \newline & = B_s^2 - s \newline & = Y_s. \end{align*} $$ Further, $$ \begin{align*} \mathbb{E}[|Y_t|] & = \mathbb{E}[|B_t^2 - t|] \newline & \leq \mathbb{E}[B_t^2] + t \newline & = \mathrm{Var}(B_t) + (\mathbb{E}[B_t])^2 + t \newline & = t + 0 + t \newline & = 2 t < \infty. \newline \end{align*} $$ Thus, $\{Y_t\}_{t \geq 0}$ is a martingale with respect to $\{B_t\}_{t \geq 0}$. $_\blacksquare$
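This computation can be illustrated with a small simulation (our own sketch, with arbitrary values of $s$, $t$, and the observed $B_s$): conditional on $B_s = b$, draw $B_t = b + (B_t - B_s)$ with the increment distributed as $\mathcal{N}(0, t - s)$ and average $B_t^2 - t$.

```r
set.seed(1)
s <- 1; t <- 3; b <- 0.7                          # suppose we observed B_s = b
Bt <- b + rnorm(1e6, mean = 0, sd = sqrt(t - s))  # B_t given B_s = b
mean(Bt^2 - t)                                    # should be close to b^2 - s = -0.51
```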
Derivation: Geometric Brownian Motion can be a Martingale
Let $G_t \coloneqq G_0 e^{\mu t + \sigma B_t}$ be a geometric Brownian motion. We can derive that, $$ \begin{align*} \mathbb{E}[G_t \mid B_r, 0 \leq r \leq s] & \coloneqq \mathbb{E}[G_0 e^{\mu t + \sigma B_t} \mid B_r, 0 \leq r \leq s] \newline & = \mathbb{E}[G_0 e^{\mu(t - s) + \sigma (B_t - B_s)} e^{\mu s + \sigma B_s} \mid B_r, 0 \leq r \leq s] \newline & = \mathbb{E}[G_{t - s}] e^{\mu s + \sigma B_s} \newline & = G_0 e^{(t - s)(\mu + \frac{\sigma^2}{2})} e^{\mu s + \sigma B_s} \newline & = G_s e^{(t - s)(\mu + \frac{\sigma^2}{2})}, \newline \end{align*} $$ where we used that $B_t - B_s$ is independent of $\{B_r\}_{0 \leq r \leq s}$ and has the same distribution as $B_{t - s}$. We see that $G_t$ is a martingale with respect to $B_t$ if and only if $\mu + \frac{\sigma^2}{2} = 0$, i.e., $\mu = -\frac{\sigma^2}{2}$.
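The condition $\mu = -\frac{\sigma^2}{2}$ can also be seen numerically: with that drift, $\mathbb{E}[G_t]$ stays at $G_0$ for every $t$. A minimal sketch with arbitrary parameter values:

```r
set.seed(1)
G0 <- 1; sigma <- 0.4
mu <- -sigma^2 / 2                                 # the martingale drift
for (t in c(1, 5, 10)) {
  Gt <- G0 * exp(mu * t + sigma * rnorm(1e6, mean = 0, sd = sqrt(t)))
  print(mean(Gt))                                  # each close to G0 = 1
}
```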
Example: Discounting Future Values of Stocks & The Black-Scholes Formula for Option Pricing
When making investments, there is a range of choices, some of which are called “risk-free”. Such investments pay a fixed interest rate.
When interest is compounded frequently, a reasonable model is that an investment of $G_0$ has value $G_0 e^{rt}$ after time $t$, where $r$ is the “risk-free” rate of return.
A common way to take this alternative into account is to “discount” the value of all other investments by the factor $e^{-rt}$.
For example, the discounted value of a stock can be modeled as, $$ e^{-rt} G_t = e^{-rt} G_0 e^{\mu t + \sigma B_t} = G_0 e^{(\mu - r) t + \sigma B_t}. $$ A possible assumption about the trend $\mu$ of a stock price is that the discounted value behaves as a martingale with respect to the Brownian motion $B_t$.
Thus, applying the martingale condition from the previous derivation to the drift $\mu - r$, we get, $$ \mu - r + \frac{\sigma^2}{2} = 0, \quad \text{i.e.,} \quad \mu = r - \frac{\sigma^2}{2}. $$
The Black-Scholes formula for option pricing is based on,
- Assuming the discounted stock price is a martingale with respect to the Brownian motion.
- Using discounting when computing the value of the option.
From this, we get, $$ e^{-rt} \mathbb{E}[\max(G_t - K, 0)] = G_0 P\left(B_1 > \frac{\beta - \sigma t}{\sqrt{t}}\right) - K e^{-rt} P\left(B_1 > \frac{\beta}{\sqrt{t}}\right), $$ where $\beta = \frac{\log(K / G_0) - (r - \frac{\sigma^2}{2}) t}{\sigma}$.
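Written out in R, the discounted option value reads as follows; this is a sketch, and the function name black_scholes_call is our own:

```r
# Discounted expected payoff e^{-rt} E[max(G_t - K, 0)] under the
# martingale assumption mu = r - sigma^2 / 2 (Black-Scholes call value).
black_scholes_call <- function(G0, K, r, sigma, t) {
  beta <- (log(K / G0) - (r - sigma^2 / 2) * t) / sigma
  G0 * pnorm((beta - sigma * t) / sqrt(t), lower.tail = FALSE) -
    K * exp(-r * t) * pnorm(beta / sqrt(t), lower.tail = FALSE)
}

black_scholes_call(67.3, 100, 0.08, 0.3, 3)  # the earlier example, priced with r = 0.08
```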