Overview

Part 13 - Brownian Motion and Gaussian Processes I

MVE550
Date: December 11, 2025
Last modified: December 12, 2025
12 min read

Introduction

In this part, we will introduce Brownian motion and Gaussian processes, which are fundamental concepts in stochastic processes and have wide applications in various fields such as physics, finance, and machine learning.

So far, we have looked at discrete-time, discrete-state-space processes (discrete Markov chains and branching processes). Then we moved to discrete-time, continuous-state-space processes (in connection with MCMC). Most recently, we studied continuous-time, discrete-state-space processes (Poisson processes and, more generally, continuous-time Markov chains).

Now, we will start looking at continuous-time continuous state space processes.

Brownian Motion

Intuition: Brownian Motion

In a gas, atoms bump into each other and change course randomly. Over time, how does a single atom move, on average? If $f(x, t)$ represents the probability density for the position $x$ of an atom at time $t$ moving along a line, Albert Einstein showed¹ that, $$ \frac{\partial f}{\partial t} = \frac{1}{2} \frac{\partial^2 f}{\partial x^2}, $$ which has the solution, $$ f(x, t) = \frac{1}{\sqrt{2 \pi t}} e^{-\frac{x^2}{2t}}. $$ This means that the position of the atom at time $t$ is normally distributed with mean $0$ and variance $t$, i.e., $X_t \sim \mathcal{N}(0, t)$.

These random movements are called Brownian motion (or Wiener process).
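To visualize the solution $f(x, t)$ above, here is a minimal R sketch (the time values are illustrative) plotting the density for a few values of $t$; the density flattens and spreads as $t$ grows:

```r
# f(x, t) is the N(0, t) density, i.e., dnorm(x, 0, sqrt(t))
x <- seq(-4, 4, length.out = 200)
plot(x, dnorm(x, 0, sqrt(0.5)), type = "l", xlab = "x", ylab = "f(x, t)")
lines(x, dnorm(x, 0, sqrt(1)), lty = 2)
lines(x, dnorm(x, 0, sqrt(2)), lty = 3)
legend("topright", legend = c("t = 0.5", "t = 1", "t = 2"), lty = 1:3)
```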

Definition: Brownian Motion

Brownian motion is a continuous-time stochastic process $\{B_t\}_{t \geq 0}$ with the following properties:

  1. $B_0 = 0$.

  2. For $t >0$, $B_t \sim \mathcal{N}(0, t)$ (i.e., normally distributed with mean $0$ and variance $t$).

  3. For $s, t > 0$, $B_{t + s} - B_s \sim \mathcal{N}(0, t)$ (i.e., the increments are stationary).

  4. For $0 \leq q < r \leq s < t$, $B_t - B_s$ is independent of $B_r - B_q$ (i.e., the increments are independent).

  5. The function $t \mapsto B_t$ is continuous with probability $1$ (almost surely).

Intuition: Simulation of Brownian Motion

Given time points $0 \eqqcolon t_0 < t_1 < t_2 < \ldots < t_n$, we write for $i > 0$, $$ B_{t_i} = B_{t_{i - 1}} + (B_{t_i} - B_{t_{i - 1}}) = B_{t_{i - 1}} + Z_i, $$ where $Z_i \sim \mathcal{N}(0, t_i - t_{i - 1})$.

Thus, we get for independent $Z_1, Z_2, \ldots, Z_n$, $$ B_{t_n} = \sum_{i = 1}^{n} Z_i. $$

This gives a simple way to simulate Brownian motion at discrete time points. A good way to simulate the path $t \mapsto B_t$ on $t \in [0, a]$ is to set $t_i = \frac{ia}{n}$, simulate independently, $$ Z_i \sim \mathcal{N}\left(0, \frac{a}{n}\right), $$ and compute, $$ B_{t_i} = \sum_{j = 1}^{i} Z_j. $$

Note

We could also write $Z_i = \sqrt{\frac{a}{n}} Y_i$ where $Y_i \sim \mathcal{N}(0, 1)$ are standard normal random variables.
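A minimal R sketch of this scheme (the values of a and n are illustrative):

```r
# Simulate a Brownian motion path on [0, a] at n equally spaced time points
set.seed(1)
a <- 1
n <- 1000
t <- (1:n) * a / n                         # t_i = i * a / n
Z <- rnorm(n, mean = 0, sd = sqrt(a / n))  # independent N(0, a/n) increments
B <- cumsum(Z)                             # B_{t_i} = Z_1 + ... + Z_i
plot(c(0, t), c(0, B), type = "l", xlab = "t", ylab = "B_t")
```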

Intuition: Fractal Nature of Brownian Motion

What if we have a Brownian motion path simulated as above, and want to plot it at twice the detail (i.e., at $2n$ points instead of $n$)? We have, $$ \begin{align*} B_{t_i + \frac{a}{2n}} - B_{t_i} & = Z_{i0} \sim \mathcal{N}\left(0, \frac{a}{2n}\right) \newline B_{t_{i + 1}} - B_{t_i + \frac{a}{2n}} & = Z_{i1} \sim \mathcal{N}\left(0, \frac{a}{2n}\right). \newline \end{align*} $$ Reformulating using $Z_i = B_{t_{i + 1}} - B_{t_i} = Z_{i0} + Z_{i1}$, we have, $$ Z_{i0} \sim \mathcal{N}\left(0, \frac{a}{2n}\right), \quad Z_i \mid Z_{i0} \sim \mathcal{N}\left(Z_{i0}, \frac{a}{2n}\right). $$ Since we have Normal-Normal conjugacy, $$ Z_{i0} \mid Z_i \sim \mathcal{N}\left(\frac{1}{2} Z_i, \frac{a}{4n}\right). $$ So we get $B_{t_i + \frac{a}{2n}}$ by simulating $Z_{i0}$ given $Z_i$ as above and adding the result to $B_{t_i}$.
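A sketch in R of this refinement step, reusing B and a from the simulation sketch above (the function name refine is my own):

```r
# Double the resolution of a Brownian path on [0, a] by simulating the
# midpoints from Z_i0 | Z_i ~ N(Z_i / 2, a / (4n))
refine <- function(B, a) {
  n <- length(B) - 1                      # B is the path at times 0, a/n, ..., a
  Z <- diff(B)                            # increments Z_i
  Z0 <- rnorm(n, mean = Z / 2, sd = sqrt(a / (4 * n)))
  mid <- B[-(n + 1)] + Z0                 # B at the new midpoints t_i + a/(2n)
  out <- numeric(2 * n + 1)
  out[seq(1, 2 * n + 1, by = 2)] <- B     # keep the old points
  out[seq(2, 2 * n, by = 2)] <- mid       # interleave the midpoints
  out
}
B2 <- refine(c(0, B), a)                  # path at 2n + 1 points on [0, a]
```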

Intuition: Computations with Brownian Motion

To compute probabilities for Brownian motions, we generally use the properties of the Normal distribution and the independent increments.

Example: Probability Computation with Brownian Motion

Show that $B_1 + B_3 + 2 B_7 \sim \mathcal{N}(0, 50)$.

Solution

We can write, $$ \begin{align*} B_1 + B_3 + 2 B_7 & = B_1 + (B_3 - B_1) + B_1 + 2 (B_7 - B_3) + 2 B_3 \newline & = 2 B_1 + (B_3 - B_1) + 2 (B_7 - B_3) + 2(B_3 - B_1) + 2 B_1 \newline & = 4 B_1 + 3 (B_3 - B_1) + 2 (B_7 - B_3). \end{align*} $$ Looking at each term separately, we have, $$ \begin{align*} \mathbb{E}[4 B_1] & = 0 \newline \mathrm{Var}(4 B_1) & = 4^2 \mathrm{Var}(B_1) = 4^2 \end{align*} $$ and, $$ \begin{align*} \mathbb{E}[3 (B_3 - B_1)] & = 0 \newline \mathrm{Var}(3 (B_3 - B_1)) & = 3^2 \mathrm{Var}(B_3 - B_1) = 3^2 \cdot 2 \end{align*} $$ and, $$ \begin{align*} \mathbb{E}[2 (B_7 - B_3)] & = 0 \newline \mathrm{Var}(2 (B_7 - B_3)) & = 2^2 \mathrm{Var}(B_7 - B_3) = 2^2 \cdot 4. \end{align*} $$ Since the three terms are independent (independent increments), we have, $$ \begin{align*} \mathbb{E}[B_1 + B_3 + 2 B_7] & = 0 \newline \mathrm{Var}(B_1 + B_3 + 2 B_7) & = 4^2 + 3^2 \cdot 2 + 2^2 \cdot 4 = 16 + 18 + 16 = 50. \newline \end{align*} $$ Thus, $B_1 + B_3 + 2 B_7 \sim \mathcal{N}(0, 50)$. $_\blacksquare$
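A quick Monte Carlo sanity check of this in R, simulating $(B_1, B_3, B_7)$ via independent increments (the sample size is illustrative):

```r
# Check that B_1 + B_3 + 2 B_7 has mean ~0 and variance ~50
set.seed(1)
m <- 1e5
B1 <- rnorm(m, 0, 1)                # B_1 ~ N(0, 1)
B3 <- B1 + rnorm(m, 0, sqrt(2))     # add an independent N(0, 2) increment
B7 <- B3 + rnorm(m, 0, 2)           # add an independent N(0, 4) increment
X <- B1 + B3 + 2 * B7
c(mean(X), var(X))                  # approximately 0 and 50
```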

Example: Conditional Distribution with Brownian Motion

Show that $P(B_2 > 0 \mid B_1 = 1) = 0.8413$.

Solution

We can rewrite the conditional probability as, $$ P(B_2 - B_1 > 0 - 1 \mid B_1 = 1). $$ Since we have independent increments, $B_2 - B_1$ is independent of $B_1$, so this equals $$ P(B_2 - B_1 > -1). $$ Further, $B_2 - B_1 \sim \mathcal{N}(0, 1)$. Thus, we have, $$ P(B_2 - B_1 > -1) = P\left(Z > -1\right) = 0.8413, $$ where $Z \sim \mathcal{N}(0, 1)$. $_\blacksquare$ (we can compute this with 1 - pnorm(-1, 0, 1) in R).

Example: Covariance Computation with Brownian Motion

Show that $\mathrm{Cov}(B_s, B_t) = \min(s, t)$.

Solution

Without loss of generality, assume that $s \leq t$. We can write, $$ \begin{align*} \mathrm{Cov}(B_s, B_t) & = \mathrm{Cov}(B_s, B_t - B_s + B_s) \newline & = \mathrm{Cov}(B_s, B_t - B_s) + \mathrm{Cov}(B_s, B_s) \newline & = 0 + \mathrm{Var}(B_s) \newline & = s. \newline \end{align*} $$ Similarly, if $t \leq s$, we would get $\mathrm{Cov}(B_s, B_t) = t$. Thus, we have, $$ \mathrm{Cov}(B_s, B_t) = \min(s, t) \ _\blacksquare . $$
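The same identity can be checked by simulation; a sketch in R with $s = 2$ and $t = 5$ (illustrative values):

```r
# Monte Carlo check that Cov(B_s, B_t) = min(s, t)
set.seed(1)
m <- 1e5
s <- 2; t <- 5
Bs <- rnorm(m, 0, sqrt(s))              # B_s ~ N(0, s)
Bt <- Bs + rnorm(m, 0, sqrt(t - s))     # B_t = B_s + independent N(0, t - s)
cov(Bs, Bt)                             # approximately min(s, t) = 2
```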

Intuition: Brownian Motion as Limit of Random Walks

A random walk is a discrete-time Markov chain $S_0, S_1, S_2, \ldots$ where $S_0 = 0$ and, $$ S_n = Y_1 + Y_2 + \ldots + Y_n, $$ and $Y_1, Y_2, \ldots$ are i.i.d. random variables. Assume $\mathbb{E}[Y_i] = 0$.

Further, if we assume $\mathrm{Var}(Y_i) = 1$, we get $\mathrm{Var}(S_n) = n$.

Interpolating between the values $S_n$ we can make this into a continuous-time process $S_t$ where $\mathrm{Var}(S_t) \approx t$.

We may scale with an $s > 0$ to get processes $S_t^{(s)} = \frac{S_{st}}{\sqrt{s}}$ where we get $\lim_{s \to \infty} \mathrm{Var}(S_t^{(s)}) = t$.

It turns out that the processes $S_t^{(s)}$ converge to Brownian motion as $s \to \infty$, no matter what distribution of $Y_i$ we start with.

This is Donsker’s invariance principle, which is a generalization of the central limit theorem to stochastic processes².
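A small R illustration of this with a coin-flip walk (the choice of $\pm 1$ steps and of $s$ is arbitrary):

```r
# A scaled random walk with +/-1 steps (E[Y] = 0, Var(Y) = 1) on [0, 1];
# for large s its path looks like Brownian motion
set.seed(1)
s <- 1e4
Y <- sample(c(-1, 1), s, replace = TRUE)
S <- cumsum(Y)
plot((1:s) / s, S / sqrt(s), type = "l", xlab = "t", ylab = "scaled walk")
```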

Gaussian Processes

Recall: Multivariate Normal Distribution

A set of random variables $X_1, X_2, \ldots, X_k$ has a multivariate normal distribution if, for all real $a_1, a_2, \ldots, a_k$, $a_1 X_1 + a_2 X_2 + \ldots + a_k X_k$ is normally distributed.

It is completely determined by the expectation vector $\mu = (\mathbb{E}[X_1], \mathbb{E}[X_2], \ldots, \mathbb{E}[X_k])$ and the $(k \times k)$ covariance matrix $\Sigma$ where $\Sigma_{ij} = \mathrm{Cov}(X_i, X_j)$.

The joint density function of the vector $x = (x_1, x_2, \ldots, x_k)$ is given by, $$ \pi(x) = \frac{1}{|2 \pi \Sigma|^{1/2}} \exp\left(-\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu)\right), $$ where $|2 \pi \Sigma|$ is the determinant of the matrix $2 \pi \Sigma$.
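As a sketch, this density can be evaluated in R and compared against the dmvnorm function from the mvtnorm package (assuming that package is available; the numbers are illustrative):

```r
library(mvtnorm)
mu <- c(0, 0)
Sigma <- matrix(c(1, 0.5, 0.5, 2), 2, 2)   # a valid covariance matrix
x <- c(0.3, -0.1)
dmvnorm(x, mean = mu, sigma = Sigma)
# the same value computed directly from the formula above:
as.numeric(exp(-0.5 * t(x - mu) %*% solve(Sigma) %*% (x - mu)) /
             sqrt(det(2 * pi * Sigma)))
```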

Note

All marginal distributions and all conditional distributions are also multivariate normal.

Definition: Gaussian Process

A Gaussian process is a continuous-time stochastic process $\{X_t\}_{t \geq 0}$ with the property that for all $n \geq 1$ and $0 \leq t_1 < t_2 < \ldots < t_n$, $X_{t_1}, X_{t_2}, \ldots, X_{t_n}$ have a multivariate normal distribution.

Thus, a Gaussian process is completely determined by its mean function $\mathbb{E}[X_t]$ and its covariance function $\mathrm{Cov}(X_s, X_t)$.

Intuition: Brownian Motion as a Gaussian Process

Brownian motion is a Gaussian process: any $a_1 B_{t_1} + a_2 B_{t_2} + \ldots + a_n B_{t_n}$ can be rewritten as a linear combination of the independent normal increments $B_{t_1}, B_{t_2} - B_{t_1}, \ldots, B_{t_n} - B_{t_{n - 1}}$, and is therefore normally distributed.

A Gaussian process $\{X_t\}_{t \geq 0}$ is a Brownian motion if and only if,

  1. $X_0 = 0$.

  2. $\mathbb{E}[X_t] = 0$ for all $t$.

  3. $\mathrm{Cov}(X_s, X_t) = \min(s, t)$ for all $s, t$.

  4. The function $t \mapsto X_t$ is continuous with probability $1$ (almost surely).
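This characterization also gives a direct way to simulate Brownian motion at arbitrary time points: build the covariance matrix $\Sigma_{ij} = \min(t_i, t_j)$ and sample from the multivariate normal. A sketch using the mvtnorm package (assumed available):

```r
library(mvtnorm)
tp <- c(0.5, 1, 2, 3.5)                  # arbitrary time points
Sigma <- outer(tp, tp, pmin)             # Sigma[i, j] = min(t_i, t_j)
B <- rmvnorm(5, mean = rep(0, length(tp)), sigma = Sigma)
B                                        # each row: one sample of (B_0.5, B_1, B_2, B_3.5)
```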

Intuition: Transformations of Brownian Motion

The following transformations of Brownian motion also yield Brownian motion:

  • $\{-B_t\}_{t \geq 0}$, negating the process yields another (reflected) Brownian motion.
  • $\{B_{t + s} - B_s\}_{t \geq 0}$ for any fixed $s \geq 0$, shifting the time origin yields another Brownian motion.
  • $\{\frac{1}{\sqrt{a}} B_{a t}\}_{t \geq 0}$ for any fixed $a > 0$, scaling time and space yields another Brownian motion.
  • The process $\{X_t\}_{t \geq 0}$ where $X_0 = 0$ and $X_t = t B_{\frac{1}{t}}$ for $t > 0$, time inversion yields another Brownian motion.
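As a quick check of, for instance, the scaling property: the scaled process starts at $0$, has mean $0$, is continuous, is Gaussian, and $$ \mathrm{Cov}\left(\frac{1}{\sqrt{a}} B_{a s}, \frac{1}{\sqrt{a}} B_{a t}\right) = \frac{1}{a} \mathrm{Cov}(B_{a s}, B_{a t}) = \frac{1}{a} \min(a s, a t) = \min(s, t), $$ so it satisfies the Gaussian-process characterization of Brownian motion above.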
Intuition: Stopping Times

We saw above that, for any fixed $s$, $(B_{t + s} - B_s)_{t \geq 0}$ is a Brownian motion. Does this phenomenon also hold if we start the process anew from $T$ when $T$ is random? It depends.

If $T$ is the largest value less than $1$ where $B_T = 0$, then $(B_{T + s} - B_T)_{s \geq 0}$ is not a Brownian motion; determining $T$ requires knowledge of the path after time $T$ (namely, that it does not return to $0$ before time $1$).

If $T$ is the smallest value where $B_T = a$ for some constant $a$, then $(B_{T + s} - B_T)_{s \geq 0}$ is a Brownian motion. The reason is that the event $\{T \leq t\}$ can be determined based on $B_r$ for $0 \leq r \leq t$.

Random $T$’s that have this property are called stopping times. For these, $(B_{T + s} - B_T)_{s \geq 0}$ is a Brownian motion (this is the strong Markov property).

Intuition: The Distribution of the First Hitting Time

Given that $a \neq 0$, what is the distribution of the first hitting time $T_a = \min\{t : B_t = a\}$?

We will prove that, $$ \frac{1}{T_a} \sim \mathrm{Gamma}\left(\frac{1}{2}, \frac{a^2}{2}\right). $$

Assuming that $a > 0$ and using that $T_a$ is a stopping time (so that the process restarted at $T_a$ is again a Brownian motion), we get for any $t > 0$ that $P(B_{\frac{1}{t}} > a \mid T_a < \frac{1}{t}) = P(B_{\frac{1}{t} - T_a} > 0) = \frac{1}{2}$.

We also have, $$ \begin{align*} P\left(B_{\frac{1}{t}} > a \mid T_a < \frac{1}{t}\right) & = \frac{P\left(B_{\frac{1}{t}} > a, T_a < \frac{1}{t}\right)}{P\left(T_a < \frac{1}{t}\right)} \newline & = \frac{P\left(B_{\frac{1}{t}} > a\right)}{P\left(T_a < \frac{1}{t}\right)}, \end{align*} $$ where the last equality holds because, by continuity, $B_{\frac{1}{t}} > a$ implies $T_a < \frac{1}{t}$. It follows that $P(T_a < \frac{1}{t}) = 2 P(B_{\frac{1}{t}} > a)$ and thus, $$ \begin{align*} P\left(\frac{1}{T_a} \leq t\right) & = P\left(T_a \geq \frac{1}{t}\right) = 1 - 2 P\left(B_{\frac{1}{t}} > a\right) \newline & = 2 P\left(B_1 \leq a \sqrt{t}\right) - 1, \newline \end{align*} $$ using that $B_{\frac{1}{t}}$ has the same distribution as $\frac{1}{\sqrt{t}} B_1$. Taking the derivative with respect to $t$ we get the Gamma density, $$ \pi_{1/T_a}(t) = 2 \frac{1}{\sqrt{2 \pi}} \exp\left(-\frac{1}{2} \left(a \sqrt{t}\right)^2\right) \frac{a}{2} t^{-1/2} = \frac{a}{\sqrt{2 \pi}} t^{-1/2} e^{-\frac{a^2 t}{2}}, $$ which is exactly the $\mathrm{Gamma}\left(\frac{1}{2}, \frac{a^2}{2}\right)$ density. For $a < 0$, the same result follows by symmetry (replace $a$ with $|a|$). $_\blacksquare$
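A Monte Carlo sketch in R of this claim with $a = 1$, comparing the empirical $P(1/T_a \leq 2) = P(T_a \geq \frac{1}{2})$ with the Gamma CDF (the grid size and level are illustrative; the discrete grid makes the simulated hitting times slightly too large):

```r
set.seed(1)
a <- 1; dt <- 1e-3; n <- 1e4             # each simulated path lives on [0, 10]
Ta <- replicate(1000, {
  B <- cumsum(rnorm(n, 0, sqrt(dt)))     # Brownian path on a fine grid
  i <- which(B >= a)[1]                  # first grid index at or above level a
  if (is.na(i)) Inf else i * dt          # Inf if a is not reached by time 10
})
mean(Ta >= 1/2)                          # empirical P(T_a >= 1/2) = P(1/T_a <= 2)
pgamma(2, shape = 1/2, rate = a^2 / 2)   # theoretical value, about 0.843
```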

Intuition: Maximum of Brownian Motion

We can define $M_t \coloneqq \max_{0 \leq s \leq t} B_s$.

We may compute for $a > 0$ (using the results above), $$ \begin{align*} P(M_t > a) & = P(T_a < t) \newline & = 2 P(B_t > a) \newline & = P(|B_t| > a). \end{align*} $$ Thus, $M_t$ has the same distribution as $|B_t|$, i.e., the absolute value of $B_t$.

Example

What is the probability that $M_3 > 5$?

Solution

We have, $$ \begin{align*} P(M_3 > 5) & = P(|B_3| > 5) \newline & = 2 P(B_3 > 5) \newline & = 2 P\left(Z > \frac{5}{\sqrt{3}}\right) \newline & \approx 0.0039. \end{align*} $$ where $Z \sim \mathcal{N}(0, 1)$. $_\blacksquare$ (we can compute this with 2 * (1 - pnorm(5 / sqrt(3), 0, 1)) in R).

Example

Find $t$ such that $P(M_t \leq 4) = 0.9$.

Solution

We have, $$ \begin{align*} P(M_t \leq 4) & = P(|B_t| \leq 4) \newline & = 1 - 2 P(B_t > 4) \newline & = 1 - 2 P\left(Z > \frac{4}{\sqrt{t}}\right). \newline \end{align*} $$ Setting this equal to $0.9$ we get, $$ P\left(Z > \frac{4}{\sqrt{t}}\right) = 0.05. $$ Looking up in the standard normal table (or using qnorm(0.95, 0, 1) in R) we get $\frac{4}{\sqrt{t}} = 1.645$ and thus $t \approx 5.91$. $_\blacksquare$
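In R, the last step reads (a minimal sketch):

```r
z <- qnorm(0.95)   # the point with P(Z > z) = 0.05
(4 / z)^2          # t, approximately 5.91
```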

Footnotes

  1. Wikipedia: Brownian motion

  2. Wikipedia: Donsker’s invariance principle