This post is a collection of notes about random walks and Brownian motion, as well as their properties.
This post on the random behavior of financial assets might be worth reading before going through this one.
The probability of Random Walks
In the context of quantitative finance, the easiest way to model a random walk is to consider a heads/tails coin-tossing example, which constitutes a symmetric random walk.
Expected Returns
So let’s define a Discrete Random Variable \(R_i\) as the outcome of the i-th toss of a coin: \(R_i\) takes the value \(1\) if it’s a tail and \(-1\) if it’s a head.
The expectation of \(R_i\) is then: \(\mathbb{E}[R_i] = \frac{1}{2} \cdot 1 + \frac{1}{2} \cdot (-1) = 0\)
The variance of \(R_i\) is then: \(\mathbb{V}ar[R_i] = \mathbb{E}[R_i^2] - \mathbb{E}[R_i]^2 = \left( \frac{1}{2} \cdot (1)^2 + \frac{1}{2} \cdot (-1)^2 \right) - 0^2 = 1\)
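To make this concrete, here is a minimal simulation sketch (assuming NumPy is available; the seed is arbitrary) that tosses a fair coin many times and checks that the sample mean and variance land near the theoretical values:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Simulate 1,000,000 tosses: +1 for tails, -1 for heads
R = rng.choice([1, -1], size=1_000_000)

print(R.mean())  # close to E[R_i] = 0
print(R.var())   # close to Var[R_i] = 1
```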
Expected Sum of Returns
Let’s now define another Discrete Random Variable \(S_n\), the sum of the returns after \(n\) tosses. Hence, \(S_n = \sum_{i=1}^{n} R_i\) with the initial condition \(S_0 = 0\). This initial condition not only makes sense (we have not won any money before starting the game) but it will also constitute an important initial condition in the definition of Brownian Motion (or Wiener process).
The expectation of \(S_n\) is then: \[\mathbb{E}[S_n] = \mathbb{E} \left[ \sum_{i=1}^{n} R_i \right] = \mathbb{E} \left[ R_1 + R_2 + \dots + R_n \right] = \mathbb{E}[R_1] + \mathbb{E}[R_2] + \dots + \mathbb{E}[R_n] = 0 + 0 + \dots + 0 = 0 \]
And the variance of \(S_n\) is then: \[\mathbb{V}ar[S_n] = \mathbb{E}[S_n^2] - \mathbb{E}[S_n]^2\]
Note how the second term \(\mathbb{E}[S_n]^2 = 0\)
\[\mathbb{V}ar[S_n] = \mathbb{E} \left[ \left( \sum_{i=1}^{n} R_i \right)^2 \right] = \mathbb{E} \left[ R_1^2 + R_2^2 + \dots + R_n^2 + 2R_1R_2 + 2R_1R_3 + \cdots \right] \tag{1}\]
Because the tosses are independent, \(\mathbb{E}[R_i R_j] = \mathbb{E}[R_i] \mathbb{E}[R_j] = 0\) for \(i \neq j\). Hence all the cross terms in Equation 1 above are equal to 0 and \[\mathbb{V}ar[S_n] = \mathbb{E}[R_1^2] + \mathbb{E}[R_2^2] + \dots + \mathbb{E}[R_n^2] = 1 + 1 + \dots + 1 = n \cdot 1 = n \tag{2}\]
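As a sanity check, here is a short sketch (NumPy assumed) that simulates many independent games of \(n\) tosses and verifies that the variance of \(S_n\) grows like \(n\):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

n, n_paths = 100, 50_000

# Each row is one game of n tosses; S_n is the sum of each row
R = rng.choice([1, -1], size=(n_paths, n))
S_n = R.sum(axis=1)

print(S_n.mean())  # close to E[S_n] = 0
print(S_n.var())   # close to Var[S_n] = n = 100
```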
The Markov Property
Imagine we have already tossed the coin a few times, let’s say 10 times. Now we have some additional information: we know \(S_{10}\). We can ask ourselves what the expectation of the walk is after the next toss, given that additional information. It makes sense to say that we will get no more and no less than what we already have, since \(\mathbb{E}[R_i] = 0\). Mathematically, we can write this: \[\mathbb{E}\left[ S_{i+1} \mid R_1, R_2, \dots, R_i \right] = S_i\]
The expected value of \(S_i\) depends only on the previous value \(S_{i-1}\) and not on any earlier values (no memory beyond the one prior value).
The idea of the Markov Property is to say that the Random Walk Stochastic Process has no memory beyond the point where it (the walker) is now.
The Martingale Property
The conditional expectation of the process at any point in the future is the value you already have.
And actually, we can generalize this further and state that: \[\mathbb{E}\left[ S_i \mid S_j \right] = S_j \quad \forall \, i > j\]
In a ‘fair game’, knowledge of the past will be of no value in predicting future events.
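One way to get a feel for the martingale property is to fix a history of \(j\) tosses and simulate many independent continuations up to a later step \(i\): the average of \(S_i\) over those continuations should land close to \(S_j\). A minimal sketch (the choices \(j = 10\) and \(i = 50\) are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

j, i, n_paths = 10, 50, 100_000

# One fixed history of j tosses, so S_j is known
prefix = rng.choice([1, -1], size=j)
S_j = prefix.sum()

# Many independent continuations from step j to step i
continuations = rng.choice([1, -1], size=(n_paths, i - j))
S_i = S_j + continuations.sum(axis=1)

print(S_j)         # the known value at step j
print(S_i.mean())  # close to S_j: E[S_i | S_j] = S_j
```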
The Quadratic Variation
The quadratic variation of a random walk is defined as \[\sum_{i=1}^n (S_i - S_{i-1})^2\]
The quadratic variation will be an extremely important concept as we extend the random-walk from a discrete time model to a continuous time model (aka to a Brownian Motion or Wiener process).
Intuitively, since each step of the walk is \[(S_i - S_{i-1}) = \pm 1\] we have \[(S_i - S_{i-1})^2 = 1\] and hence \[\sum_{i=1}^n (S_i - S_{i-1})^2 = n\]
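Because every squared increment is exactly \(1\), this identity holds for every single realization of the walk, not just on average. A quick check (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

n = 1_000
R = rng.choice([1, -1], size=n)
S = np.concatenate(([0], R.cumsum()))  # S_0 = 0, then S_1 ... S_n

# Each squared increment is exactly 1, so the sum is exactly n
quad_var = np.sum(np.diff(S) ** 2)
print(quad_var)  # n = 1000, for every realization
```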
Quadratic variation and transition to continuous time
In the previous setting, each time step was 1 unit and each bet size was \(\pm 1\). We now make two changes to this setup.
- Now, we think of the total time as a fixed, continuous amount that we call \(t\), which we chunk down into \(n\) pieces. In this context, each piece of time is \(\frac{t}{n}\) long. The goal will be to increase \(n\) until \(n \to \infty\)
- the bet sizes are now \(\pm \sqrt{\frac{t}{n}}\)
To see why we are making these changes, let’s revisit the quadratic variation with these new units of time and bet sizes. \[S_{i+1} - S_i = \pm \sqrt{\frac{t}{n}}\] The quadratic variation is now \[\sum_{i=1}^n \left( \pm \sqrt{\frac{t}{n}} \right)^2 = n \cdot \frac{t}{n} = t \tag{3}\]
In this way, the quadratic variation equals the total time of the coin tossing. This time is still finite.
In this experiment, both the Markov and martingale properties are still valid.
Note that:
- as \(n\) increases, the time step decreases by a factor of \(\frac{1}{n}\)
- as \(n\) increases, the bet size decreases by a factor of \(\sqrt{\frac{1}{n}}\)
- as \(n\) increases to \(\infty\), both the time step and the bet size decrease to \(0\), but not in the same way (not by the same order)
- as \(n\) increases to \(\infty\), the random walk stays finite! The limiting path is continuous everywhere but differentiable nowhere
- the limiting process as \(n\) tends to \(\infty\) and the time step tends to \(0\) (think \(dt\)) is a Brownian Motion (the continuous random variable is called \(B_t\)) or Wiener process (variable called \(W_t\))
Why does it matter that the time step is \(\frac{t}{n}\) and the bet size scales as \(\sqrt{\frac{1}{n}}\)? Any other choice would not have kept the random walk finite and non-trivial (it could have gone to infinity in a finite time, or not moved at all).
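To see this scaling in action, here is a small sketch (NumPy assumed) that, for increasing \(n\) with total time \(t\) fixed, builds the rescaled walk and confirms that the quadratic variation is exactly \(t\) while the terminal value stays finite:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

t = 1.0
for n in (10, 100, 10_000, 1_000_000):
    # n steps, each a bet of size +/- sqrt(t/n)
    steps = rng.choice([1.0, -1.0], size=n) * np.sqrt(t / n)
    S = steps.cumsum()

    quad_var = np.sum(steps ** 2)  # exactly t by construction
    print(n, quad_var, S[-1])      # terminal value stays finite
```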
The probability of Wiener Processes or Brownian Motions
Although “Brownian motion” and “Wiener process” are often used interchangeably, there is a semantic distinction one could make. Brownian motion is the physical phenomenon of random movement of microscopic particles suspended in a fluid, such as dust in air or pollen in water. The Wiener process is a mathematical model of Brownian motion: a continuous-time stochastic process satisfying the properties below. The Wiener process models standard Brownian motion. There are other types of Brownian motion that are not modeled by the Wiener process, such as geometric Brownian motion, fractional Brownian motion and the Ornstein-Uhlenbeck process.
We denote a Wiener Process as \(W_t\) or \(W(t)\), with the initial condition \(W_0 = 0\) (or \(W(0) = 0\)). Its defining properties are:
- \(W_0 = 0\)
- \(W_t\) is a continuous path (continuous everywhere, differentiable nowhere)
- independent increments. For \(t_1 < t_2 < t_3 < t_4 < \cdots\), \(W_{t_4} - W_{t_3}\) and \(W_{t_2} - W_{t_1}\) are independent. In other words, the change in the process over any time interval is independent of its past and future values.
- normally distributed increments: \(W_t - W_s \sim N(0, t-s)\) for \(s < t\). The variance equals the length of the interval.
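With these properties in hand, we can sample an approximate Wiener path directly, without going through the coin-tossing limit: draw \(n\) independent \(N(0, \frac{t}{n})\) increments and take their cumulative sum. A minimal sketch (NumPy assumed, parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

t, n = 1.0, 1_000
dt = t / n

# Independent, normally distributed increments: W_{t_i} - W_{t_{i-1}} ~ N(0, dt)
dW = rng.normal(loc=0.0, scale=np.sqrt(dt), size=n)

# W_0 = 0, then the cumulative sum of the increments
W = np.concatenate(([0.0], dW.cumsum()))

print(W[0])      # 0, the initial condition
print(dW.var())  # close to dt, the variance of each increment
```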
The Wiener process is also a martingale: let’s show that the expectation of the stochastic process \(W_t\), conditional on knowing everything up to time \(s\) (the filtration \(\mathcal{F}_s\), with respect to which \(W_s\) is measurable), is \(W_s\)
\[\mathbb{E} \left[ W_t \mid \mathcal{F_s} \right] = \mathbb{E} \left[ W_t - W_s + W_s \mid \mathcal{F_s} \right] = \mathbb{E} \left[ W_t - W_s \mid \mathcal{F_s} \right] + \mathbb{E} \left[ W_s \mid \mathcal{F_s} \right]\] Because (bullet point #4) increments of a Brownian motion are normally distributed with mean 0, \(W_t - W_s \sim N(0, t-s)\), we can establish that \(\mathbb{E} \left[ W_t - W_s \mid \mathcal{F_s} \right] = 0\)
Hence: \[\mathbb{E} \left[ W_t \mid \mathcal{F_s} \right] = \mathbb{E} \left[ W_s \mid \mathcal{F_s} \right] = W_s\]
Properties of Wiener processes
- they are finite - thanks to the increments (y-axis) scaling with the square root of the time step
- they are continuous - in the limit of infinitely small time steps, the random walk becomes continuous
- they follow the Markov property
- they follow the martingale property
- the quadratic variation from 0 to t is \(t\) itself - see Equation 3
- normality. \(W(t_i) - W(t_{i-1})\) is normally distributed with mean 0 and variance \(t_i - t_{i-1}\)
Expectation of \(W_t\)
Expectation is the same as in the random walk, except this time we take the limit as \(n \to \infty\) (where the \(R_i\) are now the rescaled bets of size \(\pm \sqrt{\frac{t}{n}}\)). \[\mathbb{E}[W_t] = \mathbb{E} \left[\lim_{n \to \infty} \sum_{i=1}^n R_i\right] = \lim_{n \to \infty} \sum_{i=1}^n \mathbb{E}[R_i] = \lim_{n \to \infty} n \cdot 0 = 0 \tag{4}\]
In this sense, the mean of \(W_t\) is \(0\).
Variance of \(W_t\)
\[\mathbb{V}ar[W_t] = \mathbb{E}[W_t^2] - \mathbb{E}[W_t]^2\]
From Equation 4, we already know that \(\mathbb{E}[W_t] = 0\); hence \(\left( \mathbb{E}[W_t] \right)^2\) is also \(0\)
\[\mathbb{E}[W_t^2] = \mathbb{E}\left[\lim_{n \to \infty} \sum_{i=1}^n R_i^2 \right] = \lim_{n \to \infty} \sum_{i=1}^n \mathbb{E}[R_i^2] = \lim_{n \to \infty} n \cdot \left( \sqrt{\frac{t}{n}} \right)^2 = t\]
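A quick Monte Carlo check of both moments. Since \(W_t \sim N(0, t)\), a single normal draw per path is enough (the value of \(t\) and the number of paths below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

t, n_paths = 2.5, 200_000

# Sample the terminal value W_t of many independent Wiener paths
W_t = rng.normal(loc=0.0, scale=np.sqrt(t), size=n_paths)

print(W_t.mean())  # close to E[W_t] = 0
print(W_t.var())   # close to Var[W_t] = t = 2.5
```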