Probability For Quant 01

Probability Distribution
Author

Francois de Ryckel

Published

September 30, 2022

Modified

February 20, 2024

I am storing here a few nuggets of probability that I have encountered on my quantitative finance and machine learning journey. I have a similar page on Algebra. This first series is about discrete probability problems.

Expectation with the Poisson distribution

Example 1 - Find \(\mathbb E[X]\)

Let \(X\) be a Discrete Random Variable (DRV) such that \(X \sim Po(\lambda)\). Then \(P(X = x) = \frac{e^{-\lambda} \lambda^x}{x!}\). Let's calculate the expectation of \(X\). Note that the \(x = 0\) term contributes nothing, so the sum can start at \(x = 1\).

\[E[X]=\sum_{x=1}^{\infty} x P(X=x)=\sum_{x=1}^{\infty} x \frac{e^{-\lambda} \lambda^x}{x!} = e^{-\lambda} \sum_{x=1}^{\infty} \frac{\lambda^x}{(x-1)!}\]

\[E[X]= e^{-\lambda} \lambda \sum_{x=1}^{\infty} \frac{\lambda^{x-1}}{(x-1)!} = e^{-\lambda} \lambda \sum_{x=0}^{\infty} \frac{\lambda^{x}}{x!}= e^{-\lambda} \lambda e^{\lambda} = \lambda\]

The second-to-last step is just the Maclaurin expansion of \(e^x\), evaluated at \(x = \lambda\).
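To convince ourselves numerically, here is a minimal Python sketch (assuming numpy is available; \(\lambda = 2.5\) and the sample size are arbitrary illustration values):

```python
import numpy as np

# Monte Carlo sanity check of E[X] = lambda for a Poisson DRV.
# lam = 2.5 and the sample size are arbitrary illustration choices.
rng = np.random.default_rng(42)
lam = 2.5
samples = rng.poisson(lam, size=1_000_000)

print(f"theoretical E[X] = {lam}")
print(f"empirical mean   = {samples.mean():.4f}")  # should be close to 2.5
```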

Example 2 - Find \(\mathbb E[X!]\)

Let \(X \sim Po(\lambda)\). What is \(\mathbb E[X!]\)?

\[E[X!]=\sum_{x=0}^{\infty} x! \cdot P(X=x) = \sum_{x=0}^{\infty} x! \frac{e^{-\lambda} \lambda^x}{x!} = e^{-\lambda} \sum_{x=0}^{\infty} \lambda^x = e^{-\lambda} \frac{1}{1-\lambda}\] And this expectation only exists if \(0 < \lambda < 1\).

In the last step, we just use the infinite geometric series formula \(S_{\infty} = \frac{u_1}{1-r}\), with first term \(u_1 = \lambda^0 = 1\) and common ratio \(r = \lambda\).
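Again, a quick numerical check in Python (\(\lambda = 0.4\) and the 50-term truncation are arbitrary choices; the series converges quickly at this \(\lambda\)):

```python
import math

# Numerical check of E[X!] = e^{-lambda} / (1 - lambda) for 0 < lambda < 1.
# lam = 0.4 and the 50-term truncation are arbitrary illustration choices.
lam = 0.4
partial_sum = sum(
    math.factorial(x) * math.exp(-lam) * lam**x / math.factorial(x)  # x! * P(X = x)
    for x in range(50)
)
closed_form = math.exp(-lam) / (1 - lam)

print(f"truncated series: {partial_sum:.6f}")
print(f"closed form     : {closed_form:.6f}")  # both around 1.1172
```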

Covariance & correlation

Assuming \(X\) and \(Y\) are random variables with means \(\mu_x\) and \(\mu_y\), by definition we have \[Cov(X,Y) = \mathbb{E}[(X-\mu_x)(Y-\mu_y)] \tag{1}\]

Expanding the product and using the linearity of expectation: \[Cov(X,Y) = \mathbb{E}[XY - X \mu_y - Y \mu_x + \mu_x \mu_y] = \mathbb{E}[XY] - \mathbb{E}[X \mu_y] - \mathbb{E}[Y \mu_x] + \mathbb{E}[\mu_x \mu_y]\]

\(\mu_x\) and \(\mu_y\) are constants, so we can pull them out of the expectations and rewrite the above equation as

\[Cov(X,Y) = \mathbb{E}[XY] - \mu_y \mathbb{E}[X] - \mu_x\mathbb{E}[Y] + \mathbb{E}[\mu_x \mu_y]\]

Also, \(\mathbb{E}[\mu_x \mu_y] = \mu_x \mu_y\) because the expectation of a constant is the constant itself. And \(\mathbb{E}[X] = \mu_x\) and \(\mathbb{E}[Y] = \mu_y\). Substituting these in, we get

\[Cov(X,Y) = \mathbb{E}[XY] - \mu_x \mu_y - \mu_x \mu_y + \mu_x \mu_y\] \[Cov(X,Y) = \mathbb{E}[XY] - \mu_x \mu_y \tag{2}\]
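This is the familiar identity \(Cov(X,Y) = \mathbb{E}[XY] - \mathbb{E}[X]\mathbb{E}[Y]\). A minimal Python sketch to check both forms on simulated data (assuming numpy; the bivariate normal parameters are arbitrary illustration values):

```python
import numpy as np

# Checking definition (1) against identity (2) on simulated data.
# The bivariate normal parameters below are arbitrary illustration values.
rng = np.random.default_rng(0)
mean = [1.0, -2.0]
cov = [[2.0, 0.8],
       [0.8, 1.5]]
x, y = rng.multivariate_normal(mean, cov, size=1_000_000).T

lhs = np.mean((x - x.mean()) * (y - y.mean()))  # definition (1)
rhs = np.mean(x * y) - x.mean() * y.mean()      # identity (2)

print(f"definition (1): {lhs:.4f}")
print(f"identity (2)  : {rhs:.4f}")  # both should be close to 0.8
```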