The Lognormal at high variance: a probability paradox

You have zero probability of making money. But it is a great trade.

One-tailed distributions entangle scale and skewness. When you increase the scale, their asymmetry pushes the mass to the right rather than bulging it in the middle. They also illustrate the difference between probability and expectation, and between various modes of convergence.

Consider a lognormal \mathcal{LN} with the parametrization \mathcal{LN}\left[\mu t-\frac{\sigma ^2 t}{2},\ \sigma \sqrt{t}\right], corresponding to the CDF F(K)=\frac{1}{2} \text{erfc}\left(\frac{-\log (K)+\mu t-\frac{\sigma ^2 t}{2}}{\sqrt{2} \sigma \sqrt{t}}\right).

The mean m= e^{\mu t} does not depend on the parameter \sigma, thanks to the -\frac{\sigma ^2 t}{2} adjustment in the first parameter. But the standard deviation does: STD=e^{\mu t} \sqrt{e^{\sigma ^2 t}-1}.
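A quick Monte Carlo sketch (stdlib only; the function name and sample size are my own choices) of why the mean stays at e^{\mu t} regardless of \sigma under this parametrization:

```python
import math
import random

def lognormal_mean_mc(mu, sigma, t, n=200_000, seed=42):
    """Monte Carlo estimate of the mean of LN[mu*t - sigma^2*t/2, sigma*sqrt(t)]."""
    rng = random.Random(seed)
    loc = mu * t - 0.5 * sigma**2 * t   # the -sigma^2 t / 2 adjustment
    scale = sigma * math.sqrt(t)
    return sum(math.exp(rng.gauss(loc, scale)) for _ in range(n)) / n

mu, t = 0.05, 1.0
target = math.exp(mu * t)  # e^{mu t}, invariant in sigma
for sigma in (0.2, 0.5, 1.0):
    est = lognormal_mean_mc(mu, sigma, t)
    # estimates hover around target; sampling noise widens as sigma grows,
    # since STD = e^{mu t} sqrt(e^{sigma^2 t} - 1) increases with sigma
```

For large \sigma the estimator itself becomes unreliable, a small-scale preview of the paradox: the mean is carried by ever rarer, ever larger draws.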

When \sigma goes to \infty, the probability of exceeding any positive K goes to 0 while the expectation remains invariant. This is because the distribution concentrates like a Dirac mass at 0, with an infinitesimal probability of astronomically large values that keeps the expectation constant. (The lognormal belongs to the log-location-scale family.)

\underset{\sigma \to \infty }{\text{lim}} F(K)= 1 \quad \text{for all } K>0
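The limit can be checked numerically from the CDF above using only the standard library (the function name is mine; erfc comes from Python's math module):

```python
import math

def ln_cdf(K, mu, sigma, t):
    """CDF of LN[mu*t - sigma^2*t/2, sigma*sqrt(t)] at K, via erfc."""
    return 0.5 * math.erfc((-math.log(K) + mu * t - 0.5 * sigma**2 * t)
                           / (math.sqrt(2) * sigma * math.sqrt(t)))

mu, t, K = 0.05, 1.0, 1.0
for sigma in (0.5, 1.0, 5.0, 10.0):
    p_exceed = 1.0 - ln_cdf(K, mu, sigma, t)  # shrinks toward 0 as sigma grows
    mean = math.exp(mu * t)                   # stays at e^{mu t} throughout
```

Even for K near the mean, the exceedance probability collapses as \sigma grows, while the expectation never moves.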

Option traders experience an even worse paradox (see my Dynamic Hedging): as the volatility increases, the delta of the call goes to 1 while the probability of exceeding the strike, any strike, goes to 0.
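A sketch of the traders' paradox using the standard Black-Scholes expressions (with r = 0 for simplicity; the function names are mine): the call delta is \Phi(d_1) and the risk-neutral exceedance probability is \Phi(d_2) = \Phi(d_1 - \sigma\sqrt{t}).

```python
import math

def norm_cdf(x):
    """Standard normal CDF via erfc."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

def call_delta_and_prob(S, K, sigma, t, r=0.0):
    """Black-Scholes call delta Phi(d1) and risk-neutral P(S_T > K) = Phi(d2)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return norm_cdf(d1), norm_cdf(d2)

S, K, t = 100.0, 120.0, 1.0
for sigma in (0.2, 1.0, 5.0, 20.0):
    delta, p_exceed = call_delta_and_prob(S, K, sigma, t)
    # delta climbs toward 1 while p_exceed collapses toward 0 as sigma grows
```

The wedge between the two is exactly \sigma\sqrt{t}: as volatility explodes, d_1 \to +\infty and d_2 \to -\infty simultaneously.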

More generally, a \mathcal{LN}[a,b] has mean e^{a+\frac{b^2}{2}}, standard deviation e^{a+\frac{b^2}{2}}\sqrt{e^{b^2}-1}, and CDF \frac{1}{2} \text{erfc}\left(\frac{a-\log (K)}{\sqrt{2} b}\right). We can find a parametrization producing weird behavior in time as t \to \infty.
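The general-form mean e^{a + b^2/2} can be cross-checked against a direct numerical integration of x times the \mathcal{LN}[a,b] density (stdlib only; the integration bounds and step count are my own crude choices):

```python
import math

def ln_pdf(x, a, b):
    """Density of LN[a, b]: the log of the variable is Normal(a, b)."""
    return math.exp(-(math.log(x) - a) ** 2 / (2 * b * b)) \
        / (x * b * math.sqrt(2 * math.pi))

def numeric_mean(a, b, hi=100.0, n=100_000):
    """Crude Riemann sum of x * pdf(x) over (0, hi]; tail above hi is negligible
    here because the integrand decays like a Gaussian in log(x)."""
    h = hi / n
    return sum((i * h) * ln_pdf(i * h, a, b) * h for i in range(1, n + 1))

a, b = 0.0, 1.0
closed_form = math.exp(a + b * b / 2)  # e^{a + b^2/2}
numeric = numeric_mean(a, b)           # agrees to well under 1%
```

For large b the truncation at `hi` would bite, which is again the paradox in miniature: the mean migrates into a tail region that carries almost no probability.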

Thanks: Micah Warren who presented a similar paradox on Twitter.
