Risk modelling consulting: Monte Carlo speed optimizations

Anyone who has worked in financial risk modelling can testify that Monte Carlo simulations can take a long time to run – sometimes an entire day or longer. The problem this causes is that model testing – particularly sensitivity testing around the inputs, parameters and methodologies – ends up taking weeks or even months. The loss in productivity (and in the modeller's sanity) is huge.

Here is a really simple technique that can cut run time by as much as half.

Often, risk modellers are interested in simulating from an aggregate distribution that is the sum of multiple other distributions. Usually it is the very high quantiles of this aggregate distribution that they want to estimate, such as the 99.9% quantile or 1-in-1,000 event. They would therefore like to generate a lot of losses from the tail of the distribution, to get sufficient resolution for estimating the high quantiles. Unfortunately, by definition most of the losses generated during a Monte Carlo simulation will not fall in the tail of the constituent distributions, and 50% will even fall below the median. For heavy-tailed distributions, these small losses usually don't contribute much to the large losses in the aggregate distribution. This means that a tremendous amount of computational power is spent generating high-density losses in the body of the distribution, just to get a handful of losses in the tail.

The solution: re-use those body losses! Put a cap on the number of body losses you wish to generate – say, 10,000 losses below the 90% quantile. Before beginning the Monte Carlo simulation, generate these 10,000 losses and store them in a vector. Now, whenever the simulation calls for a loss below the 90% quantile, simply look up the nearest corresponding loss in this vector. In other words, if the random number generator produces 0.44, we calculate 0.44 × 10,000 = 4,400, round to the nearest whole number, and use the loss at that index. This lookup is much faster than the calculations required to invert a lognormal CDF. Losses above the 90% quantile, however, should continue to be generated individually to preserve accuracy in the tail.
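To make this concrete, here is a minimal Python sketch of the idea, assuming a lognormal severity distribution. The parameter values, the 90% cap and the 10,000-entry cache are illustrative, matching the numbers above:

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(42)

# Illustrative lognormal severity parameters (assumed for this sketch).
MU, SIGMA = 10.0, 2.0
dist = lognorm(s=SIGMA, scale=np.exp(MU))

BODY_QUANTILE = 0.90   # losses below this quantile come from the cache
N_BODY = 10_000        # size of the pre-generated loss vector

# Pre-generate the body losses once: entry i holds the loss at
# quantile i / N_BODY, so index 4,400 corresponds to u = 0.44.
body_losses = dist.ppf(np.arange(N_BODY) / N_BODY)

def sample_loss(u):
    """Map a uniform draw u in [0, 1) to a simulated loss."""
    if u < BODY_QUANTILE:
        # Body: look up the nearest cached loss instead of inverting
        # the CDF again, e.g. u = 0.44 -> round(0.44 * 10,000) = 4,400.
        idx = min(int(round(u * N_BODY)), N_BODY - 1)
        return body_losses[idx]
    # Tail: invert the CDF exactly to preserve accuracy above the cap.
    return dist.ppf(u)

losses = np.array([sample_loss(u) for u in rng.random(100_000)])
print(f"Estimated 99.9% quantile: {np.quantile(losses, 0.999):,.0f}")
```

In a production model you would vectorise the body lookup over a whole array of uniform draws (e.g. `body_losses[np.round(u * N_BODY).astype(int)]`) rather than looping per draw, but the per-draw version mirrors the description above.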

While there are more sophisticated approaches to improving Monte Carlo performance, a simple approach like this is quick to implement and less prone to errors.

Another handy tip is that, for certain distributions, there is a shortcut that doesn't require you to regenerate all the losses when you change a parameter. For a lognormal distribution, for example, each simulated loss takes the form \(\exp(\mu + \sigma Z)\) for a standard normal draw \(Z\), so the losses are proportional to \(\exp(\mu)\). This means that if you wish to change the \(\mu\) parameter from \(\mu_1\) to \(\mu_2\), all you have to do is scale all the losses by \(\exp(\mu_2-\mu_1)\)! This can save a tremendous amount of time when conducting sensitivity testing.
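As a quick illustration, here is a sketch of the rescaling shortcut with illustrative parameter values. The final check confirms that rescaling matches regenerating the losses with the new \(\mu\) and the same underlying normal draws:

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2, sigma = 10.0, 10.5, 2.0   # illustrative lognormal parameters

z = rng.standard_normal(100_000)     # underlying standard normal draws
losses_mu1 = np.exp(mu1 + sigma * z) # losses under the original mu

# Shortcut: because each loss is exp(mu + sigma*z), changing mu from
# mu1 to mu2 multiplies every loss by the same constant exp(mu2 - mu1),
# so there is no need to regenerate anything.
losses_mu2 = losses_mu1 * np.exp(mu2 - mu1)

# Sanity check against direct regeneration with the same draws.
assert np.allclose(losses_mu2, np.exp(mu2 + sigma * z))
```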

Does your bank have Monte Carlo models which take all day to run? Why not drop us an email to find out how our financial modelling consulting services can work for you?