Monte Carlo Simulation and VAR
I. Background and History
Market risk has evolved over the course of the last 20 years, moving from a very simplistic approach to a sophisticated one. Back in the early 1980s, risk management was still considered by some to be the backwater of financial services, a place for ex-traders or people who wanted to be traders.
Then came the Black Monday stock market crash of October 19, 1987. In a single day, the Dow Jones Industrial Average lost 22.61% of its value - its largest one-day drop for more than 70 years. Portfolio insurance and program trading were initially blamed for the drop - factors that were little understood by bank senior management. All of a sudden, risk management's stock had risen.
Today, risk management is no longer just an afterthought, primarily aimed at satisfying the regulators. Rather than being locked away in a cubbyhole somewhere, risk managers are now often situated on or close to the trading floor. They play a key part in the investment process and in group strategy as a whole, and weak risk management processes can now negatively affect a firm's share performance. Perhaps most tellingly, risk management is no longer just a job for ex-traders - it's a real career and attracts the very brightest from the quant community.
Market risk refers to changes in the value of financial instruments or contracts held by a firm due to unpredictable fluctuations in the prices of traded assets and commodities, as well as fluctuations in interest rates, exchange rates and other market indices.
Banking supervisors have taken a special interest in codifying risks and in setting standards for their assessment. Their purpose is essentially prudential: to strengthen the soundness and stability of the international banking system whilst preserving fair calculations of risk sensitivities.
II. Scope and Approach
This article will be focused on the definition of the Monte Carlo Approach and its application in the calculation of Value at Risk. The intended audience is the Risk Management IT team of Mizuho International in London.
This article follows a down-to-earth approach to the understanding of Monte Carlo VAR, with almost no time spent on mathematical derivations of the formulas.
III. Definition of VAR
VAR is an estimate of the loss from a clearly defined portfolio, over a given holding period, that can be equalled or exceeded with a given probability. If we apply that definition to the property portfolio of an executive working in the City, it is the estimated drop in the property value that could occur during a day or a week with a probability of 1% or 5% (corresponding respectively to a 99% or 95% confidence level).
As the definition clearly states, VAR is not an exact and uniquely defined figure, but an estimate based on a key assumption: the model that is supposed to drive the assets in the portfolio. To come back to our executive's portfolio, the VAR will be estimated based on the process that drives house price returns in London.
The portfolio under review is fixed for the period in question. For a trading portfolio, this period can reasonably be set between 1 day and 1 month. For example, the Basel Accord capital adequacy rules stipulate that internal models used in the calculation of VAR should consider a holding period of 10 days (2 weeks). For a credit portfolio (Credit VAR), this period is generally set to 1 year.
It is important to note that VAR does not address the potential losses on the rare occasions when the VAR figure is exceeded. It is therefore not correct to refer to the VAR estimate as the "worst-case loss".
The use of VAR involves two arbitrarily chosen parameters – the holding period and the confidence level:
The holding period for Market Risk is usually between 1 day and 1 month.
The choice of the confidence level depends mainly on the purpose to which the risk measures are being put. A very high confidence level, often as great as
99.97%, is appropriate if we are using risk measures to set capital requirements. IV. How is VAR Calculated
There are three main approaches to calculate VAR:
Analytic VAR (or Parametric VAR)
Historical Simulation VAR
Monte Carlo Simulation VAR
IV.1 Analytical VAR
Analytical VAR is based on the assumption that portfolio assets returns are normally distributed. In this case, the VAR is given directly by a formula that is easy to use.
The analytical VAR is a quick (and often dirty) VAR calculation approach, suitable in most cases only for simple and linear portfolios (no complex or non-linear products such as convertible bonds, default swaps, options, etc.).
To come back to our City executive, assume that the return on his property follows a normal distribution with a mean μ = 0 and a volatility σ = 0.02, and that the portfolio is worth £1 million. The VAR on that portfolio for a holding period of 1 day and a 95% confidence level is approximated by the following formula:
VAR = Notional * σ * Percentile of the Distribution
    = 1,000,000 * 0.02 * (-1.644845)
    = -£32,897
The exact formula for the analytical VAR is:
VAR = -(Zα * σ + μ) * S
Zα is the percentile of the standard normal distribution corresponding to the chosen confidence level (for example -1.644845 at 95%)
σ is the standard deviation of the distribution (volatility)
μ is the mean of the distribution
S is the current value of the portfolio
(Note that with the leading minus sign this formula reports VAR as a positive loss amount, whereas the numerical example above quotes it as a signed P/L.)
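The formula above can be sketched in a few lines of Python. This is an illustrative sketch only: the function name and the use of the standard library's NormalDist are our own choices, not part of any particular risk system.

```python
from statistics import NormalDist

def analytical_var(notional, sigma, mu=0.0, confidence=0.95):
    """Parametric (analytical) VaR: (Z_alpha * sigma + mu) * S.

    Z_alpha is the (1 - confidence) percentile of the standard normal
    distribution; the result is a signed P/L figure (a loss is negative).
    """
    z = NormalDist().inv_cdf(1.0 - confidence)  # e.g. -1.644854 at 95%
    return (z * sigma + mu) * notional

# The City executive's GBP 1,000,000 property portfolio, sigma = 2% daily:
print(round(analytical_var(1_000_000, 0.02)))  # -32897
```

Raising the confidence level makes the figure more severe, e.g. `analytical_var(1_000_000, 0.02, confidence=0.99)` gives roughly -46,527.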
IV.2 Historical Simulation VAR
Historical simulation is a very different approach to VAR estimation. The idea here is that we estimate VAR without making strong assumptions about the distribution of returns. We try to let the data speak for themselves as much as possible and use the empirical return distribution (not some assumed theoretical distribution) to estimate our VAR.
This type of approach is based on the underlying assumption that the near future will be like the recent past, and that we can reasonably use data from the past to estimate risks over the near future.
IV.2.1 The basic Method
1. We first construct a hypothetical Profit and Loss series for our current portfolio over a specific historical period.
   a. Let's apply that to our City executive's property portfolio. The first thing to remember is that the hypothetical Profit and Loss of the property is the difference between the buying price and the actual price.
   b. So if we assume that the actual price of the property is £1 million and that we have daily price fluctuations for the past 4 years (250 days * 4 = 1,000 days), we can construct a series of 1,000 Profit and Loss figures for the property portfolio.
2. Having obtained our hypothetical Profit and Loss data, we can estimate VAR by sorting all the Profit and Loss data in descending order and reading the appropriate percentile.
   a. In the City executive's case, we sort the 1,000 Profit and Loss figures in descending order and read the appropriate percentile at the 95% confidence level, which is the 1,000 * 0.95 = 950th value in the sorted series. (If we sort from 1 to 1,000 and the 950th value in the series is -£36,000, then the VAR is -£36,000.)
This basic approach of historical simulation rests on the simplistic assumption that each day in the past holds the same weight in our P/L distribution. That is clearly not the case in practice. There are sophisticated variants of the historical simulation approach in which different weights are given to the P/L data depending on the observation period (data observed during a property market boom or crash should not hold the same weight in the Profit and Loss distribution; for instance, more weight should be given to recent data than to data observed 5 years ago).
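The basic method above can be sketched as follows. The toy normal P/L series is an assumption made purely so that there is data to sort; the method itself takes whatever historical P/L series you have.

```python
import random

def historical_var(pnl_series, confidence=0.95):
    """Historical-simulation VaR: sort the P/L series in descending
    order and read the value at the confidence percentile."""
    ordered = sorted(pnl_series, reverse=True)
    index = int(len(ordered) * confidence) - 1  # e.g. the 950th of 1,000 (1-indexed)
    return ordered[index]

# 1,000 hypothetical daily P/L figures for the GBP 1m property portfolio,
# drawn here from a toy normal model only to have something to sort:
random.seed(42)
pnl = [random.gauss(0, 20_000) for _ in range(1_000)]
print(round(historical_var(pnl)))
```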
IV.3 Monte Carlo Simulation VAR
IV.3.1 Introduction to Monte Carlo Methods
Numerical methods known as Monte Carlo methods can be described as statistical simulation methods, where statistical simulation is defined in quite general terms as any method that utilizes sequences of random numbers to perform the simulation. Monte Carlo methods have been used for centuries, but only in the past several decades has the technique gained status as a valuable method for numerical calculations.
Monte Carlo methods were originally practiced under more generic names such as "statistical sampling". The "Monte Carlo" designation, popularized by early pioneers in the field (including Stanislaw Marcin Ulam, Enrico Fermi, John von Neumann and Nicholas Metropolis), is a reference to the famous casino in Monaco: the method's use of randomness and the repetitive nature of the process are analogous to the activities conducted at a casino. Stanislaw Marcin Ulam tells in his autobiography "Adventures of a Mathematician" that the method was named in honor of his uncle, who was a gambler, at the suggestion of Metropolis.
Monte Carlo is now used routinely in many diverse fields, from the simulation of complex physical phenomena such as radiation transport in the earth's atmosphere and the simulation of esoteric subnuclear processes in high-energy physics experiments, to the valuation of financial derivatives.
IV.3.2 Monte Carlo Simulation in practice
Monte Carlo simulation, or probability simulation, is a technique used to understand the impact of risk and uncertainty in financial, project management, cost, and other forecasting models.
IV.3.2.1 Uncertainty in Forecasting Models
When you develop a forecasting model – any model that plans ahead for the future – you make certain assumptions. These might be assumptions about the investment return on a portfolio, the cost of an IT project, or how long it will take to complete a certain task. Because these are projections into the future, the best you can do is estimate the expected value.
You can't know with certainty what the actual value will be, but based on historical data, expertise in the field, or past experience, you can draw an estimate. While this estimate is useful for developing a model, it contains some inherent uncertainty and risk, because it's an estimate of an unknown value.
IV.3.2.2 Estimating Ranges of Values
In some cases, it's possible to estimate a range of values. In an IT project, you might estimate the time it will take to complete a particular task; based on some expert knowledge, you can also estimate the absolute maximum time it might take, in the worst possible case, and the absolute minimum time, in the best possible case. The same could be done for project costs. In a financial market, you might know the distribution of possible values through the mean and standard deviation of returns.
By using a range of possible values, instead of a single guess, you can create a more realistic picture of what might happen in the future. When a model is based on ranges of estimates, the output of the model will also be a range.
This is different from a normal forecasting model, in which you start with some fixed estimates – say the time it will take to complete each of three parts of a project – and end
up with another value – the total time for the project. If the same model were based on ranges of estimates for each of the three parts of the project, the result would be a range of times it might take to complete the project. When each part has a minimum and maximum estimate, we can use those values to estimate the total minimum and maximum time for the project.
IV.3.2.3 What Monte Carlo Simulation can tell you
When you have a range of values as a result, you are beginning to understand the risk and uncertainty in the model. The key feature of a Monte Carlo simulation is that it can tell you – based on how you create the ranges of estimates – how likely the resulting outcomes are.
IV.3.3 How It Technically Works
In a Monte Carlo simulation, a random value is selected for each of the tasks (e.g. duration for an IT project), based on the range of estimates. The model is calculated based on this random value. The result of the model (e.g. total duration of the project) is recorded and the process is repeated. A typical Monte Carlo simulation calculates the model hundreds or thousands of times, each time using different randomly-selected values.
When the simulation is complete, we have a large number of results from the model, each based on random input values. These results are used to describe the likelihood, or probability, of reaching various results in the model.
IV.3.3.1 One Example
For example, consider the following question asked by a bank IT Director: give an estimate of the total time it will take to complete a Risk IT project. The project is divided into three tasks, and each task has to be done one after the other, so the total time for the project will be the sum of the parts. To draw an analogy with risk management vocabulary, the risk factors of this model are the task durations.
Project Time Estimate
Task1 5 Months
Task2 4 Months
Task3 5 Months
Total 14 Months
This is an example of a basic forecasting model, and it is currently the approach taken by most people when forecasting project duration. This model gives us a result for the total time: 14 months. But this value is based on three estimates, each of which is an unknown value. It might be a good estimate, but this model can't tell us anything about risk or probability. We can't answer basic questions like: how likely is it that the project will be completed on time?
A more advanced model will be a forecast based on an estimation of the minimum, maximum and most likely expected time to complete the project.
Project Minimum Most Likely Maximum
Task1 4 Months 5 Months 7 Months
Task2 3 Months 4 Months 6 Months
Task3 4 Months 5 Months 6 Months
Total 11 Months 14 Months 19 Months
This model contains a bit more information. However, we still can't answer the question about the probability of completing the project on time.
By using a Monte Carlo simulation, we randomly generate values for each of the task durations, then calculate the total time to completion. If we run 500 simulations, we will be able to describe some of the characteristics of the risk in the model.
To test the likelihood of a particular result, we count how many times the model returned that result in the simulation. In this case, we want to know how many times the result was less than or equal to a particular number of months.
Time Number of times (Out of 500) Percent of Total
12 Months 1 0%
13 Months 31 6%
14 Months 171 34%
15 Months 394 79%
16 Months 482 96%
17 Months 499 100%
18 Months 500 100%
The original estimate for the "most likely" or expected case was 14 months. From the Monte Carlo simulation, we can see that out of 500 trials using random values, the total time was 14 months or less in only 34% of the cases.
Put another way, there is a probability of 34% that the project will be completed in 14 months or less. We can also deduce from the results that there is a probability of virtually 100% (499 out of 500 trials) that the project will be completed in 17 months or less.
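The project example can be reproduced with a short simulation. We assume here, purely for illustration, that each task duration follows a triangular distribution with the minimum, most likely and maximum values from the table; the article does not specify which distribution was used, so the exact percentages will differ.

```python
import random

random.seed(7)
# (minimum, most likely, maximum) duration in months, from the table above:
TASKS = {"Task1": (4, 5, 7), "Task2": (3, 4, 6), "Task3": (4, 5, 6)}

def simulate_total(trials=500):
    """Draw each task duration at random and sum them, once per trial."""
    totals = []
    for _ in range(trials):
        totals.append(sum(random.triangular(lo, hi, mode)
                          for lo, mode, hi in TASKS.values()))
    return totals

totals = simulate_total()
for months in range(12, 19):
    share = sum(t <= months for t in totals) / len(totals)
    print(f"{months} months or less: {share:.0%}")
```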
IV.3.3.2 How reliable is it?
Like any forecasting model, the simulation will only be as good as the estimates you make. It is important to remember that the simulation represents probabilities, not certainties. Nevertheless, Monte Carlo simulation can be a valuable tool when forecasting an unknown future.
IV.3.4 Monte Carlo Simulation Applied to the Calculation of VAR
The essence of this approach is first to define the problem – specify the random process for the risk factors of the portfolio, the ways in which they affect our portfolio and then simulate a large number of possible outcomes based on these assumptions.
Each simulation trial leads to a possible Profit and Loss (P/L). If we simulate enough trials, we can produce a simulated density for our P/L (a P/L series, as in the historical simulation) and read off the VAR as the lower percentile of that density (or the Nth row in the P/L data series).
Let’s apply that to our city executive property portfolio.
1. We need to define the random process that drives the evolution of the portfolio value.
   a. Let's assume that the portfolio return follows a normal distribution (a basic model, easy to understand and simulate).
2. We assume as well that the initial value of the portfolio is 100,000 GBP and that we want to see the VAR of the portfolio over a 1-year period.
   a. Assume a rate of return of 5.4% over the year.
Year  Opening Balance  Return  Gross  Closing Balance
1     100,000          5.40%   5,400  105,400

P/L = Closing Balance - Opening Balance = 105,400 - 100,000 = 5,400

The expected P/L after 1 year is 5,400.
3. Now let's run a Monte Carlo simulation. The risk factor in our model is the rate of return of our portfolio.
   a. Assume that the risk factor follows a normal distribution with a mean of 5.4% and a standard deviation of 7.3%.
   b. We can generate random rates of return using the NORMINV function in Excel, for example NORMINV(RAND(), 0.054, 0.073). (NORMDIST returns the distribution function itself, not a random draw.)
   c. Based on 100 simulation trials, here is the output generated:
Simulation Trial  Opening Balance  Return    Gross     Closing Balance  P/L
1                 100,000            4.53%    4,529     104,529           4,529
2                 100,000           -1.67%   (1,666)     98,334          (1,666)
3                 100,000            2.74%    2,744     102,744           2,744
4                 100,000          -12.79%  (12,787)     87,213         (12,787)
5                 100,000            0.37%      374     100,374             374
6                 100,000           10.49%   10,494     110,494          10,494
7                 100,000           -6.44%   (6,441)     93,559          (6,441)
8                 100,000           -5.81%   (5,806)     94,194          (5,806)
----              100,000            1.07%    1,070     101,070           1,070
95                100,000            5.83%    5,829     105,829           5,829
96                100,000            9.95%    9,951     109,951           9,951
97                100,000            2.24%    2,238     102,238           2,238
98                100,000            6.09%    6,086     106,086           6,086
99                100,000           12.37%   12,369     112,369          12,369
100               100,000           11.14%   11,142     111,142          11,142
4. Looking at the VAR at a 95% confidence level:
   a. It means we are looking at the losses that the portfolio can incur with a probability of 5%.
   b. To get that figure, we sort the P/L in descending order and read the 100 * 95% = 95th P/L value in the descending order.

Simulation Trial  Opening Balance  Return  Gross  Closing Balance  P/L
78 100,000 23.82% 23,824 123,824 23,824
40 100,000 22.42% 22,421 122,421 22,421
24 100,000 21.29% 21,294 121,294 21,294
68 100,000 20.93% 20,935 120,935 20,935
96 100,000 20.20% 20,198 120,198 20,198
17 100,000 20.02% 20,017 120,017 20,017
50 100,000 19.61% 19,605 119,605 19,605
44 100,000 19.17% 19,167 119,167 19,167
41 100,000 18.74% 18,735 118,735 18,735
64 100,000 18.56% 18,564 118,564 18,564
95 100,000 17.98% 17,984 117,984 17,984
10 100,000 17.47% 17,470 117,470 17,470
100 100,000 16.92% 16,922 116,922 16,922
---- 100,000 3.15% 3,146 103,146 3,146
28 100,000 2.71% 2,715 102,715 2,715
85 100,000 2.62% 2,621 102,621 2,621
36 100,000 2.29% 2,289 102,289 2,289
52 100,000 1.99% 1,995 101,995 1,995
55 100,000 1.93% 1,931 101,931 1,931
3 100,000 1.43% 1,433 101,433 1,433
63 100,000 0.91% 908 100,908 908
27 100,000 0.68% 680 100,680 680
74 100,000 0.11% 109 100,109 109
98 100,000 -1.00% (1,003) 98,997 (1,003)
35 100,000 -1.40% (1,397) 98,603 (1,397)
25 100,000 -1.51% (1,510) 98,490 (1,510)
86 100,000 -1.75% (1,749) 98,251 (1,749)
70 100,000 -1.95% (1,954) 98,046 (1,954)
5 100,000 -2.19% (2,186) 97,814 (2,186)
22 100,000 -2.27% (2,274) 97,726 (2,274)
26 100,000 -3.55% (3,550) 96,450 (3,550)
49 100,000 -3.57% (3,570) 96,430 (3,570)
69 100,000 -3.90% (3,901) 96,099 (3,901)
72 100,000 -4.08% (4,082) 95,918 (4,082)
92 100,000 -4.45% (4,455) 95,545 (4,455)
51 100,000 -4.81% (4,810) 95,190 (4,810)
75 100,000 -5.78% (5,783) 94,217 (5,783)
65 100,000 -5.92% (5,922) 94,078 (5,922)
1 100,000 -7.35% (7,352) 92,648 (7,352)
7 100,000 -9.20% (9,199) 90,801 (9,199)
42 100,000 -15.05% (15,049) 84,951 (15,049)

The VAR of that portfolio is -4,810 GBP. It is the 95th P/L in descending order after 100 simulations, at a 95% confidence level.
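The whole exercise can be sketched in a few lines of Python. The function name is our own; the mean, volatility and trial count come from the article's example, though a different random seed will of course give a slightly different figure than the -4,810 GBP above.

```python
import random

def monte_carlo_var(notional, mu, sigma, trials=100, confidence=0.95, seed=1):
    """Monte Carlo VaR: simulate returns from the assumed normal model,
    build the P/L series, sort it and read the confidence percentile."""
    rng = random.Random(seed)
    pnl = [notional * rng.gauss(mu, sigma) for _ in range(trials)]
    pnl.sort(reverse=True)                    # descending, best P/L first
    return pnl[int(trials * confidence) - 1]  # the 95th of 100 trials

# The article's set-up: 100,000 GBP, mean return 5.4%, volatility 7.3%:
print(round(monte_carlo_var(100_000, 0.054, 0.073)))
```

With many more trials the estimate converges toward the analytical value, mu - 1.645 * sigma times the notional, roughly -6,600 GBP; 100 trials is a deliberately coarse sample.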
IV.3.5 Monte Carlo Simulation VAR at Mizuho International
This part covers in a bit more technical detail the steps required to calculate Monte Carlo VAR for a global financial portfolio, with an emphasis on the role of the different components of the VAR calculation (VCV, Scenario…).
IV.3.5.1 Model Definition
The first step is to specify the random process (forecast model) that the risk factors in Riskwatch will follow. There are three main families of risk factors in Riskwatch: interest rates, foreign exchange rates and stock indices.
The goal of this step is to ensure that the forecast model is a realistic approximation of the reality, namely the future evolution of the risk factors in a day, a week or a month.
The model selected by Riskwatch (MHI implementation) is governed by the stochastic differential equation:
dLog(Ri(t)) = dWi(t)
The numerical representation of this model (the one that is directly implemented) is:
Log(Ri(t+Δt)) - Log(Ri(t)) = ξi * √Δt
This equation represents the evolution of the risk factor Ri over the period Δt.
ξi is a vector of independently distributed random numbers, following a normal distribution with mean 0 and variance Q.
That is often written ξi ~ N(0, Q).
Q is the covariance matrix of the risk factor returns (more precisely, the covariance matrix of the logarithmic returns of the risk factors).
Qij = σiσjρij
σi: standard deviation of risk factor Ri
σj: standard deviation of risk factor Rj
ρij: correlation between risk factors Ri and Rj
IV.3.5.2 Important
Standard deviation is a statistical term that provides a good indication of volatility. It measures how widely values (closing prices for instance) are dispersed from the average. Dispersion is the difference between the actual value (closing price) and the average value (mean closing price). The larger the difference between the closing prices and the average price, the higher the standard deviation will be and the higher the volatility. The closer the closing prices are to the average price, the lower the standard deviation and the lower the volatility.
The volatility is an annualized standard deviation: the standard deviation of daily returns multiplied by √252, assuming 252 business days in a year. Equally, it is the standard deviation of monthly returns multiplied by √12, for the 12 months in a year.
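The scaling rule can be checked with a one-line helper (a sketch of the square-root-of-time rule, not part of Riskwatch):

```python
import math

def annualize(std_dev, periods_per_year):
    """Scale a per-period standard deviation to an annualized volatility."""
    return std_dev * math.sqrt(periods_per_year)

print(f"{annualize(0.02, 252):.2%}")  # 2% daily vol -> 31.75% annualized
print(f"{annualize(0.05, 12):.2%}")   # 5% monthly vol -> 17.32% annualized
```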
In Riskwatch, risk factor returns are log-normally distributed, meaning that the value of a risk factor can't become negative. A risk factor can alternatively be normally distributed (in which case its value, for instance an interest rate, can be negative).
The major difficulty in the implementation of the Monte Carlo simulation is to generate the right-hand side of the following equation: Log(Ri(t+Δt)) - Log(Ri(t)) = ξi * √Δt
IV.3.5.3 How do we generate ξi * √Δt
ξi is a vector of independent, normally distributed random numbers.
a. Generation of Random Number uniformly distributed
To simulate that random vector, we use a uniform random number generator, like the Excel RAND() function. Riskwatch uses the C library function erand48. This function generates pseudo-random numbers using a linear congruential algorithm and 48-bit integer arithmetic. It returns non-negative double-precision floating point values uniformly distributed over the interval [0.0, 1.0).
b. Transformation of the uniformly distributed random number to normally distributed random numbers
This step is performed because we are trying to generate normally distributed random numbers, as required by the Monte Carlo simulation formula.
The general approach to generating random numbers x from an arbitrary continuous distribution with distribution function F(x) is to generate y uniformly in [0, 1] (as we did in step a) and then to solve y = F(x); x is obtained by applying the inverse, x = F⁻¹(y).
However, in the specific case of generating normal random number (and not any arbitrary distribution), the Box-Muller algorithm is used.
The Box-Muller method maps two independent uniform variates, U and V (as generated in step a), to two independent standard normal variates, X and Y, by the transformation
(X, Y) = R * (cos θ, sin θ)
where
R = √(-2 ln U) and θ = 2πV.
Riskwatch uses an alternative algorithm, the Marsaglia polar method, to generate standard normal variates. This algorithm is a modification of the Box-Muller method that avoids the calculation of sines and cosines. For more details, refer to the Riskwatch documentation.
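A minimal Box-Muller implementation, assuming nothing more than a uniform generator (the guard against u = 0 avoids taking log(0)):

```python
import math
import random

def box_muller(rng):
    """Map two independent U[0,1) variates to two independent N(0,1) variates."""
    u = max(rng.random(), 1e-300)  # guard: log(0) is undefined
    v = rng.random()
    r = math.sqrt(-2.0 * math.log(u))
    theta = 2.0 * math.pi * v
    return r * math.cos(theta), r * math.sin(theta)

rng = random.Random(123)
draws = [x for _ in range(5_000) for x in box_muller(rng)]
mean = sum(draws) / len(draws)
var = sum(d * d for d in draws) / len(draws)
print(round(mean, 2), round(var, 2))  # sample mean near 0, variance near 1
```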
IV.3.5.4 Generation of Correlated Random Variables
In the previous section, we transformed uniformly distributed random variables into normally distributed random variables. In most cases, the random variables are correlated (for instance, there is a correlation between a 3-month Libor rate and a 6-month Libor rate). In order to include those correlations in the VAR calculation, the independent normal variables generated in the previous section are transformed into correlated random variables.
A technique used in that instance is the Cholesky decomposition. This is a matrix algebra transformation that is quite easy to comprehend.
Assume we have the variance-covariance matrix V of the risk factors. We want to transform independent random variables (representing the risk factors) into correlated random variables. The technique proceeds as follows:
V = L * Lᵀ
V is the variance-covariance matrix
L is a lower-triangular matrix (the Cholesky factor) and Lᵀ is its transpose
Once L is identified, the vector of independent random variables is multiplied by L to obtain a vector of correlated random variables. This vector represents ξi.
We can then run the simulation, generating 1,000 scenario values of the risk factors, revaluing the positions with these factors and calculating the P/L. The VAR is then read as the x% quantile of the distribution, by sorting the P/L results in descending order and reading the corresponding quantile value.
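The Cholesky step can be sketched in pure Python for a two-factor case. The volatilities and correlation below are arbitrary illustration values, not Riskwatch parameters.

```python
import math
import random

def cholesky(V):
    """Lower-triangular L with V = L * L^T (V symmetric positive definite)."""
    n = len(V)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(V[i][i] - s)
            else:
                L[i][j] = (V[i][j] - s) / L[j][j]
    return L

def correlate(L, z):
    """Turn a vector of independent standard normals into correlated variates L*z."""
    return [sum(L[i][k] * z[k] for k in range(len(z))) for i in range(len(z))]

# Two risk factors, each with 2% volatility and a 0.6 correlation:
sigma, rho = 0.02, 0.6
V = [[sigma**2, rho * sigma**2],
     [rho * sigma**2, sigma**2]]
L = cholesky(V)
rng = random.Random(0)
correlated = correlate(L, [rng.gauss(0, 1), rng.gauss(0, 1)])
print(correlated)
```

For this V the factor works out to L = [[0.02, 0], [0.012, 0.016]], since sqrt(1 - 0.6²) = 0.8.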
Riskwatch uses a related matrix factorization in the calculation of VAR: the Singular Value Decomposition (SVD).
Auguste Nguetsop, MSc, MBA, PRM, is a Managing Director at RiskWave, a quantitative risk management consultancy firm based in London.
References
Risk Analytics in Riskwatch, Version 4.0.2, Algorithmics Riskwatch Documentation
Value at Risk: The New Benchmark for Managing Financial Risk, Second Edition, Philippe Jorion
Monte Carlo Methods in Financial Engineering, Paul Glasserman, Springer