which should generally be performed on a case-by-case basis
and cover at least three distinct areas: (a) the choice, testing and safekeeping
of the mathematics and computer code that form the model;
(b) the choice of inputs and calibration of models to market data; and
(c) the management issues associated with these activities. The success
of model risk management and control will often depend crucially on
personal judgement and experience. Therefore, the following set of rules
should not be considered as a series of recipes, but rather as a minimum
checklist.
Rule 1: Define what a good model should be
Before qualifying one model as better than another, one needs to
define the model-quality metrics precisely. What makes a model superior to
another often varies with its intended use. Consider, for instance, a pricing
model versus a risk management model. The former must closely match
the values of known liquid securities and focuses on absolute prices today,
while the latter needs to realistically represent the possible future evolution
of market variables and focuses on relative variations tomorrow. These goals
are sufficiently different to result in divergent rankings of competing
models, depending on the context. Therefore, a good pricing model will not
necessarily be a good risk-management model, and vice versa. Nor will
the hypotheses or conclusions reached for the pricing model necessarily
apply to the hedging model.
In addition to the model usage, what makes a model superior to another
also varies according to the preferences of the model user and the necessary
underlying assumptions. Unfortunately, practitioners often understate or
neglect this aspect. For instance, it is common practice to test an option pricing
model’s quality by comparing the model’s predicted prices with market
prices, and using some sort of loss function such as the mean pricing error
(with respect to the market), the mean absolute pricing error or the mean
squared pricing error. The approach implicitly assumes that (i) model users
display symmetric preferences (for example, equally consider under- and
over-pricing, or care equally about losses and gains); (ii) the option pricing
model is correct; and (iii) the market is efficient in its pricing process.
If the option pricing model is rejected, we do not know whether (ii) is untrue,
(iii) is untrue, or both are! And even if the model has been validated,
the validation process may not hold true for an individual displaying
asymmetric preferences (for example, downside risk aversion). It is therefore
crucial to agree on the model-quality metrics prior to ranking any type of model.
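To make the three loss functions above concrete, here is a minimal sketch in Python with invented model and market prices; the numbers carry no meaning beyond illustration:

```python
import numpy as np

# Hypothetical model and market prices for the same set of options
model = np.array([4.10, 2.35, 1.20, 0.55])
market = np.array([4.00, 2.50, 1.10, 0.60])

errors = model - market
mpe = errors.mean()            # mean pricing error: sign reveals systematic bias
mape = np.abs(errors).mean()   # mean absolute pricing error
mspe = (errors ** 2).mean()    # mean squared pricing error: penalizes outliers
print(f"MPE {mpe:+.4f}  MAPE {mape:.4f}  MSPE {mspe:.4f}")
```

Note that each of these loss functions treats an under-pricing and an over-pricing of the same magnitude identically, which is precisely the symmetric-preferences assumption (i) above.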
Rule 2: Keep track of the models in use
Fighting model risk should start with a detailed inventory of the various
models available within a financial institution. This means keeping records
of which models are used, who uses them, and how they are used. For
computer-based models, it also implies keeping track of who built them,
who keeps the code and who is allowed to change it. It is not uncommon to
see banks where the only version of the source code is stored on a magnetic
tape somewhere in the archives.
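A model inventory need not be elaborate. As a minimal sketch, a record per model could capture the items just mentioned; the structure and field names below are our own illustration, not a prescribed standard:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One entry in a hypothetical model inventory."""
    name: str
    purpose: str                  # pricing, risk management, hedging, ...
    users: list[str] = field(default_factory=list)
    developer: str = ""           # who built the model
    code_owner: str = ""          # who keeps the source and may change it
    last_validated: str = ""      # date of the last independent review

registry = [
    ModelRecord("fx-vanilla-pricer", "pricing", ["FX desk"],
                developer="quant team", code_owner="quant team",
                last_validated="2003-06-30"),
]
```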
An up-to-date inventory should significantly enhance productivity and
reliability. Typically, in a financial institution, each time a new product is
developed, there is a tendency either to quickly adapt an older model, with
or without authorized modifications (the spreadsheet syndrome), or to build
a new model from scratch (the blank sheet syndrome). Neither of these
approaches is efficient and both are potential model risk generators. By
storing models in a common library, documenting them and knowing
their users, an institution can save a significant amount of time and money
while markedly reducing model risk.
An up-to-date inventory will also help in understanding and explaining
internal divergences. For instance, traders, risk managers and back-offices
frequently use different models. This leads to an internal control problem
and opens the door to conflicts regarding unexplained profit-and-loss differences.
Although it would be preferable that all of them rely on the same
approved model, this is rather wishful thinking, because their needs are fundamentally
different. Knowing who uses what can significantly help solve
such internal control problems.
Rule 3: Define a model-testing framework
This may appear self-evident, but each financial institution should establish
a complete and rigorous model-testing framework. Too often, model
testing is limited to verifying a few mathematical derivations and entering
a few parameters in a spreadsheet to observe the model's output.
This is clearly insufficient. Data mining techniques make it easy to obtain
statistical proofs of nearly any relationship by selecting an appropriate
historical dataset. Therefore, a rigorous model-testing framework should include:
A dedicated model validation team, which should be independent of both
the models' developers and final users, to ensure impartiality and reduce
the operational risk embedded in the implementation of a model.
Independent assessment is the only way to provide a welcome degree
of comfort and useful suggestions for improvement, and to counter the
incentive to recognize profits early.
A precise framework to guide all persons involved in model validation.
This should include a standardized series of test procedures and data sets,
as well as precise minimum requirements for qualifying a model as acceptable
(the model risk metric). These should not be considered as exhaustive,
but rather as minima. For instance, whatever the option pricing model,
a deep-in-the-money call option should behave like a forward, while a
deep-out-of-the-money call should be almost worthless (see the sketch
below, after these items).
A clear formalization of internal responsibilities for validation. As a rule,
if everybody is supposed to do it, nobody will do it.
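As a minimal sketch of such a limiting-behavior test, assume for illustration that the model under scrutiny is the plain Black and Scholes formula; any candidate option pricing model should pass the same checks:

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call (the model under test)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

S, T, r, sigma = 100.0, 1.0, 0.03, 0.20

# Deep in the money: the call should collapse to a forward, S - K*exp(-rT)
K_itm = 1.0
assert abs(bs_call(S, K_itm, T, r, sigma) - (S - K_itm * np.exp(-r * T))) < 1e-6

# Deep out of the money: the call should be almost worthless
K_otm = 10_000.0
assert bs_call(S, K_otm, T, r, sigma) < 1e-6

print("limiting-behavior checks passed")
```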
It is important to realize that the role of model testing should not be
reduced to validating or invalidating a model, but should also include increasing
its reliability, revealing its weaknesses, confirming its strengths and promoting
improvements. Consequently, it is essential that (1) any information
generated during the test phase be recorded and documented; and (2) purchasing
a model from an external vendor does not exempt it from the validation process.
Rule 4: Regularly challenge and revise your models
At the root of the model risk problem lies the fact that market and mathematical
assumptions (for example, simplifications of market behavior) are often
hard-coded and remain stagnant within the model, while things do change
in real life. Consequently, models should not be carved in stone, but rather
evolve and improve with time. All models used within an institution should
be regularly revised and their adequacy for current market conditions
challenged. This process should include an analysis of the underlying
assumptions as well as a consistency check with the best-accepted practices
in the industry. In addition to this regular revision, institutions extending
existing businesses or entering new ones should also make a special effort
to reassess existing models, procedures, data and best practices before
adopting them. Very naturally, model users should be involved in the process, as they
are likely to be aware of the latest developments in the field.
Rule 5: Mark to market or to market standards, not to a model
Following the Group of Thirty’s (G30) recommendations, the calculation
of the mark-to-market value7 of derivative positions is widely practiced in
the financial industry as a natural way to avoid model risk. Unfortunately,
marking to market has its own dangers and may induce a false sense of
security and overconfidence.
For positions where there is a conventional market price (for example,
closing bid), one would expect the results of a good model to be quite close
to those observable in the market. Appreciable differences should be seen as
an early warning signal: one needs to fully understand the sources
of these differences to form an opinion of the model being tested. As an
illustration, the 1997 disaster at NatWest could have been easily avoided by
obtaining external implied volatility quotes from brokers or other institutions
that trade in the marketplace and by comparing them with NatWest's values.
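A minimal sketch of such an external comparison, with invented quotes and an arbitrary two-vol-point tolerance:

```python
# Hypothetical in-house implied volatilities versus external broker quotes
internal = {"1Y ATM": 0.21, "2Y ATM": 0.23, "5Y ATM": 0.26}
broker = {"1Y ATM": 0.21, "2Y ATM": 0.22, "5Y ATM": 0.31}

TOLERANCE = 0.02  # flag anything more than two vol points away from the street

for tenor, own in internal.items():
    if abs(own - broker[tenor]) > TOLERANCE:
        print(f"WARNING {tenor}: internal vol {own:.2%} vs broker {broker[tenor]:.2%}")
```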
For more complex or illiquid derivative instruments, marking to market
becomes a difficult exercise. When prices are not easily available, traders
tend to use theoretical prices as a benchmark, generating a significant
source of model risk (the mark-to-model syndrome). If the benchmark model is
wrong, everything can go wrong. Institutions following this route will be
particularly at risk if their traders (relying on the wrong model) are themselves
the sole providers of a given financial instrument in the market. The
market prices then coincide with the incorrect model prices, which means that
large, apparently neutral positions can in fact accumulate substantial losses
that surface only when the situation is discovered.
Although not ideal, marking to model may be acceptable if all market
participants agree on a standard. However, there are fields with no consensus
on a particular model. Consider for instance fixed income securities and
interest rate modeling. Since the valuation of most assets relies on discounting
cash flows, interest rate modeling is a very important area of finance.
However, no definitive interest rate model has yet emerged.8 This is good
news for those who wish to carry out research in this line, but it is also a
source of concern to investment banks and their regulators, as a mark-to-model
gain or loss is clearly meaningless.
Rule 6: Simple is beautiful
The development of modern financial theory has come to a stage in which
finance produces a rich source of challenging questions for a range of
mathematical disciplines, including the theory of stochastic processes and
stochastic differential equations, numerical analysis, the theory of optimization,
and statistics. Theoretical results and computational tools are used,
for instance, in the pricing of financial derivatives, for the development of
hedging strategies associated with these derivatives, and for the assessment
of risk in portfolios. Unfortunately, as the mathematics of finance reaches
higher levels, the level of common sense seems to drop. Rather than starting
with some idea, some concrete economic or physical or financial mechanism,
and then expressing it in mathematics, researchers increasingly just
write down an equation and try to solve it without any consideration of
the usefulness of the overall process or its applicability to the real world.
We believe this approach is clearly wrong: models should be based on concepts,
information and insight, not just on advanced mathematics. Although
mathematics is important to modeling, it should not be primary, but mostly
complementary. Most financial model users are fast-thinking actors
in dynamic markets. Therefore, avoiding unnecessarily complicated models
should be the rule. Whenever available, simple, intuitive and realistic
models should be preferred to complex ones.
For the same reason, model users should move to a more complex
model or approach only when there is value in doing so. In a sense, the
science of modeling should be seen as an evolutionary process, a sort of
chicken-or-egg problem. Better models should in turn allow for a better
understanding of risks, the creation of new financial products, and, therefore,
the need for additional models. As an illustration, the elegance of
the Black and Scholes model is its rationality and logic. The model was
not successful because prices of financial assets were actually log-normally
distributed (which they may or may not be), but because the formula was
easy to apply and understand, and because it arises as a valid first-order
approximation in a much wider class of models. The later stochastic
extensions of the Black and Scholes model
(for example, with stochastic interest rates and/or stochastic volatility)
were never as successful as the original model because they lost most of the
qualities of their ancestor. As a rule, users should always understand the
ideas behind a model and be comfortable with the model results. Treating a
model as a black box is definitely the wrong approach.
Rule 7: Verify your data
A few years ago, the lack of reliable financial data was a major problem. It is
still the case in a few areas (for example, the modeling of exotic derivatives
or of credit risk). However, most of the time, we are rather awash with data.
The key is turning this data into knowledge. Information is no longer
raw data, but data that has been verified and organized in a meaningful
way. The quality of a model's results depends heavily on the quality of
its data feed. Garbage in, garbage out (GIGO) is the law: data that is
faulty to start with is likely to produce faulty conclusions after processing
and may ruin the benefit of even sophisticated analytical models.
Ensuring the integrity and accuracy of data feeds in models should therefore
be key, even though it may require considerable effort and time. This
implies checking not only the data series for errors, but also the semantics
of the feed. Should the fair value be the price at which the firm could
incrementally unwind the position, the price at which it could sell
the entire book, or the price above which it starts to lose clients' interest?
These questions need to be addressed at the beginning of the modeling process.
As an illustration, in the 1970s, Merrill Lynch had to book a US$70 million
loss because it underpriced the interest component and overpriced the
principal component of a 30-year strip issue.9 The market identified the
mis-pricing and only purchased the interest component. The problem was
simply that the par-yield curve Merrill used to price both components was
different from the annuity and the zero-yield curves that should have been
used for each component. Oops! Wrong feed … As a rule, one should also
beware of multiple data sources, and non-synchronous data feeds (for example,
daily closing values of stock indices and foreign exchange rates) should be
reduced to a minimum, as they can lead to wrong pricing or create artificial
arbitrage opportunities.
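To make the mechanics concrete, here is a small sketch with an invented upward-sloping zero curve (not Merrill's actual data), showing how discounting both strip components with a single flat par yield underprices the interest component and overprices the principal component:

```python
import numpy as np

# Illustrative upward-sloping zero curve (annually compounded)
years = np.arange(1, 31)
zero = 0.04 + 0.03 * (1 - np.exp(-years / 10.0))  # 4% short end rising toward 7%
df = (1 + zero) ** -years                          # zero-coupon discount factors

# 30-year par yield: the coupon rate at which the bond prices exactly at par
par_yield = (1 - df[-1]) / df.sum()
coupon = par_yield * 100.0                         # annual coupon on face 100

annuity_zero = coupon * df.sum()                   # interest strip, zero curve (correct)
principal_zero = 100.0 * df[-1]                    # principal strip, zero curve (correct)

flat_df = (1 + par_yield) ** -years                # wrong: one flat par yield throughout
annuity_par = coupon * flat_df.sum()
principal_par = 100.0 * flat_df[-1]

print(f"interest strip:  flat par yield {annuity_par:.2f} vs zero curve {annuity_zero:.2f}")
print(f"principal strip: flat par yield {principal_par:.2f} vs zero curve {principal_zero:.2f}")
```

Both approaches value the whole bond at par, but the flat par yield shifts value from the interest component to the principal component, which is exactly the mispricing the market exploited.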
Rule 8: Use a model for what it is made for
Most models were initially created for a specific purpose. Things may start
breaking down when a model is used outside its range of usefulness, or is
not appropriate for the intended purpose. For instance, a good model for
value at risk (VaR) will not necessarily be a good pricing model. The reason
is that VaR estimates focus only on price variations, but not on price levels.
Pricing errors are therefore not reflected in the VaR. For the same reasons, a
good pricing model is not necessarily a good hedging model, and vice versa.
For example, using a stochastic or a deterministic volatility does not make
a huge difference as far as the pricing is concerned if one gets the average
volatility right. It makes a big difference as far as hedging is concerned.
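The same-price-but-different-hedge phenomenon can be illustrated with the closely related smile-dynamics question: whether implied volatility sticks to the strike or to moneyness as the spot moves. Both assumptions give the identical price today but different deltas. The smile shape and numbers below are invented for illustration only:

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

S0, K, T, r = 100.0, 110.0, 0.5, 0.02
sigma0, slope = 0.20, 0.10

def smile_vol(S, K):
    return sigma0 - slope * (K / S - 1.0)  # vol falls as moneyness K/S rises

price = bs_call(S0, K, T, r, smile_vol(S0, K))  # same under both assumptions

h = 0.01
# Sticky strike: the vol quoted at this strike does not move with the spot
delta_strike = (bs_call(S0 + h, K, T, r, smile_vol(S0, K))
                - bs_call(S0 - h, K, T, r, smile_vol(S0, K))) / (2 * h)
# Sticky moneyness: the whole smile shifts as the spot moves
delta_money = (bs_call(S0 + h, K, T, r, smile_vol(S0 + h, K))
               - bs_call(S0 - h, K, T, r, smile_vol(S0 - h, K))) / (2 * h)

print(f"price {price:.3f}, sticky-strike delta {delta_strike:.3f}, "
      f"sticky-moneyness delta {delta_money:.3f}")
```

Two desks agreeing on every price today would nonetheless run materially different hedges.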
Rule 9: Stress test your models
The G30 states that dealers should regularly perform simulations to determine
how their portfolios would perform under stress conditions. This is
often implemented through scenario analysis, which is appealing for its
simplicity and wide applicability. Unfortunately, most institutions tend to
focus solely on extreme market events such as the October 1987 crash. They
neglect to test the impact of violations of the model's hypotheses, and how
sensitive the model's answers are to its assumptions. A small change in one
parameter may result in dramatic changes in the model output, while a large
change in another parameter may not necessarily change things at all.
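A minimal sensitivity scan makes such asymmetries visible. As an illustration, each input of a plain Black and Scholes call price is bumped by one percent of its value and the relative change in the output recorded:

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

base = dict(S=100.0, K=100.0, T=0.25, r=0.03, sigma=0.20)
p0 = bs_call(**base)

# Bump each input by 1% of its value; compare relative price changes
for name in base:
    bumped = dict(base)
    bumped[name] *= 1.01
    print(f"{name}: {(bs_call(**bumped) - p0) / p0:+.2%}")
```

With these (invented) inputs, a one percent bump in the interest rate moves the price by less than a tenth of a percent, while the same relative bump in the spot moves it by more than ten percent.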
Because there is no standard way of carrying out model risk stress testing
and no standard set of scenarios to be considered, the danger is that one does
not really suspect a model until something dramatic happens. To borrow a
metaphor from a well-known movie, the threat of a North Atlantic iceberg
was just a theory on 14 April 1912 – until the Titanic hit one. This is why the
process should also depend on the qualitative judgement and experience of
the model builder.
Rule 10: Beware of exotics!
By definition, exotic derivatives are highly subject to model risk. Firstly,
exotic derivatives are not traded on liquid markets, but over the counter.
Prices are therefore not the result of the equilibrium between supply and
demand with numerous arbitrageurs waiting to capture any mispricing,
but are rather supply driven. Secondly, exotic derivatives are often sensitive
to some exotic parameters that cannot be hedged, are embedded in
the model assumptions, or are themselves linked to the difficulty of managing
the risk. For instance, yield curve options pose vega spread volatility
issues; Bermudan options create modeling problems due to their hybrid
nature between American and European exercise; and ratchet options pose difficulties
associated with the existence of a volatility smile slope. None of these
variables are directly hedgeable. And finally, models may produce similar
plain vanilla option prices (and therefore fit the market data), yet give
markedly different prices of exotic options. This is documented for instance
in Hirsa, Courtadon and Madan (2002).
Rule 11: Beware of correlations!
Correlations are found almost everywhere in finance, from portfolio construction
to option pricing and hedging. As soon as there is more than
one random parameter to be considered, correlations have a role to play.
Unfortunately, correlations are among the most unstable parameters in real
life, particularly during periods of heightened volatility. Risk managers
often consider the possible effects of high return volatilities, but fail to
account for the higher correlations between asset returns that would generally
accompany the elevated volatility. One way to do so would be to employ
information from historical periods of high volatility in order to form estimates
of correlations conditional on a period of heightened volatility. These
conditional correlations could then be used to evaluate the distribution of
returns under a high volatility scenario. Put differently, the method used for
stress testing a portfolio must not exclude the empirical feature that periods
of high volatility are also likely to be periods of elevated correlation.
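As a minimal sketch of this conditional estimation on synthetic data (a crude two-regime simulation, with the absolute return of the first asset as the volatility proxy):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
stressed = rng.random(n) < 0.2                 # 20% of days are high-volatility days

def draw(rho, sigma, size):
    """Draw correlated bivariate normal returns."""
    z = rng.standard_normal((size, 2))
    z[:, 1] = rho * z[:, 0] + np.sqrt(1 - rho**2) * z[:, 1]
    return sigma * z

returns = np.empty((n, 2))
returns[~stressed] = draw(0.3, 0.01, (~stressed).sum())  # calm: 1% vol, corr 0.3
returns[stressed] = draw(0.8, 0.03, stressed.sum())      # stress: 3% vol, corr 0.8

# Estimate correlation conditional on the high-volatility regime
vol_proxy = np.abs(returns[:, 0])
high = vol_proxy > np.quantile(vol_proxy, 0.8)

print(f"pooled correlation:   {np.corrcoef(returns.T)[0, 1]:+.2f}")
print(f"high-vol correlation: {np.corrcoef(returns[high].T)[0, 1]:+.2f}")
```

The conditional estimate recovers a correlation much closer to the stressed regime's, and it is this figure, not the pooled one, that belongs in a high-volatility scenario.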