The Infinite Monkey Cage podcast recently covered risk, in the general “risk of dying / epidemic / explosion” sense, as opposed to the financial / probabilistic sense that most readers work with. As usual with TIMC the discussion was pretty broad. The force of mortality got a look in, there was some chat about career risk, and they spent a while talking about the concept of a “micromort”. A micromort is a 1 in 1,000,000 chance of dying. For example, travelling 6000 miles by train is one micromort, whereas travelling the same distance by car is 26 micromorts. It's a useful way to communicate relative risks.
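The arithmetic behind a micromort is simple enough to sketch. Assuming the figures quoted above (6000 miles by train ≈ 1 micromort, the same distance by car ≈ 26), a tiny helper makes the relative risk explicit for any distance — the function name and the scaling are mine, purely for illustration:

```python
# Sketch of the micromort arithmetic, using the figures from the post:
# 6000 miles by train ~ 1 micromort, 6000 miles by car ~ 26 micromorts.

def micromorts(miles, micromorts_per_trip, trip_miles=6000):
    """Scale a per-trip micromort figure linearly to an arbitrary distance."""
    return miles * micromorts_per_trip / trip_miles

# Relative risk of car vs. train over any common distance
car = micromorts(1000, 26)
train = micromorts(1000, 1)
print(f"Car: {car:.2f} micromorts, train: {train:.2f}, ratio: {car / train:.0f}x")
```

The linear scaling is of course an approximation — per-mile risk is not really constant — but it is how micromort comparisons are usually quoted.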
In finance of course we have a similar way of communicating risk - we use a monetary amount of VaR. But is USD 100,000,000 VaR of credit risk really equivalent to USD 100,000,000 VaR of equity risk? That, of course, depends on how you estimate your 1 in 200 year (or other relevant) percentile for each risk factor.
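To see how much the percentile assumption matters, here is a minimal sketch with made-up numbers: the same exposure and the same volatility produce quite different 99.5% VaR figures depending only on the assumed tail shape (thin-tailed normal vs. a fat-tailed Student-t, rescaled to the same variance). Nothing here reflects any real calibration:

```python
# Illustrative only: identical volatility, different assumed tails,
# materially different 1-in-200 (99.5%) VaR. All numbers are made up.
import random
import statistics

random.seed(0)
position = 100_000_000  # hypothetical USD exposure
vol = 0.20              # assumed annual return volatility

# Parametric normal: 99.5% quantile of the standard normal
z = statistics.NormalDist().inv_cdf(0.995)
var_normal = position * vol * z

# Fat-tailed alternative: Student-t with 3 degrees of freedom,
# rescaled to unit variance, quantile estimated by simulation
df = 3

def t_draw():
    # t variate = N(0,1) / sqrt(chi-squared_df / df)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(df))
    return random.gauss(0, 1) / (chi2 / df) ** 0.5

scale = ((df - 2) / df) ** 0.5   # rescale the t to unit variance
draws = sorted(scale * t_draw() for _ in range(200_000))
t_quantile = draws[int(0.995 * len(draws))]
var_t = position * vol * t_quantile

print(f"Normal 99.5% VaR:    {var_normal:,.0f}")
print(f"Student-t 99.5% VaR: {var_t:,.0f}")
```

Same book, same volatility estimate — yet the fat-tailed assumption produces a VaR roughly a third larger. Which of the two is "right" is exactly the calibration question the rest of this post is about.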
Also mentioned in the podcast was the issue of how people's perception of risk is often radically different to rational, statistical estimates. This topic was very well covered by Dan Gardner in “Risk: The Science and Politics of Fear” a couple of years ago - I highly recommend giving it a read.
Now the problem with this divergence between actual and perceived risk is that politicians are affected as much as (or more than) the average Joe. So decisions on, for example, the use of nuclear vs. coal generated power are based not on an assessment of the actual relative risks of the options, but on the perception of those risks. This can lead to irrational, or just flat-out wrong, decisions being made.
This leads us back to financial risk management - if you are making a decision on the mix of risks your firm is assuming, are you basing that on a rational, statistical estimate of the relative risks, or on a biased perception of those risks? The immediate answer is that it's by definition a statistical estimate, the 99.5% VaR. Of course.
The more honest answer is that the 99.5th percentile reflects the biases and beliefs of those who set the parameters of the models used to estimate the VaRs. The fact is that there is nowhere near enough data to estimate 1 in 200 year events in any financial risk factor. Because of the paucity of data, even unbiased, rational observers can have material disagreements on the estimates of extreme percentiles. Check out the Extreme Events Working Party paper for a lot more on this.
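The data-scarcity point can be made concrete with a quick bootstrap sketch. Suppose we had 50 annual loss observations (simulated here from a lognormal, purely for illustration) and tried to estimate the 99.5th percentile from them — resampling the history shows just how wide the honest range of estimates is:

```python
# Sketch: why rational observers can disagree about 1-in-200 events.
# With ~50 annual observations, bootstrap estimates of the 99.5th
# percentile scatter widely. Data are simulated, not real.
import random

random.seed(1)
# Pretend history: 50 annual "loss" observations from a lognormal process
history = [random.lognormvariate(0, 1) for _ in range(50)]

def pctl_995(sample):
    # Crude empirical 99.5th percentile: for n=50 this is simply the
    # sample maximum -- which is exactly the problem, the tail is
    # barely observed at all.
    return sorted(sample)[int(0.995 * len(sample))]

# Bootstrap: resample the history and re-estimate the percentile
estimates = []
for _ in range(2000):
    resample = [random.choice(history) for _ in history]
    estimates.append(pctl_995(resample))

ranked = sorted(estimates)
lo, hi = ranked[100], ranked[1900]  # central 90% of the estimates
print(f"Point estimate: {pctl_995(history):.2f}")
print(f"90% bootstrap range: {lo:.2f} to {hi:.2f}")
```

The bootstrap range is typically a multiple of itself end to end — and that is before anyone starts arguing about which distribution the data came from in the first place.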
Beyond honourable differences as described in the paper above, we enter into the murky world of model calibration as it is done in reality. A world in which, if the number coming out of the model is too big, then one changes the numbers going into the model until the result comes out where it “should be”. My particular favourite in this area is credit risk.
Credit is a market with an extensive recorded history. I recommend reading “Corporate Bond Default Risk: A 150-Year Perspective” for a proper long-term view. The most extreme event in the history of this market was the railroad crisis of the 1870's. Over two years during this period one third of the bonds in the US market defaulted. That is actual, permanent loss of capital, not just a temporary spread widening. On the other hand I have heard managers arguing that the spread movement on a particular portfolio of (say) 400 bps in 2008 was anomalous, and a “fair” estimate of a 1 in 200 year event should be a 200 bps widening. No actual defaults of course.
Now the issue here is not that the number “should be” 400 bps, or a certain number of defaults. The issue is that if you do not have a consistent and accurate estimate of relative risks, then you will end up making incorrect, irrational or sub-optimal decisions. By all means, calibrate to a lower percentile for political, business or expedient reasons, but make sure you are doing it consistently across risk factors.
This train of thought then made me wonder - to what extent do the numbers generated by the risk / reporting function actually inform decisions in the average financial firm? From experience I know that on the trading floor managers typically want to expand whichever business made the most money last year - regardless of the risks that business was running. In life companies the risk numbers (ICA / Solvency 2) will be generated by the actuarial reporting function, which tends to view the firm's business mix as a static item, and the risk reporting process as a passive act of measurement. This is not to say that senior managers necessarily have the same bias, but it does seem to be a cultural prejudice of life actuaries. The fact that life contracts are very long term is obviously pertinent.
On the other hand I was recently involved in some interesting conversations about building a pricing system for a reinsurer. Best practice in the reinsurance business links pricing and risk measurement / management so closely as to be the same thing. A potential contract is assessed in terms of the amount of risk it adds to the book, compared to the amount of revenue it brings in. I recommend a paper by Mango which proposes measuring the risk of potential contracts by the amount they covary with the existing book, similar to Cochrane's general framework set out in Asset Pricing.
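As I read Mango's idea, the essence is that a candidate contract should be charged for the risk it *adds*, which depends on how its losses covary with the existing book rather than on its standalone volatility. A minimal sketch, with made-up scenario losses and a hypothetical loading factor, to show the mechanics:

```python
# Hedged sketch of covariance-based pricing: a candidate contract is
# charged according to how its losses covary with the existing book,
# not its standalone volatility. Scenarios and the loading are made up.

def cov(x, y):
    """Sample covariance of two equal-length loss series."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)

def risk_load(book_losses, contract_losses, lam=0.01):
    """Risk charge proportional to covariance with the existing book."""
    return lam * cov(book_losses, contract_losses)

# Simulated losses across the same five scenarios (hypothetical numbers)
book = [10.0, 50.0, 5.0, 120.0, 30.0]     # existing book, per scenario
contract_a = [1.0, 6.0, 0.5, 15.0, 3.0]   # loses when the book loses
contract_b = [6.0, 1.0, 15.0, 0.5, 3.0]   # same losses, diversifying order

for name, c in [("A (correlated)", contract_a), ("B (diversifying)", contract_b)]:
    print(f"Contract {name}: risk load {risk_load(book, c):+.3f}")
```

The two contracts have identical standalone loss distributions, but A concentrates the book while B diversifies it — so A attracts a positive risk load and B a negative one. That is the covariance logic of Cochrane's asset pricing framework applied to an underwriting decision.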
The key concept here is the feedback loop between the risk measurement function, the pricing function, and the overall management of the business. Indeed, the Solvency 2 Use Test in action!
In other news, RStudio v0.98 was released a couple of weeks ago. Readers will recall I'm a big fan, although I haven't actually had a chance to play with any of the new features yet.