- The “measurement is good” fallacy
- The “measurement is control” fallacy
- Flaky mathematics and the data problem
- Blinded by science and confirmation bias
IMHO operational risk is a poor candidate for quantitative risk modeling, and applying the techniques of the field is a waste of resources and will lead to a false sense of security. I’ll explain my objections in a bit more detail in the next paragraphs, and conclude by answering the questions: does it really matter, and, why are you such a grumpy guy?
Measurement is Good
The “measurement is good” fallacy is the notion that all things can and should be measured, that all measurements are useful, and that measurement is important for its own sake.
This fallacy is rife in fields on the borderline of proper science, psychology being a prime example. Consider IQ testing, personality testing, etc. We start with a vague concept (“intelligence”), “decide” (there is no other word) how we are going to measure it (the IQ test), and apply the measurement arbitrarily (test every child in the fifth year of school). Having obtained the measurements, we can safely file them away, never to be used again. The entire exercise is utterly pointless, but pretty harmless apart from the wasted time and effort. Unless someone actually tries to use the results for something, such as assigning children to certain classes, in which case it can be actively harmful, since there is very little evidence that IQ test scores predict anything other than future IQ test scores.
With operational risk we again have a vaguely defined concept, and an arbitrary method of measurement (more below), giving a precise but meaningless number, with which we do what exactly? Which brings me to my next objection:
Measurement is Control
Attaching a dollar value to a particular risk does not mean that we control that risk, unless there is a broader risk management framework in place with the appropriate feedback loops. The market risk function of a bank will run the risk metrics of a trading desk overnight and send a report to the head of trading. If the report says that the desk is running too much risk, the head of trading will order them to cut some of their positions. So risk is controlled (although that doesn’t seem to stop trading desks blowing up regularly!).
On the other hand, if your OpRisk model tells you that your OpRisk capital requirement has doubled in the last quarter, what can you do? Cut your exposure? To what, exactly? There is no feedback loop between the risk measure and exposure. Instead you have the illusion of control given by the preciseness of the numbers presented.
Bad Maths, Worse Data
A lot of financial maths is borrowed from other fields. Physics mostly, for obvious reasons, but biology, computer science, statistical quality management and others also get a look in. If you are an indifferent mathematician, or just one with too much on your plate, it’s very tempting, when faced with a new problem, to look for a superficially similar problem in another field, copy a model, and throw some data at it.
For example, we can count the number of operational risk losses we observe over a period, divide by, er, well, we’re one company, so one. Then let’s look at the loss amounts; they seem to be pretty small, not sure what the regulators are so worked up about. Oh, I guess that type of loss is rare, so we haven’t seen one. Oh, there’s industry data, you say? Ok, so how many events per year? And how many companies? What do you mean, you don’t know? Well, maybe we can just make an assumption; let’s have the loss amounts. I see, some of these are huge, that would put us out of business three times over! I don’t remember that, it must have been in the news. Unless it was one of the majors, it would be a rounding error for them. So who was it? Oh, we don’t know that either? Er, ok, well, let’s make an assumption… Hey, maybe we should fit a GEV model instead?
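To make the punchline concrete, here is a minimal sketch of what that last step looks like in practice, using entirely hypothetical loss amounts and `scipy.stats.genextreme`. The point is not the fit itself but its fragility: drop the one large loss from the sample and the implied 99.5% “capital requirement” changes dramatically.

```python
# Illustrative only: fitting a GEV distribution to a handful of
# hypothetical loss amounts, as the dialogue above parodies.
import numpy as np
from scipy.stats import genextreme

# Hypothetical annual loss amounts (in $m) -- far too few points
# to pin down a tail, which is exactly the problem.
losses = np.array([0.4, 1.1, 0.7, 2.3, 0.9, 15.0, 0.6])

# Maximum-likelihood fit of the GEV shape, location and scale.
shape, loc, scale = genextreme.fit(losses)
# 99.5th percentile "capital requirement" implied by the fit.
var_995 = genextreme.ppf(0.995, shape, loc=loc, scale=scale)

# Refit without the single large loss: one data point in or out
# moves the tail estimate, and hence "the number", substantially.
trimmed = losses[losses < 10]
shape2, loc2, scale2 = genextreme.fit(trimmed)
var_995_trimmed = genextreme.ppf(0.995, shape2, loc=loc2, scale=scale2)

print(var_995, var_995_trimmed)
```

With samples this small (and real internal loss histories are often not much bigger), the fitted tail is driven almost entirely by modeling assumptions rather than data.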
Blinded by Science & Confirmation Bias
Management will have a ballpark figure in mind for the Op Risk capital requirement. Probably something less than the Standard Formula requirement (they run a tight ship, right?). They also don’t have the time to dig into the mathematical details of every model their staff present them with. That’s what they have staff for, to delegate to. So if a smart young kid comes to them with some complicated-looking maths and a number not too far from the one they had in mind, they’ll probably nod, approve the number, and carry on with any number of more pressing issues.
Does It Really Matter?
I think that it does, but maybe not for the reasons that misstating your market risk (say) matters. The issue is that I don’t believe there is any “correct” value for a firm’s Op Risk capital requirement, and acting as if there is encourages the misuse or misunderstanding of quantitative risk modeling.
None of this is to say that operational risk management is pointless or unnecessary. One should definitely try to minimize the chance and impact of internal and external fraud, mistakes in transaction processing, rogue trading, etc. But you do this by hiring appropriate people, designing good processes with Murphy’s law in mind, and periodically re-evaluating your people, systems and procedures. If you have to assign a number (and maybe assigning some extra capital would be a good idea for some events, even if it just means more money for the external fraudsters to steal), why not just take 1% of reserves plus 5% of premiums? It’s simple, easy to understand, no less arbitrary than any other model we’ve seen, and above all, it’s honest.
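The proposed rule fits in one line of code, which is rather the point. The percentages and the example figures below are the ones from the text and purely illustrative:

```python
def op_risk_capital(reserves: float, premiums: float) -> float:
    """The deliberately simple rule from the text:
    1% of reserves plus 5% of premiums."""
    return 0.01 * reserves + 0.05 * premiums

# Hypothetical firm: $200m reserves, $50m premiums -> roughly $4.5m.
print(op_risk_capital(200_000_000, 50_000_000))
```

Anyone in the firm can check this number on the back of an envelope, which is exactly the property the fitted models above lack.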