Wednesday, 30 January 2013

Model Validation - I wouldn't have started from here if I were you...

Based on conversations with clients and other consultants, it seems that model validation is on everyone's mind, at least to the extent that anyone is still thinking about Solvency 2 and IMAP at all. This prompted me to jot down a few bullet points:


  • The timing and format of many Solvency 2 model validation exercises is recreating a flaw common to consulting, software development and other projects: one does the work, then tries to give some assurance that the work is correct. The basic assumption seems to be that the model is free from error and that one just needs to present some evidence to that effect. This runs completely contrary to Popper's philosophy of science, under which a given statement is viewed with suspicion and rigorous attempts are made to prove it false. If it survives these attempts the statement is put on philosophical probation, pending the possibility that a future test may show it to be false.
  • On the other hand, validation should be a collaborative process. This isn't a chance to show how smart you are by picking holes in someone else's work; it's a chance to improve the implemented model by testing, redeveloping where necessary, and testing again (more on this below).
  • The above two points result in a political and governance balancing act. Do you give the validation exercise to a different team from the implementation group, risking corporate harmony, or do you keep it in the same team, with the risk that validation will involve justification rather than testing? Governance "best practice" (e.g. the FSA-recommended three lines of defence) would suggest the former. The best way forward might then be to hire or appoint collaborative people. Apply a "no jerks" policy.
  • Once the exercise gets rolling it is almost certain that the validation project will require outputs from the model that weren't part of the original requirements, unless validation was planned in full from the start (it never is, at least not in consulting; see below for software development). For example, many moons ago we were involved in a testing exercise for an exotic option pricing model. The output from the model was simply the option price. But in order to test that the price was correct we wanted to see more outputs, for example the discounted value of the forward, the mean drift, and so on. So some level of redevelopment / code refactoring takes place, generally just to output values which are calculated anyway but not stored anywhere. This is where spreadsheet replacements like Mo.NET and Prophet are excellent: since they can and do store each calculation "cell", the details of the calculation are all available by default. A short sketch of what this looks like in code follows the list.
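To make the point concrete, here is a minimal Python sketch of a pricer that returns its intermediate workings alongside the answer. The names and the model are entirely made up and deliberately trivial; the shape of the output is the point, not the pricing.

    import math
    from dataclasses import dataclass

    @dataclass
    class PriceBreakdown:
        """Intermediate quantities exposed purely so a validator can test them."""
        discounted_forward: float
        drift: float
        price: float

    def price_forward_contract(spot, strike, rate, t):
        """Toy forward-contract pricer that returns its working, not just the answer.

        Each quantity the validator might ask about is stored and returned,
        rather than having to be reverse-engineered from the final price.
        """
        drift = math.exp(rate * t)               # risk-neutral growth factor
        forward = spot * drift                   # forward price of the underlying
        discount = math.exp(-rate * t)           # discount factor back to today
        discounted_forward = discount * forward  # should equal spot: a free sanity check
        price = discount * (forward - strike)    # value of the forward contract
        return PriceBreakdown(discounted_forward, drift, price)

A validation test can then assert on each field of the breakdown independently (for instance, that the discounted forward equals the spot) instead of trying to infer from a single wrong price where the error crept in.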

At PSY most of what we do is model implementation, and we try to use good practices from the field of software development and encourage our clients to do the same. Test-driven development is pertinent here, although unfortunately it's a bit late to be adopting the method at this stage. Basically, before any code is written someone (possibly the developer, possibly someone else) decides on a range of tests which the code has to pass in order to be accepted. Code is then developed with the aim of passing the tests, the desired calculation result being almost a side-effect of the process (a small sketch below gives the flavour). In retrospect many projects would have been better run had the validation criteria been specified up front, before any model development was done.
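As a flavour of what "tests first" might have looked like on the option pricing example above, here is a hedged sketch using pytest. The module and function names are hypothetical placeholders, not any particular client's model, and the tests are written before black_scholes_call exists.

    import math
    import pytest

    # Written before the pricer exists; "pricing" and "black_scholes_call" are
    # placeholder names for the model under development.
    from pricing import black_scholes_call

    def test_zero_volatility_reduces_to_discounted_intrinsic():
        # With zero volatility the call must equal the discounted forward payoff.
        spot, strike, rate, vol, t = 100.0, 90.0, 0.03, 0.0, 1.0
        expected = math.exp(-rate * t) * (spot * math.exp(rate * t) - strike)
        assert black_scholes_call(spot, strike, rate, vol, t) == pytest.approx(expected)

    def test_call_is_never_worth_more_than_the_underlying():
        # A crude no-arbitrage bound that any implementation has to respect.
        assert black_scholes_call(100.0, 90.0, 0.03, 0.2, 1.0) <= 100.0

The developer's job is then to make these (and a longer list of similar checks) pass, so the acceptance criteria and the validation evidence end up being the same artefact.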

Something for the next big project perhaps.
