State Space Models

All state space models are written and estimated in the R programming language. The models are available here, with instructions and R procedures for manipulating the models here.

Monday, December 2, 2013

Signal and Noise: The Rating Agencies



In The Signal and the Noise, Nate Silver is convinced "...that the best way to view the financial crisis is as a failure of judgment--a catastrophic failure of prediction" (p. 20). The big three Nationally Recognized Statistical Rating Organizations (NRSROs), Standard & Poor's, Moody's and the Fitch Group, gave their triple-A rating to mortgage-backed securities that turned out to be junk. The triple-A ratings were taken as a "forecast" that the securities had very low risk of default. The ratings (forecasts) were based solely on models, since CDOs (collateralized debt obligations) had little track record and traded in limited markets.

This is not the first time that reliance on theoretical models produced catastrophic results. In the late 1990s, Long-Term Capital Management (LTCM) collapsed and had to be rescued in a bailout organized by the US Federal Reserve. LTCM was a hedge fund that traded securities based on the Black-Scholes model.


In both cases, theoretical models made predictions that were wrong. In both cases, the culprit was that the models were typical, myopic academic models that did not take systemic effects into account. The models used probability distributions that assumed securities were independent. When the system collapsed, it took the narrowly conceived models and the associated securities with it--well, maybe just the securities; the models are still being used.
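The danger of the independence assumption can be shown with a toy Monte Carlo sketch (my own construction for illustration; the function, its parameters, and the simple one-factor default model are assumptions, not the rating agencies' actual methodology). Each loan defaults when a mix of a shared systemic factor and loan-specific noise falls below a cutoff; setting the correlation to zero recovers the independent case the models assumed.

```python
import random
from statistics import NormalDist

random.seed(42)

def portfolio_tail_prob(n_loans=100, p_default=0.05, rho=0.0,
                        threshold=20, n_sims=10000):
    """Estimate P(more than `threshold` of `n_loans` default).

    Toy one-factor model: loan i defaults when
        sqrt(rho)*Z + sqrt(1-rho)*e_i < cutoff,
    where Z is a common (systemic) shock, e_i is loan-specific noise,
    and the cutoff gives each loan a marginal default probability of
    p_default. rho = 0 means the loans are independent.
    """
    cutoff = NormalDist().inv_cdf(p_default)
    tail = 0
    for _ in range(n_sims):
        z = random.gauss(0, 1)  # one systemic shock shared by all loans
        defaults = sum(
            1 for _ in range(n_loans)
            if (rho ** 0.5) * z + ((1 - rho) ** 0.5) * random.gauss(0, 1) < cutoff
        )
        if defaults > threshold:
            tail += 1
    return tail / n_sims

# Independent loans: more than 20 of 100 defaulting is essentially impossible.
print(portfolio_tail_prob(rho=0.0))
# Moderately correlated loans: the same "impossible" event happens a few
# percent of the time -- exactly the tail risk a triple-A rating denies.
print(portfolio_tail_prob(rho=0.3))
```

Each loan still has the same 5% marginal default probability in both runs; only the dependence between loans changes, and that alone moves the extreme outcome from negligible to material.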



So we have two extremes here: (1) markets are perfectly efficient (the Efficient Market Hypothesis) and produce the most accurate prices and price forecasts (in futures markets?), or (2) precise mathematical equations based on Econophysics are perfect and produce the most accurate security prices and forecasts. Each extreme contains a contradiction: perfectly efficient markets follow random walks, which can't be predicted, while precise mathematical pricing models using normal distributions only hold when there are a large number of observations--precisely the conditions under which a market should produce a better result (for more on this, see my post on markets, models and forecasting here).

Chapter 1 of The Signal and the Noise not only discusses the NRSROs but brings up a long list of other issues raised by the Financial Crisis of 2007-2008:
  • The NRSROs seem not to have recognized that the period before the Subprime Mortgage Crisis was a housing bubble, even though they had studied the possible effects of a bubble. Are there tools they could have used to identify developing housing bubbles?
  • Risk is something that can be assigned a specific probability, like the probability of drawing a particular card from a 52-card deck (1/52). Uncertainty is much harder to measure but may be more important for real-world outcomes.
  • Financial leverage seems an important indicator of problems in the financial system but doesn't seem to be monitored or controlled in any meaningful way.
  • Policy makers (Larry Summers, in this case) seem to recognize that there are feedback loops in the economy: supply-and-demand is a negative feedback loop controlling prices, while fear-and-greed is a positive feedback loop creating bubbles. Unfortunately, this is as far as systems thinking seems to go, and it raises the question of whether these positive and negative loops actually control the economy in the way economists seem to think.
  • The U.S. Congress passed a fiscal stimulus in 2009. The White House promised (and Keynesian theory predicted) that the stimulus would keep unemployment below 8%. It didn't. Unemployment reached 10% in late 2009 and didn't approach 8% until the end of 2011. Was the stimulus a failure, or did it prevent unemployment from getting even worse (I've blogged about the stimulus here)?
  • Nate Silver suggests that overconfidence in forecasts for housing prices, CDO ratings, financial-system performance and unemployment might be the result not of bad models but of sampling problems. Forecasters typically use time-series data from periods of "normal" economic growth. Events like the Great Recession were simply out of sample.
To demonstrate the last possibility, Nate Silver offers the following graphic (page 46):


A false sense of confidence in forecasts comes from predictions that seem precise but are not accurate. It's as if you have a weapon that produces a tight pattern of shots that are always off target (the third target from the left above; the other three display some other possibilities). The graphic is really a demonstration of the statistical concepts of reliability (a tight pattern) and validity (on target). These are important concepts, but I think the forecasting problem is far worse and involves the models themselves and how they are used. In the graphic, we know we're at a rifle range shooting at targets. If precise mathematical models don't accurately describe the real underlying system, it's as if you had a great understanding of your weapon but no idea of where you were or what you were shooting at.
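The reliability/validity distinction can also be put in simulation form (an illustrative sketch of my own; the bias and spread values are arbitrary): a tight-but-biased pattern of shots can have a larger average miss than a looser pattern centered on the target, which is exactly why precision alone is a poor guide to forecast quality.

```python
import random

random.seed(7)

def shots(bias, spread, n=1000):
    """Simulate n shots at a target at the origin.

    bias   -- systematic horizontal offset of the aim (a validity problem)
    spread -- random scatter of individual shots (a reliability problem)
    """
    return [(bias + random.gauss(0, spread),
             random.gauss(0, spread)) for _ in range(n)]

def mean_sq_distance(pts):
    """Average squared distance from the bullseye."""
    return sum(x * x + y * y for x, y in pts) / len(pts)

# Precise but not accurate: tight pattern, wrong place.
tight_biased = shots(bias=3.0, spread=0.5)
# Accurate but not precise: loose pattern, right place.
loose_centered = shots(bias=0.0, spread=1.5)

print(mean_sq_distance(tight_biased))   # ~ bias^2 + 2*spread^2, about 9.5
print(mean_sq_distance(loose_centered)) # ~ 2*spread^2, about 4.5
```

The tight-biased shooter would report a very small scatter (high "confidence") while missing by more on average than the sloppier but unbiased shooter.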

It will be interesting to see whether Nate Silver reconciles all of this in future chapters of The Signal and the Noise, which will be the topic of future posts. It will also be interesting to see whether anyone from the NRSROs gets convicted of fraud.