State Space Models

All state space models are written and estimated in the R programming language. The models are available here, with instructions and R procedures for manipulating the models here.

Friday, December 1, 2017

Can every country have the US standard of living?


The field of Development Economics is based on the idea of Convergence: Because underdeveloped economies have faster growth rates than developed economies, all economies will eventually converge in terms of per capita income (taken as a proxy for the standard of living). Convergence holds out hope to developing economies: adopt Western economic models, open your economies to global trade and eventually your citizens will enjoy the same high standard of living as the US. 

Unfortunately, the Convergence model is based on three faulty assumptions: (1) every economy has basically the same underlying economic model, differing only in parameter values; (2) we only need to consider economic variables, e.g., Gross Domestic Product (GDP); and (3) there is no such thing as a world-system, only isolated countries that interact independently through global trade.


The first two assumptions can be summarized with the Neoclassical Economic Growth Model (the Solow-Swan Model with a Cobb-Douglas production function).
The causal directed graph (path diagram) for the model is displayed above. One portion of the population (N) is employed as labor (L). Labor and exogenous technological change (T) drive output (Q). Capital stock (K) and Energy Consumption (E) are endogenous variables, that is, produced through economic activity. The final output is Consumption (C).  If the model is estimated from data, there can also be error terms and shocks (V2 and V3). Sometimes land (NR, natural resources) is included as an input, but often resources are ignored.
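For readers who want to see the mechanics, below is a minimal sketch of the Solow-Swan dynamics in R with a Cobb-Douglas production function. The parameter values are purely illustrative (they are not estimates from any of the models discussed on this blog); the point is only that two economies with the same parameters but different starting capital converge to the same steady state.

# Minimal Solow-Swan sketch with a Cobb-Douglas production function.
# All parameter values are illustrative, not estimates from any model on this blog.
solow <- function(periods = 100, k0 = 1,
                  s = 0.25,      # savings (investment) rate
                  n = 0.02,      # population / labor growth rate
                  g = 0.015,     # rate of technological change
                  delta = 0.05,  # depreciation rate
                  alpha = 0.33) {
  k <- numeric(periods)          # capital per effective worker
  k[1] <- k0
  for (t in 2:periods) {
    y <- k[t - 1]^alpha                                  # output per effective worker
    k[t] <- k[t - 1] + s * y - (n + g + delta) * k[t - 1]
  }
  k
}

# Two economies with the same parameters but different starting capital
# converge to the same steady state -- the Convergence result.
poor <- solow(k0 = 0.5)
rich <- solow(k0 = 5)
matplot(cbind(poor, rich), type = "l", lty = 1:2,
        xlab = "period", ylab = "capital per effective worker")

Varying the savings rate, population growth or depreciation in the sketch shifts the steady state, which is exactly what "differing only in parameter values" means in the Convergence story.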

Every country is assumed to have the same basic economic model (see, for example, the William Nordhaus DICE and RICE models), differing only in parameter values (rates of population growth, rates of technological change, labor productivity, rates of investment, etc.). If you accept the model, it is easy to reason that rapid population growth and rapid technological change will lead to higher capital investment, higher consumption (but not necessarily consumption per capita) and higher energy use. Since there is typically higher population growth in less developed economies, and since technology (knowledge) is a public good, the predicted catch-up or convergence follows directly from the model. Needless to say, not all economists agree with the model or agree that it is supported by data, but enough do that it constitutes the dominant thinking on economic growth. 

The model is myopic; it ends with consumption and energy use but does not consider the environmental impacts. The assumed counterfactual is that all countries can reach the US standard of living without environmental impacts. We can include a measure of environmental impact by adding the Ecological Footprint (EF) to the model (but see the WARNING note below). The EF measures the human demand on nature. It compares human consumption of environmental resources (demand) to biological capacity (BioCap in the directed graph above, environmental supply). Biocapacity is the biologically productive area within the country, a measure that is different from total land area because some land is unproductive (e.g., the majority of land underneath major metropolitan areas or in deserts). The ratio of consumption per capita to biocapacity per capita measures the EF relative to the carrying capacity of the physical environment. If consumption exceeds biocapacity, the level of consumption is not sustainable unless supplemented by trade or unless technological change increases biocapacity. Obviously, not all countries can exceed biocapacity and make it up through trade. Carrying capacity without trade can be exceeded in the short run but is eventually unsustainable because the environment continues to lose biocapacity (the self-loop in the directed graph), that is, loses the ability to meet the demands placed on it.
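To make the "number of Earths" arithmetic concrete, here is the calculation in R with made-up round numbers (these are not Global Footprint Network figures; footprint_per_capita and biocapacity_per_capita are simply assumed values):

# Back-of-the-envelope "number of Earths" calculation.
# The numbers below are assumed for illustration, not Global Footprint Network data.
footprint_per_capita   <- 8.0    # assumed US-level consumption, global hectares per person
biocapacity_per_capita <- 1.7    # assumed world biocapacity, global hectares per person
population <- 7.5e9

demand <- footprint_per_capita * population      # total human demand on nature
supply <- biocapacity_per_capita * population    # total biocapacity

earths <- demand / supply    # > 1 means consumption exceeds biocapacity
earths                       # roughly 4.7 "Earths" under these assumed numbers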

The EF for the world system is displayed in the first time series plot at the start of this post. Somewhere around the 1990s, the world system supposedly exceeded its carrying capacity (EF > 1.0). The world system did not collapse in the 1990s but, by this measure, we have been degrading our environmental support systems since then.

We can run some simple counterfactuals with EF data. For example, the graph above assigns US consumption levels to every individual in the world population but assumes no improvement of biocapacity. By 2020 (forecasting using the WL20 model), we would need the current biocapacity of almost five Earths to meet consumption demand.
If we were to assume that the biocapacity of the entire world system reached the current biocapacity of the US, we would top out at around 2.5 Earths by 2500. It seems unlikely that biocapacity will reach US levels throughout the world, especially in arid countries. A reasonable prediction might be somewhere between these two forecasts.

The conclusion from this exercise is that convergence among all the economies in the world-system seems unlikely. The US lifestyle is, in this sense, unsustainable. Either the US (and a few other Northern countries) will have to reduce its standard of living, be forced to reduce its standard of living (the ecological collapse after 2050 in the first forecast), or the world-system will always have dominant economies. Even if the US would gladly reduce its standard of living to some low level (it's unlikely that any economy would), what would that level be and how many people in the world system could share it without degrading environmental systems? And, what will happen when the less developed world realizes that there is no hope of sharing Western standards of living? And, what might the world look like after an ecological collapse? More importantly, since the future really cannot be known, what do less developed countries do in the short run? I'll address that question in future posts.

WARNING: The Ecological Footprint (EF) is a measure that has been widely criticized and, at one extreme, called scientifically useless. From a statistical perspective, these critiques concern construct validity: does the EF construct measure what it claims to measure? There are many arbitrary assumptions in the construction of the EF (that is, the one supplied by the Global Footprint Network and used above), and the EF has become overburdened with sustainability interpretations that make it hard to know what is "supposed" to be measured. But there are other types of validity: face validity (does the measure superficially look right?), content validity (are the right indicators being included in the measure?) and criterion validity (is the measure useful in models and is it related to other measures in a reasonable way?). My interest has been in the criterion validity of the EF. As can be seen above, it is useful in models and can be predicted (the dashed blue and green lines are the 98% bootstrap prediction intervals). Does it really mean that we might use five times our current biocapacity at some time in the future? No! If we give up the idea that this must be an absolutely correct measure, we can still ask relative questions that are interesting: How does it change over time and in different countries? How is it related to other measures of economic development? Can we construct alternative EF measures, and how do they correlate with the one provided by the Global Footprint Network? I'll present some of this analysis in future posts.

Saturday, October 28, 2017

No, Q3-2017 3% GDP growth does not support Big Tax Cuts!


The Washington Post recently published an article titled Third quarter's strong economic growth could boost GOP tax effort. The article predicts the Trump administration will make the case that if we want economic growth to keep increasing to 4% (President Trump's goal), we need tax cuts. The Financial Forecast Center (here) predicts GDP growth rates out to the end of 2017 (graphic above). The forecast graph does not even have room for 3%, let alone 4%, GDP growth this year. Who is right?

The GOP tax cut plan is based on a long chain of reasoning. Tax cuts are supposed to increase investment and consumption (I = iY - T and C = cY - T). Investment and consumption are supposed to increase National income (as do Government expenditure and the Balance of Payments, Y = I + C + G + BOP). On the other hand, tax cuts increase the deficit, D = G - T, which is supposed to crowd out investment through the interest rate effect (increased interest rates discourage borrowing). Any one of these effects could fail to materialize or provide only a short-term boost to the economy when the Deficit chickens come home to roost.
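To see why this chain of reasoning is so fragile, here is a toy, textbook Keynesian-cross version of it in R. This is not one of the state space models used on this blog, and the propensities c_mpc and i_mpi are assumed values; the sketch just shows that the same identities that deliver a boost from a tax cut also deliver a mechanical increase in the deficit.

# Toy Keynesian-cross sketch; not one of the state space models on this blog.
# All parameter values are assumed for illustration.
c_mpc <- 0.6   # marginal propensity to consume out of disposable income
i_mpi <- 0.1   # marginal propensity to invest out of disposable income
G     <- 20    # Government expenditure
BOP   <- 5     # Balance of Payments contribution

equilibrium_Y <- function(T_tax) {
  # Y = C + I + G + BOP with C = c*(Y - T) and I = i*(Y - T)
  # => Y = (G + BOP - (c + i) * T) / (1 - c - i)
  (G + BOP - (c_mpc + i_mpi) * T_tax) / (1 - c_mpc - i_mpi)
}

Y_before <- equilibrium_Y(T_tax = 10)   # national income before the tax cut
Y_after  <- equilibrium_Y(T_tax = 8)    # ... and after a tax cut of 2
deficit_change <- (G - 8) - (G - 10)    # the deficit D = G - T rises by the size of the cut
c(Y_before = Y_before, Y_after = Y_after, deficit_change = deficit_change)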


Let's ask a more systemic question: What kind of growth rate is the US economy capable of sustaining? Another way to state the question is to ask: what is the attractor path for GDP growth? The attractor path for the annualized growth of US GDP (based on quarterly data, here) is displayed above. Out to 2030, the attractor path (the dashed red line, based on the state of the US economy from the USL20 model*) is stable at around 2%. The 98% bootstrap prediction intervals (the dashed green and blue lines) suggest that numbers from 1.5% to 2.5% are probable. Growth rates above 3.5%, or even negative growth, are possible, but the economy will return over time to around 2%. This is the growth rate that the US economy can reasonably sustain given its current physical structure and economic organization. 
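For readers who wonder what an "attractor path" and a "98% bootstrap prediction interval" are operationally, here is a generic sketch in R. It is emphatically not the USL20 model; it uses a simple AR(1) stand-in with assumed persistence and long-run growth, but the two steps (simulate with shocks set to zero, then resample residuals to get intervals) are the same in spirit.

# Generic sketch of an attractor path and bootstrap prediction intervals.
# This is NOT the USL20 model; it is an AR(1) stand-in with assumed values.
set.seed(1)
phi   <- 0.7                    # assumed persistence of annualized GDP growth
mu    <- 2.0                    # assumed long-run (attractor) growth rate, percent
resid <- rnorm(200, sd = 0.8)   # stand-in for estimated model residuals

h  <- 40                        # forecast horizon, quarters
g0 <- 3.0                       # current growth rate

# Attractor path: run the model forward with all shocks set to zero.
attractor <- numeric(h)
g <- g0
for (t in 1:h) { g <- mu + phi * (g - mu); attractor[t] <- g }

# Bootstrap prediction intervals: resample residuals and re-simulate many times.
sims <- replicate(2000, {
  g <- g0; path <- numeric(h)
  for (t in 1:h) { g <- mu + phi * (g - mu) + sample(resid, 1); path[t] <- g }
  path
})
lower <- apply(sims, 1, quantile, 0.01)   # 98% interval: 1st and 99th percentiles
upper <- apply(sims, 1, quantile, 0.99)

matplot(cbind(lower, attractor, upper), type = "l", lty = 2,
        xlab = "quarters ahead", ylab = "annualized GDP growth (%)")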

The real growth of the US economy, Q = f(K,L,Tech), is based on how capital, labor and technology are combined. Changes in who has money may or may not make a difference to the physical structure and economic organization of the US economy. The excess money available from tax cuts (especially if directed at the wealthy) can be spent on luxury goods or stock market speculation, neither of which will build economic capacity through investment and employment. 

If the fractured GOP is able to legislate tax cuts, time will tell how the US economy is affected. The most likely prediction is that high growth rates, if they happen at all, would be very temporary.

_____
* The best model (determined by the AIC criterion) is ag(GDPQ)(t) = F·ag(GDPQ)(t-1) + G·S(t-1) + Q·e(t), where ag() is annualized growth, F, G and Q are coefficient matrices, S is the state vector from the USL20 model and e(t) is random error. The current upsurge in US GDPQ is driven by e(t).

Saturday, September 3, 2016

Coal Will Make Reducing Energy Intensity Difficult


The New York Times ran an article on Aug 30 titled The Challenge of Cutting Coal Dependence. The article focuses on the problems Germany, "a leader in the push against climate change", is having in reducing its dependence on coal. Coal is Germany's main and dirtiest source of electricity generation. And the same is true for many other countries, for example the US, China, and India. The problem is how to replace all the jobs that would be lost. No one has a practical answer. 


The graph above of coal consumption over time (from the NYT article) shows that Germany and the US have basically stabilized their coal consumption. The rest of the world, on the other hand, has not. My main question, when I read the article, was what we can expect in the future.

The graph at the beginning of this post is a forecast of coal production based on the WL20 model (the forecast assumes no policy intervention in the future). It shows that, with relatively narrow bootstrap 98% prediction intervals, coal production will not stabilize until well after the year 2100.
Compare that forecast to the one, also drawn from the WL20 model, of oil production. The model predicts that oil production has peaked, again with a high degree of confidence, and will decline for the foreseeable future (with or without policy intervention).

While we have been optimistic about the role reduced oil production will have in future carbon emissions, we have missed the major roadblock to reducing carbon intensity. Coal is a plentiful and easily obtained resource. Mining coal creates jobs. The NY Times article concludes with a quote from Craig Morris, an environmental blogger: "Several degrees of warming by 2100 may sound scary, but not nearly as much as long-term joblessness just a few years from now."

Friday, June 17, 2016

Should Britain Exit the EU (Brexit)?



The PBS News Hour featured a segment tonight (above and here) asking whether "...the economic cost of Brexit is too great?" Brexit stands for BRitain EXiting the European Union. The United Kingdom European Union Membership Referendum will be held on June 23, 2016 to decide the issue.

In the video above, the News Hour presented an interesting debate held at the Oxford Union where heavyweight politicians made the case for and against Brexit. The arguments are interesting and well stated but seem to be based on the idea that "since no one can know what will really happen," the issue must be resolved by debate. In the end, most of the students attending voted to stay in the EU.

The reason "no one can know the outcome" is that Brexit involves a counterfactual. No country has ever exited the EU and there is no historical experience that can be applied to decide what might happen if Britain did. As readers who follow Fact, Fiction and Forecast know, historical data can be applied to the question if you have models of both the British and the EU economies and if those models can be simulated under different conditions. The challenge is to choose those "different conditions" in a convincing manner.

Without going into a great deal of detail, two state models are available: UK20 and EU20. For the late 20th and early 21st century, the EU20 model is primarily being driven by the world system (outputs from the WL20 model) while the UK20 model is primarily being driven by outputs from the EU20 model. One might easily jump to the conclusion that since the UK20 model was primarily driven by the EU20 model, the logic of staying in the EU is obvious. However, the real counterfactual question is what will happen in the future.

To pose this question, I simulated the UK20 model being driven by the EU20 model and then simulated a version of the UK model with no inputs (the Go-it-Alone scenario). If the EU has been holding back the UK, this comparison would demonstrate the drag being placed on the UK by EU membership. Go-it-Alone is not the only possible strategy for the UK (I'll talk about that below) but these two models are actually the best models for the UK economic system when compared against a number of other competitors.
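Operationally, the two scenarios are just the same model run twice, once with the external input series and once with the inputs switched off. Here is a generic sketch in R (the matrices and the input series are made up; this is not the UK20 or EU20 model):

# Two counterfactuals from the same model: driven by an external input series
# versus run with no inputs ("Go-it-Alone"). These are NOT the UK20/EU20 models;
# the matrices and the input series are made up for illustration.
set.seed(2)
A <- matrix(c(0.9, 0.1,
              0.0, 0.8), nrow = 2, byrow = TRUE)   # state transition matrix
B <- matrix(c(0.3, 0.1), nrow = 2)                 # how the external input enters
C <- matrix(c(1, 0), nrow = 1)                     # observation (GDP proxy) equation

u <- cumsum(rnorm(60, mean = 0.1))                 # stand-in for the EU20 state input

simulate_path <- function(with_input = TRUE) {
  x <- c(1, 1)
  y <- numeric(60)
  for (t in 1:60) {
    input_t <- if (with_input) B %*% u[t] else 0
    x <- A %*% x + input_t
    y[t] <- C %*% x
  }
  y
}

driven <- simulate_path(TRUE)    # model driven by the external input
alone  <- simulate_path(FALSE)   # the same model with no inputs
matplot(cbind(driven, alone), type = "l", lty = 1:2,
        xlab = "period", ylab = "simulated output (GDP proxy)")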

The graphic above is the attractor path simulation of UK GDP using the state of the EU20 model as input. The red dashed line is the attractor path. The green and blue dashed lines are the 98% bootstrap prediction intervals. With a high degree of confidence, the model predicts that the British economy will peak sometime in the 2030s.

The next graphic above is a free simulation of the UK20 model starting in 1960 with no inputs (the Go-it-Alone scenario). In this alternate future, the UK economic system is predicted to peak in 2020 and decline rapidly after that. There is some probability that the economy might peak somewhat later, in 2035 (the dashed green line), but there is a higher probability of significant decline after 2020. Also notice that the prediction intervals are wider, meaning that this is a less precise prediction.

GDP isn't the only criterion measure we might look at (What about labor force issues? If you are interested, let me know). And Go-it-Alone is only one of many strategies Britain might choose after leaving the EU. Britain could choose to align itself either with the US or with the entire World System, bypassing the EU. I have also estimated these alternative models and they are inferior to the ones presented above, meaning that the prediction intervals would be even wider.

We will all have to wait for the referendum results on June 23, 2016 and then wait again for 2020, 2030, 2040 and 2050 to see what the future may hold. Many of the "heavyweight politicians" who argued the case at the Oxford Union, and I, will no longer be alive to see the future that unfolds, but many of the students will. Their intuitions, expressed in their votes, seem to favor staying in the EU (as does the counterfactual simulation, the fiction and the forecasts presented above).

EXTRA CREDIT

Assume that Britain stays in the EU.  There are people who now favor Brexit who will argue, at the first signs of slowing in the UK economy, that the reason is having chosen to stay in the EU. What will you say to them?

Monday, June 1, 2015

Population Bomb: Forecast Gone Wrong?


Today the New York Times published a Retro Report titled The Unrealized Horrors of Population Explosion accompanied by the video above. The report takes a look at predictions made by American biologist Paul Ehrlich in his 1968 book The Population Bomb. In the book, Ehrlich presented future scenarios (he clearly stated they weren't predictions) in which, for example, by 1970 hundreds of millions of people would starve because population was growing faster than food production (basically a restatement of the Malthusian Catastrophe). It didn't happen and, regardless of Ehrlich's protestations, his "predictions" have come to be viewed as notorious examples of forecasts that proved wrong. But more is at stake here than Ehrlich's forecasting ability.

Population growth is the primary exogenous forcing variable in the IPCC Emission Scenarios, in Limits to Growth Models and in Neoclassical Economic Growth Models that do not recognize any limit to growth (see my discussion here). If Ehrlich's predictions were wrong, does that mean that there are no limits to growth and that we don't have to worry about CO2 emissions? I'll let you watch the video, read the New York Times article, follow the other links and think about this question.

In future posts, I'll make my own forecasts for World population and look at what some of the consequences might be.

Sunday, April 19, 2015

Is the US Printing Too Much Money?


The Federal Reserve, the central bank of the US, has the power to print money. The US has just been through the Financial Crisis of 2007-2008. As a result of the Financial Crisis, the US Federal government has gone into debt both to maintain operations in the face of decreased tax revenue and to stimulate the economy. The Federal Reserve could simply print money to erase the Federal Debt, but the fear is that printing money will lead to inflation.

In this post, I look at this issue using statistical models based on Complex Systems Theory and World-Systems Theory. The models show that the US has not printed too much money (but could at some point in the future and has at times in the past) and that the money supply has historically had little to do with inflation as measured by the Consumer Price Index (CPI). Other forces in the world-system are at work here, not just the policies of the US Federal Reserve.

Printing money has been a contentious issue throughout US history and the current episode is no different (if you want to read in more detail, type Is the US printing too much Money into the Google search engine). Monetary theory is also a contentious area in macroeconomics. If I tried to summarize the area, you would instantly stop reading this post. 

Let me just mention one theory that is easy to understand and applies to the question at hand (most monetary theory doesn't). The theory is Milton Friedman's k-percent rule. Simply put, the central bank should increase the money supply at some fixed percent, the k-percent. Contrast Friedman's theory to Keynesian counter-cyclical policy: the money supply should be increased during recessions to stimulate the economy and decreased after the recession to prevent inflation. The problem with each of these theories is "how much." How much should k-percent be, or how much should the money supply be increased during a recession and decreased afterwards?
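Here is a toy illustration in R of the difference between the two rules. The numbers are made up and say nothing about what the "right" k would be:

# Friedman's k-percent rule versus a crude counter-cyclical rule.
# All numbers are made up; this says nothing about the "right" k.
years <- 0:20
M0 <- 1000                     # starting money supply (arbitrary units)
k  <- 0.05                     # a 5 percent k-percent rule

k_percent_path <- M0 * (1 + k)^years

# Counter-cyclical: grow faster in assumed "recession" years, slower otherwise.
recession <- years %in% c(5, 6, 12, 13)
growth <- ifelse(recession, 0.10, 0.03)
counter_cyclical_path <- M0 * cumprod(c(1, 1 + growth[-1]))

matplot(years, cbind(k_percent_path, counter_cyclical_path), type = "l",
        lty = 1:2, xlab = "year", ylab = "money supply")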

The "how much" question could be rephrased in a way that would be understandable to stock market analysts who use technical analysis. The figure above is the US M1 money supply (the definition of the money supply that is under government control) taken from the Financial Forecast Center (FFC). It includes actual data starting in April 2012 and a forecast that starts in 2015. The forecast is made using artificial intelligence techniques, not economic theory. A simple form of technical analysis would just connect the high and the low points for M1 over a period of time (the dashed green and blue lines). The argument is that if M1 goes outside this range, it is changing too much. Using this form of analysis, what tends to scare analysts (the red arrow in the graph) is when M1 increases rapidly, as it did after Dec-2014. A problem with the graph above is its limited historical scope. We'd really like to look further back to set reasonable ranges and decide how M1 has fluctuated historically. In any event, the FFC is forecasting a peak in M1 for 2015.
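One crude way to implement the technical-analysis idea is to fit a trend line and shift it up and down so that it passes through the highest and lowest points, giving a channel; future observations falling outside the channel would be flagged as M1 "changing too much." The sketch below uses a simulated series, not the actual M1 data:

# A crude "trend channel" version of the technical analysis described above.
# The series here is simulated, not actual M1 data.
set.seed(3)
m1 <- cumsum(rnorm(36, mean = 2, sd = 5)) + 2500   # stand-in monthly series
t  <- seq_along(m1)

trend <- lm(m1 ~ t)                              # overall trend line
upper <- fitted(trend) + max(residuals(trend))   # line through the highest point
lower <- fitted(trend) + min(residuals(trend))   # line through the lowest point

plot(t, m1, type = "l", xlab = "month", ylab = "M1 (simulated)")
lines(t, upper, lty = 2)   # future values above this line would be "changing too much"
lines(t, lower, lty = 2)   # future values below this line would be "changing too little"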


The figure above shows M1NS (M1, not seasonally adjusted) from the Federal Reserve. We can see that the money supply expanded during the Dot-com Bubble but remained fairly flat until 2009. Why did M1 increase during the Dot-com Bubble, and what would have happened had it continued increasing (line A) rather than flattening out until 2010? Were the sharp increases in the money supply (lines B and C) after the Financial Crisis justified or something to be feared? And, what are the dashed green, red, and blue lines in the figure?

The dashed green and blue lines are the 98% bootstrap prediction intervals for the dashed red line, which is the attractor path for M1. The attractor path is the simulated time path of M1 derived from a state space model of the US economy. It shows what M1 would have been (a fictional line) without random shocks (the black line is the fact line). The attractor path is the line to which M1 will return without random shocks. The conclusion is that from before 1980 until 2000, M1 was too high. After 2000, until 2012, M1 was too low. As of 2012, M1 was right on the attractor path; if it stays there increasing at k-percent per year, M1 will be just right and it cannot be said that the US is printing too much money.


Now let's look at the US inflation rate as measured by the Consumer Price Index (CPI). The graph above is another forecast from the Financial Forecast Center (FFC), this time looking at the rate of change in the CPI. There have been a lot of increases and decreases in the CPI since Apr-12. Each increase (solid red arrow) could have been used by commentators to trigger fears of inflation. Technical analysis shows that the swings are increasing but have never peaked much over 2%, while the FFC forecast is for essentially zero inflation after Dec-2014. Had the US been printing too much money and had all that money printing created inflation, we should have seen it here, and we don't.


The forecast above is for CPIAUCNS (the CPI for All Urban Consumers, not seasonally adjusted), again from the Federal Reserve. In this case, the model is forecasting the level of the CPI, not the rate of change. It's very easy to see that the CPI is on the attractor path and well within the 98% prediction intervals, unlike M1. You can pick particular blips (for example, the red arrow) and become worried about inflation, but the blips are random variation, all within probable ranges. 

The fact that the dynamics of M1 and the CPI are very different means they are being driven by different forces. The M1 is best explained by the state of the US economy and the CPI is best explained by the state of the World system. This should make some sense since the US is a globalized economy that controls its currency through the Federal Reserve and is at the same time the hegemonic leader of the World-system. These issues seem to escape most monetary models and economic models of inflation.

NOTE: In case you are wondering how good the state-space models are at predicting M1 one month into the future (the typical criterion for econometric models), the forecast graph is presented below.


The models do an excellent job, with very tight prediction intervals that of course get wider into the future. The two models used for the forecasts are the USL20 model and the WL20 model. The US M1 model is here and the US CPI model is here. Explanations for how to use the models are available here.

QUESTIONS FOR FUTURE POSTS:
  1. What are the forces in the US Economy and the World System that drive monetary policy?
  2. Why was the M1 too high during the Dot-com Bubble and too low afterwards?
  3. During the Financial Crisis of 2007-2008, M1 growth was pretty flat. Was the US Federal Reserve trying to pop the Subprime Mortgage Bubble?
  4. What would be a reasonable value for Friedman's k-percent? In 2015, the annualized growth rate of the M1 attractor was about 5%. Should the value of k-percent increase, decrease or stay the same in the future?
  5. What are the forces in the World System that drive inflation?
  6. Did the US recently go through a Debt Crisis similar to ones in Europe and Latin America?
  7. Would harsher Austerity Policies have produced a better or worse outcome in the US? Are stronger Austerity Policies needed in the future? 
  8. What about the performance of Federal Reserve policy instruments such as the Fed Funds Rate?
  9. What about the behavior of interest rates and the Zero Lower Bound problem?


Thursday, February 26, 2015

Does Incarceration Reduce Crime Rates?


Today, the Center on Budget and Policy Priorities posted the above graphic on Twitter (here) suggesting that the huge rise in the Incarceration rate (yellow line) had little impact on either the Violent (blue line) or Property crime rates (gray line). My colleague, Riccardo Fiorito, posted a reply on Twitter (here) suggesting (well more than suggesting, he actually offered an elasticity coefficient) that maybe there is some small effect. Wisely or not, I also replied suggesting that a time series model could provide a test of the idea.

I was able to find the data on which the CBPP graph was based and started developing a state space model. My first inclination was to include both total crimes (violent and property crimes added together) and the total number of prisoners as dependent variables. That is, crimes and incarcerations form a system: crimes generate some incarcerations for those caught, tried and convicted, and incarceration rates must send some message to criminals (imagine if no one was caught, tried and convicted). I also tested a model where total crimes was the single dependent variable and incarcerations was the single independent variable. And, I tested two other models controlling for World and US economic conditions. Without entering the debate about the role of economic conditions, if there is some relationship between poor economic performance and incarcerations, I wanted to control for the effect. Finally, I estimated total crimes and total incarcerations rather than the rates presented in the CBPP graph. I was not sure what the "rate" represented (per 100,000 population, per 100,000 adult male population, etc.) so I used the raw numbers (a rate model could be estimated later if anyone is still interested).


The best model was chosen using the lowest AIC (Akaike Information Criterion) statistic. The models were all estimated in R using the dse package (I can make the models available if anyone is interested). The best model was the systems model (total crimes and incarcerations as the output variables) controlling for economic conditions in the World System. The US is a globalized country, and controlling for conditions in the World economy is a bit more general than just controlling for US economic conditions.
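For anyone who wants to reproduce this kind of comparison, a minimal sketch with the dse package might look like the following. The series are simulated placeholders (not the actual crime, incarceration or world-system data), and I am assuming the standard TSdata / estVARXls / informationTests workflow rather than reproducing the exact models used here:

# Sketch of the model comparison using the dse package. The series below are
# simulated placeholders, not the actual crime, incarceration or world-system data.
library(dse)
set.seed(4)
n <- 50
crimes         <- ts(cumsum(rnorm(n, 10, 50)) + 5000)
incarcerations <- ts(cumsum(rnorm(n,  5, 20)) + 1000)
world_gdp      <- ts(cumsum(rnorm(n,  2,  1)) +  100)   # stand-in economic control

# Single-equation style: crimes as output, incarcerations as input.
d1 <- TSdata(input = incarcerations, output = crimes)
m1 <- estVARXls(d1, max.lag = 4)

# Systems model: crimes and incarcerations jointly, world economy as input.
d2 <- TSdata(input  = world_gdp,
             output = cbind(crimes, incarcerations))
m2 <- estVARXls(d2, max.lag = 4)

# Compare fit; informationTests() reports AIC among other criteria.
informationTests(m1, m2)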

The best way to understand the estimation is from the Impulse Response graph (above). The two plots on the upper part of the figure show the impact of a one-time increase in crimes on both crimes and incarcerations (controlling for World economic conditions). What is interesting is that it takes the law enforcement system about four years to respond to a one-time shock in crime with increased incarcerations. You can also see that incarcerations increase disproportionately, at a five-to-one ratio (an increase of one crime creates five more incarcerations four years in the future; Riccardo thought the lag length might be two years). The lower panel shows the effect of an increase in incarcerations on total crimes. Incarcerations do decrease crimes, but the effect is very small (and non-significant using bootstrap t-statistics).
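For readers unfamiliar with impulse responses, the idea is simple: hit one variable with a one-unit shock and let the estimated coefficient matrices propagate it forward. The sketch below uses a made-up VAR(1) coefficient matrix, not the estimated crime/incarceration model (which has longer lags), but the mechanics are the same:

# Tracing an impulse response from a VAR(1): shock one variable by one unit
# and propagate the coefficient matrix forward. The matrix A below is made up,
# not the estimated crime/incarceration model (which has longer lags).
A <- matrix(c(0.80, -0.02,    # crimes(t)         <- crimes(t-1), incarcerations(t-1)
              0.30,  0.70),   # incarcerations(t) <- crimes(t-1), incarcerations(t-1)
            nrow = 2, byrow = TRUE)

horizon <- 12
irf <- matrix(0, nrow = horizon + 1, ncol = 2,
              dimnames = list(NULL, c("crimes", "incarcerations")))
irf[1, ] <- c(1, 0)                          # a one-time, one-unit shock to crimes
for (h in 2:(horizon + 1)) irf[h, ] <- A %*% irf[h - 1, ]

round(irf, 3)   # responses of both variables to the crime shock, period by period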

So, in summary, incarcerations increased so dramatically because the criminal justice system responded disproportionately to increases in crime. The effect on criminal activity was slight, possibly because it takes the criminal justice system so long to respond (a four-year lag seems to ensure that the reaction is quite divorced from the cause).


All this might be moot, as can be guessed from the CBPP graph. My forecast for the future is that both criminal activity and incarcerations will drop to quite low levels (but notice the upper 98% bootstrap prediction interval, the dashed green line) by 2040.