Shiller, Tobin and the “This Time is Different”: Analysing the US stock market with the rule of the 0.20



[This is a translation of a previous post]

Robert Shiller, professor of economics at Yale University (and Nobel laureate in 2013), has published the third edition of his book Irrational Exuberance. Beyond being a very entertaining and highly recommendable read, the most important thing about this publication is not the book per se but its timing: the first edition appeared in 2000, just before the dot-com bubble burst, and the second in 2005, two years before the US real estate meltdown. In both editions Shiller warned about the unsustainability of stock market and real estate valuations, respectively, and as it turned out, he was right. In this new edition he introduces the concept of a new-normal boom, pointing especially at fixed income valuations – though he also mentions stock markets. The right timing of the two previous editions makes many analysts wonder whether we are at the gates of another financial crisis.

This post does not intend to review Shiller’s book, but rather to contribute to the debate about US stock market valuations – a debate that has been raging in the blogosphere for several years. It is worth remembering that there is no comparable debate in Spain (in terms of quantity and quality), although it would be highly beneficial. To understand what is at stake in the US debate, the best way is to begin with some numbers. The following graph, courtesy of Shiller, is a good starting point. It shows the historical evolution of the S&P 500 and corporate profits, both deflated by a consumer price index in order to improve comparability over time:

S&P 500 and corporate earnings, historical

Own elaboration

It is clear that the basic intuition ‘higher corporate profits, higher prices’ is confirmed (with some important exceptions, as in the 1990s). However, it can also be seen that since the 1990s stock valuations (how much one is willing to pay for a dollar of profits) have risen notably: in particular, since the Global Financial Crisis of 2007 the S&P 500 in real terms has increased more than corporate profits, which has driven up many valuation metrics. Which metrics?

Shiller’s CAPE and Tobin’s q as the judges

Broadly speaking, there are three points of view in the valuation debate. The first is made up of perma-bulls (always optimistic), who range from brokers offering financial advice to their clients (and for whom, by definition, the market is never expensive) to eminences such as Jeremy Siegel, and who argue that the upward historical trend is the investor’s best ally. The second group consists of people who think that current price levels are high by historical standards, such as Andrew Smithers or James Montier; they use financial metrics such as Shiller’s CAPE (explained below) or Tobin’s q to gauge whether the market is expensive or not. Finally, the last group is made up of (for lack of a better name) sceptics: people who generally admit that the market may be overvalued, but argue that financial metrics such as CAPE or q are problematic and should not be used so blithely (their main arguments concern how these metrics are built and the changing nature of economies over time).

Such disquisitions have (mainly) revolved around three indicators: Shiller’s CAPE, Tobin’s q and Buffett’s ratio. Shiller’s CAPE (cyclically adjusted price-earnings ratio) is a price-earnings ratio whose numerator is the deflated S&P 500 and whose denominator is the ten-year average of deflated earnings. We have already explained why the S&P 500 must be corrected for inflation. The reason an average is taken for earnings is to smooth out the effects of the business cycle (so as to avoid taking a figure for profits that is too high or too low). The following graph shows the evolution of the CAPE in the US (again, courtesy of Shiller):

Shiller's CAPE, up to April'15

Own elaboration

The reason the CAPE is considered a good indicator of market valuations is the same reason the PER is considered a good indicator at the micro level: it measures how much you pay for the profits of a company. As the graph shows, recent market valuations stand at almost 30 times the average profits of the last decade, and if we are mean-reversion believers, the index looks overvalued by a factor of around 1.7.
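The construction just described can be sketched in a few lines of code. This is a minimal illustration, assuming monthly price, earnings and CPI series; the function name and argument layout are ours, not Shiller’s actual spreadsheet:

```python
# Minimal sketch of the CAPE: real price over the trailing ten-year
# average of real earnings (monthly data assumed, so a 120-period window).
import pandas as pd

def cape(prices, earnings, cpi, window_years=10, periods_per_year=12):
    """Cyclically adjusted price-earnings ratio."""
    # Deflate both series to the latest period's dollars.
    real_prices = prices * cpi.iloc[-1] / cpi
    real_earnings = earnings * cpi.iloc[-1] / cpi
    # Smooth earnings over the business cycle with a ten-year average.
    avg_earnings = real_earnings.rolling(window_years * periods_per_year).mean()
    return real_prices / avg_earnings
```

With a flat hypothetical series (constant earnings of 1 and a price of 20), the latest CAPE value is simply 20, which is a quick sanity check that the deflation and averaging cancel out as they should.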

Tobin’s q is an old friend of readers of this blog: it measures the ratio of firms’ market value to their replacement cost (as we explained in a previous post, the cost of starting the firm from scratch at current prices). It seems our readers are in luck, because q is also considered a very good indicator of stock market valuations (thanks largely to Smithers‘ “disseminating efforts”). The following graph shows the evolution of q in the US:

Tobin's q, 1951-2014

Own elaboration

Again, by historical standards, the q theory says that the US stock market is expensive (the q series comes from the US national accounts, so it usually has a lag of one quarter). The historical average has been 0.71 (perhaps in a future post we will explain why q has historically been less than 1, contrary to what standard economic theory predicts), while the current level is 1.1. Again, if you were a fervent mean-reversion believer, you would expect q to adjust by about 0.40 points in the medium term – a correction of around 35%.
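The mean-reversion arithmetic behind those two figures is worth making explicit. A back-of-the-envelope check, using only the numbers quoted above:

```python
# Mean-reversion arithmetic for Tobin's q, with the figures from the text:
# current q = 1.1, historical average = 0.71.
q_now, q_mean = 1.1, 0.71

# Points by which q would have to fall to return to its mean.
points_to_adjust = q_now - q_mean          # about 0.39 points

# Expressed as a percentage drop from today's level.
implied_correction = points_to_adjust / q_now   # about 0.35, i.e. ~35%
```

The ~0.40-point adjustment and the ~35% correction in the text are just these two numbers rounded.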

Finally, we have to deal with “Buffett’s indicator”. Its name stems from Buffett mentioning it in passing in an interview – and from the fact that everything in finance has to have a nickname. It is the ratio of the market capitalisation of an index to the GDP of a country. Usually, the Wilshire 5000 or the value of the corporate sector from the national accounts is taken as a proxy for market cap. Here we will use the deflated Wilshire 5000 and US real GDP:

Buffett's ratio, 1971-2014

Own elaboration

And, guess what: this indicator offers the same conclusion – the US stock market is expensive by historical standards.
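The indicator itself is the simplest of the three to compute. A sketch with made-up yearly figures (the real inputs would be the deflated Wilshire 5000 and US real GDP):

```python
# Buffett's indicator: market capitalisation over GDP, year by year.
# The numbers below are hypothetical placeholders, not actual data.
def buffett_ratio(real_market_cap, real_gdp):
    """Both series must be deflated with the same price index so the
    price level cancels out of the ratio."""
    return [mc / gdp for mc, gdp in zip(real_market_cap, real_gdp)]

ratios = buffett_ratio([10.0, 14.0, 18.0], [12.0, 13.0, 14.0])
# A rising series signals richer valuations by this yardstick.
```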

Summing up, the main argument of the second group (mean-reversion advocates) is that the main financial metrics exhibit “reversion to the mean”: that is, the best predictor of where a metric will be in the medium-to-long run (say, between 3 and 7 years) is its own historical average.

This Time is Different: or why higher valuations are the new-normal

Our opinion is that, yes, the US stock market is expensive, pretty expensive actually, but we are quite skeptical about the predictive value of historical averages for this kind of analysis.

Our readers already know that we believe that equilibrium valuations (defined as those at which an index is roughly properly valued) change over time and, in particular, that index valuations are not independent of the growth rate of the economy. In other words, it is nonsensical to argue that the historical average will remain immutable regardless of future macroeconomic conditions. However, the long-run relation between stock market valuations and growth rates is not the one you might expect. In a previous post we explained why one should expect a negative relation between growth rates (measured by real GDP) and Tobin’s q. This observation fits nicely, for instance, with the US experience. The following graph shows real GDP growth rates since 1960:

The growth of the US economy has gradually slowed over the last decades: if in the 1960s average rates were around 3.5%, in the 2000s they were barely 2% (several economists adduce secular stagnation, something we have already written about on this blog). This slowdown in growth has been accompanied by a significant increase in the values of q, as the previous graph shows. In any case, the crucial question for the issue at hand is whether a simple relation between GDP growth rates and q levels can be found (both measured over long periods of time, say, decades).

The rule of the 0.20

In fact, an empirical approximation to the relation between growth rates and Tobin’s q can be obtained. In other words, we are interested in how much q would change if the growth rate changed by (for instance) 1% – holding everything else constant, as economists like to do. It turns out that this change can be approximated as:

change in q ≈ change in the growth rate × (1 / propensity to consume out of wealth)

All we need to know is the value of the propensity to consume out of wealth. This value stands for the percentage consumed out of the total wealth people own (shares, real estate, bank accounts, etc.). Obviously, it changes over time and differs greatly across countries (indeed, it differs even across asset types, because selling real estate is not the same as selling stocks), but the specialised literature (here) finds values in the 3%-7% range, so we think 5% is a good approximation. If that value is taken as a guide, then for every 1% change in growth rates, q should change (in the opposite direction, given the negative relation above) by roughly 0.20. For the sake of convenience, we have decided to dub this “the rule of the 0.20” (agreed, nothing to write home about, but the reasons are obvious).

The immediate applications of this simple rule are many. For instance, up to the 1990s the average value of q was 0.55. If we assume that average real GDP growth has fallen by 1.5% since the beginning of the 1990s, then we should expect a new average value of q around 0.20 × 1.5 = 0.30 higher than the historical average up to that point, that is, an average value of 0.55 + 0.30 = 0.85 (it turns out that the average value since then has been 0.93, so we think the approximation is reasonable).
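The worked example above fits in a handful of lines. A sketch of the rule, where the 5% propensity to consume out of wealth is the working assumption taken from the literature’s 3%-7% range (the function name is ours):

```python
# The "rule of the 0.20": |change in q| ≈ |change in growth rate| / c,
# where c is the propensity to consume out of wealth.
def q_shift(growth_change, mpc_wealth=0.05):
    """Magnitude of the shift in q's equilibrium level; the sign is
    opposite to the growth change (lower growth -> higher q)."""
    return growth_change / mpc_wealth

# Worked example from the post: growth falls by 1.5 percentage points,
# so q's equilibrium average should rise by about 0.30.
shift = q_shift(0.015)          # 0.30
new_q_average = 0.55 + shift    # 0.85, versus the observed 0.93
```

Note how the rule’s name falls out directly: a 1% change in growth (0.01) divided by a 5% propensity (0.05) gives 0.20.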

We would like to point out immediately that the aim of the previous numerical exercise is not to quantify precisely the changes in q’s equilibrium values (we think that is really difficult, and the rule above is just an approximation), but rather to stress that one has to be very careful when analysing stock indices using historical averages, given that they are not independent of macroeconomic conditions. An analysis based on historical averages alone would lead us to conclude that the market is more overvalued than it actually is. A separate question is what permanently higher valuation metrics imply for investors and the economy as a whole. The answer is difficult and beyond the scope of this post. But we do think that if US growth rates do not rise on a permanent basis, then high valuation metrics (not as high as the current ones, but around the decade’s average) are here to stay. Keep an eye on q, then. This time is different.

[Note: as Fermat would say, we have a wonderful proof of the 0.20 approximation, but the margins of this blog (or rather, our readers’ patience) are not wide enough to contain it. We therefore beg our patient readers to ask us to clarify any possible concerns.]
