I think you should read Irwin’s article on Gustav Cassel

17/Feb/2014 2 comments

Last week Marcus Nunes linked to a paper on Gustav Cassel by Douglas Irwin: “Who Anticipated the Great Depression? Gustav Cassel versus Keynes and Hayek on the Interwar Gold Standard”. I read the paper today and must say that Cassel has got to be the most underappreciated economic thinker of the 20th century. Cassel foresaw the Great Depression and then correctly prescribed its cure, as the depression was happening! I had some rough memory that Cassel had been involved with Sweden’s highly successful price level targeting scheme in the early 1930s, but I didn’t know that he was so prominent or that he’d tried to ameliorate the flaws in the gold standard between the wars. What’s even more remarkable is that I have an economics degree from Cassel’s alma mater and can’t recall ever hearing a word about him in class!

How is it that Cassel is nearly forgotten while the long-winded (and incomprehensible) von Mises gets his own internet cult, to say nothing of Lord Keynes’s sainthood? I guess it’s because, while Cassel got it right, he was not heeded by policymakers in the big economies. I’d bet his stand against fiscal stimulus didn’t help his popularity either. Any young economist looking out for his career in the 1930s or 40s would naturally gravitate to a vulgar Keynesian view. After all, a socialist Soviet Union crushed a socialist Nazi Germany, so that proves central planning rocks, right? Going around saying that monetary policy controls the business cycle is likely to do a lot less for your career than making up multiplier estimates. Sadly, this is still true.

My point isn’t that we should necessarily hero worship Cassel, but that it’s a bummer his line of thinking was essentially forgotten for 30 years.

Categories: Gustav Cassel

About that jobs number…

8/Feb/2014 Comments off

Ok, so that weak December jobs number stands after the second estimate. Third time’s the charm? Maybe…

My excuse is that, had the number been revised upward, I’d have looked insightful. Was worth a shot.

In all seriousness though, it’s important not to get too worked up about government data releases. No one wants to hear this, but the fact is that we really only have a good read on the economy as it stood six months back. Until the NSA decides to start sharing data with the BLS and BEA, that’s just the way it is. This is why I don’t feel bad about not remembering what the jobs number was today. Did the markets move? Did EfficientForecast budge? No. So who cares?

I think traders grasp this better than economists, which is why I generally value the insights of hedge fund types a lot more than those of economists.

What was it that v. Hayek said again?

Categories: Uncategorized

I don’t believe the December jobs number

10/Jan/2014 Comments off

In general, U.S. government data are released before they’re ready.

If things are going to be reported as facts, we shouldn’t ignore evidence suggesting they are likely not facts. We know that for the BLS’s headline payroll number, the expected absolute gap between the first print and the third print is potentially huge. Hell, the gap is still big between the third print and the yearly “benchmark” revisions.

The early payroll and GDP numbers don’t mean much to me unless there is reason to believe they’ll affect the Fed’s behavior.

This 74 thousand number means almost nothing to me.

Categories: Uncategorized

Markets dig the taper

24/Dec/2013 Comments off

I haven’t yet read what others have to say on the subject, but it seems to me that markets like the taper.

The taper shows that the Fed can produce a ‘QE-like’ effect on expectations simply by speaking. The Fed’s actions seem to have slightly boosted the outlook, though NGDP will probably still grow around 4% to 5% per year in the near future. That is to say, the Fed has still not given enough stimulus to hasten the closing of the output gap. Still, they’ve taken a step toward crafting QE-free policy, which is the biggest news in a while. Essentially, the Fed needs to find the will to voice forward guidance in a way that offsets the expectational effect that a ‘normalizing’ monetary base would stir up.

NGDP graph

Any stimulus they can give is welcome. The media focus on the GDP number, but the GDI number from last week wasn’t especially strong, only up 4.5% from a year earlier. This is a respectable number under the post-recession regime, but also nothing particularly encouraging.

BTW, Merry Christmas.

Fama’s ideas

19/Oct/2013 Comments off

Check out this 2010 paper: My Life in Finance by Eugene Fama. It’s a good overview of everything Fama’s done, written by the man himself. Basically, it’s a reading list for me for the next year.

The paper makes me realize how little I know about finance as opposed to the related field of international macro, where I can always fall back on MV=PY, AS/AD, and the EMH when things get murky, and come out with sounder conclusions than those stuck in the ‘interest rates’ -> ‘changes in the growth rates of real variables’ -> ‘inflation’ paradigm.

One bit which caught my eye was Fama’s pointing out that finance has known about ‘fat tails’ for 50 years. I think all this Nassim Taleb ranting and raving about “Gaussian” this and “Platonic” that is a bit overdone (to say nothing of his war on Dawkins and Pinker). You can use GMM to fit credit models to macro data with nasty residuals. Then the issue of fat tails comes down to how imaginative you can be when feeding a stress scenario into said model. This is just what the BOE and Fed are doing these days, so in principle we’ve got the bailout issue under reasonable control.
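To make the fat-tails point concrete, here is a minimal sketch of the kind of thing I mean: fitting a Student’s t to fat-tailed residuals by matching moments (with an exactly identified moment set and identity weighting, this is the simplest special case of GMM). The data and every parameter value here are made up for illustration; this is not any particular credit model.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
# Stand-in for fat-tailed model residuals (made-up data, purely illustrative).
resid = stats.t.rvs(df=4.5, scale=0.02, size=2000, random_state=rng)

def moment_gap(params, x):
    """Squared distance between sample and model moments (variance, excess kurtosis)."""
    df, scale = params
    if df <= 4 or scale <= 0:          # excess kurtosis is undefined for df <= 4
        return 1e6
    model_var = stats.t.var(df, scale=scale)
    model_kurt = stats.t.stats(df, scale=scale, moments='k')
    return (model_var - np.var(x)) ** 2 + (model_kurt - stats.kurtosis(x)) ** 2

fit = optimize.minimize(moment_gap, x0=[6.0, 0.01], args=(resid,), method='Nelder-Mead')
df_hat, scale_hat = fit.x
# A crude 'stress scenario': how big is a 1-in-1000 shock under the fitted fat tail?
print(stats.t.ppf(0.001, df_hat, scale=scale_hat))
```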

I’m rambling now, so I’ll close by again urging you to read the Fama overview paper: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1553244  (click the ‘Download this paper’ button in the middle left)

 

Categories: EMH

Simulating NGDPLT with measurement error

23/Sep/2013 1 comment

I don’t think measurement error is much of a hurdle to effective NGDP level targeting. It seems obvious to me, but the ‘moving target’ objection is a common criticism. We could argue about the logic all day; the only way to put it to bed is to run some simulations and see whether measurement error really is such a big deal.

I’ve taken a first stab at this.

I’ve found that NGDP stability under a level targeting policy rule is not much affected by the measurement error variance, when using a zero mean normal density to model the error. My simulations also suggest that price level targeting is a poor substitute for NGDPLT, if you care about NGDP stability.

First I’ll tell you how the simulation is set up, and then I’ll show you some histograms which summarize the ‘potential histories’ the simulation made.

How the simulation works

At the simulation’s heart is a factor augmented VAR. You could just as well use a New Keynesian ‘three equation’ model, or whatever you like. All that is important is that you have a model which describes NGDP, prices and monetary policy. NGDP and prices should move together when policy is eased or tightened.

The VAR has six lags on the vector:

D_t = \begin{bmatrix}  \Delta NGDP_t \\[0.3em]  \Delta P_t \\[0.3em]  \Delta Score.1_t  \end{bmatrix}

where NGDP is the log of the average of NGDP and NGDI, P is the log of the personal consumption expenditures index less food and energy, and Score.1 is the first principal component of the following: the S&P 100, the S&P 500, the dollar index, the five-year yield, the five-year TIPS spread, three-month copper futures, front-month WTI futures, and the spread between the five-year yield and the 12-month yield.
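For concreteness, here is a rough sketch of how Score.1 and the rows of D_t might be assembled, assuming the quarterly series are already aligned in numpy arrays. The function and variable names are my own placeholders, not the actual code behind the post.

```python
import numpy as np

def first_principal_component(X):
    """First principal component score of the standardized columns of X."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    # SVD of the standardized data; the first right singular vector holds the loadings.
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[0]                          # Score.1 for each quarter

def build_D(ngdp_avg, core_pce, financial):
    """Stack [log NGDP, log P, Score.1] and take first differences, one row per quarter.

    ngdp_avg:  average of NGDP and NGDI levels
    core_pce:  PCE price index less food and energy
    financial: T x 8 array of the listed market series, aligned quarterly
    """
    score1 = first_principal_component(financial)
    levels = np.column_stack([np.log(ngdp_avg), np.log(core_pce), score1])
    return np.diff(levels, axis=0)            # Score.1 is differenced in levels, not logs
```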

I picked the number of lags based on what seemed to give the best response to Score.1 shocks. This model seems to be pretty good at making Great Recessions. Here is what happens when I dump a big negative shock into the Score.1_t equation and solve forward:

a shock to the model
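A minimal sketch of that experiment, assuming the D matrix from the previous sketch and statsmodels for the VAR fit; the shock size and horizon are arbitrary, and the real FAVAR code surely differs in detail.

```python
import numpy as np
from statsmodels.tsa.api import VAR

def shock_response(D, shock_size=-3.0, horizon=20):
    """Fit the six-lag VAR, hit the Score.1 equation at impact, and solve forward."""
    res = VAR(D).fit(6)                        # six lags, as in the post
    history = D[-6:].copy()                    # the last k_ar observations
    shock = np.array([0.0, 0.0, shock_size])   # lands only in the Score.1 equation
    path = []
    for h in range(horizon):
        step = res.forecast(history, 1)[0]     # one-step-ahead conditional mean
        if h == 0:
            step = step + shock                # dump the shock in at impact
        path.append(step)
        history = np.vstack([history[1:], step])
    return np.array(path)                      # simulated response of all three variables
```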

In the first step a measurement error (\epsilon_t ) is drawn from a random number generator.

\epsilon_t \sim N(0,\sigma^{\epsilon})

This \sigma^\epsilon is the key to the whole thing. It measures how volatile NGDP measurement error is.
I thought it’d be reasonable for measurement error to decay with time. So after \epsilon_t is drawn, it is loaded into a vector of earlier measurement error draws (in the first quarter of the simulation I made seed values for these). In each new quarter of the simulation, the measurement errors are moved back one spot in the vector and shrunk by half. Only the four latest measurement errors are kept; after four quarters I assume NGDP is fully observed.

E_t = \begin{bmatrix}  .125\epsilon_{t-3} \\[0.3em]  .25\epsilon_{t-2} \\[0.3em]  .5\epsilon_{t-1} \\[0.3em]  \epsilon_{t}   \end{bmatrix}

These values are added to the four latest NGDP levels to make the Fed’s life hard. The Fed sees more or less the true level of recent NGDP, but the error could be enough to meaningfully change the apparent recent growth trend, leading to a policy misstep.
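In code, the decaying error vector might look like this small sketch; \sigma^\epsilon and the function names are mine, not the post’s.

```python
import numpy as np

rng = np.random.default_rng(1)

def update_errors(past_errors, sigma_eps=0.005):
    """Move each old error back one spot and halve it, then append a fresh draw."""
    eps_t = rng.normal(0.0, sigma_eps)
    shifted = 0.5 * past_errors[1:]           # the oldest error drops out entirely
    return np.append(shifted, eps_t)          # keeps only the four latest errors

def fed_view_of_ngdp(true_log_ngdp, errors):
    """What the Fed 'sees': true log NGDP with the last four levels mismeasured."""
    observed = true_log_ngdp.copy()
    observed[-4:] = observed[-4:] + errors    # errors ordered oldest -> newest
    return observed
```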

The Fed uses the FAVAR to forecast NGDP a year ahead, but with the four mismeasured NGDP observations in place of the true ones. And because it would be unfair to hand the Fed the ‘true economy model’ represented by the FAVAR, I also add a random forecast error (\epsilon_t^F ) to the NGDP forecast before the Fed ‘sees’ it.

\epsilon_t^F \sim N(0, \sigma^F)

The Fed sets monetary policy using this NGDP forecast, which might be a good forecast or might not, depending on the combined effects of the forecast and measurement errors in a given quarter. The Fed compares the forecasted level of NGDP with the target for that quarter. The target is given by a 4% yearly trend line running from 2013Q2 NGDP forward 30 quarters. If the forecast is above this line, the Fed dumps a tiny negative shock into the Score.1 equation of its FAVAR and runs its forecast again (with the same forecast error and NGDP measurement errors as in the first run), checking whether the forecast is now within rounding error of the target. It repeats this process, making the ‘shock’ to financial markets a little bigger each iteration, until it finds the financial shock which puts policy right on target. If the first forecast was below target, the process works the same, except the Fed looks for the right-sized upward shock to bring expected growth up to target.
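The search might look something like this stripped-down sketch. Here `forecast_ngdp_level` stands in for the year-ahead FAVAR forecast (with the quarter’s measurement and forecast errors held fixed across iterations), and the step size and tolerance are guesses on my part.

```python
def find_policy_shock(forecast_ngdp_level, target_level, tol=1e-4, step=0.01,
                      max_iter=1000):
    """Grow a Score.1 shock until the year-ahead NGDP forecast is on target."""
    direction = -1.0 if forecast_ngdp_level(0.0) > target_level else 1.0
    shock = 0.0
    for _ in range(max_iter):
        gap = forecast_ngdp_level(shock) - target_level
        if abs(gap) < tol:                    # within 'rounding error' of the target
            return shock
        shock += direction * step             # make the shock a little bigger
    return shock
```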

So Score.1 is the policy instrument. I think this is a useful way to model monetary policy, because it potentially includes everything the Fed can do to affect expectations: threats, promises, shows of cluelessness, QE and interest rate changes.

The Fed’s financial shock is dumped into the true FAVAR, which is solved forward one quarter using the true NGDP values. Random draws from the empirical residuals of each equation in the model are also added. The output from this single-quarter solve is treated as new data D_{t+1} and added to the matrix which stores the D_t vectors.

The Fed has now done one quarter’s worth of monetary policy. Then it begins again in the new quarter: drawing a new NGDP measurement error, making a new year-ahead forecast, being befuddled by a new random forecast error, and finding a new upward or downward push to financial markets which it thinks will lead to on-target NGDP growth. This process repeats for 30 quarters, at which point the simulation ends.
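One simulated history might be stitched together roughly as follows; `fed_forecast` is a hypothetical helper wrapping the year-ahead forecast described above (returning a forecasted NGDP level), the other functions are the earlier sketches, and `res` is the fitted VAR.

```python
import numpy as np

rng = np.random.default_rng(2)

def run_history(res, D, target_path, sigma_eps, sigma_f=0.001, quarters=30):
    """Simulate one 30-quarter 'potential history' under the NGDPLT rule."""
    errors = np.zeros(4)                             # seed measurement errors
    for q in range(quarters):
        errors = update_errors(errors, sigma_eps)    # this quarter's mismeasurement
        eps_f = rng.normal(0.0, sigma_f)             # forecast error the Fed can't see
        # fed_forecast is hypothetical: year-ahead NGDP level seen through the errors.
        forecaster = lambda s: fed_forecast(res, D, errors, eps_f, s)
        shock = find_policy_shock(forecaster, target_path[q])
        # True one-quarter solve: conditional mean + policy shock + bootstrapped residual.
        innovation = np.array([0.0, 0.0, shock])
        resid_draw = res.resid[rng.integers(len(res.resid))]
        new_row = res.forecast(D[-6:], 1)[0] + innovation + resid_draw
        D = np.vstack([D, new_row])                  # append D_{t+1} and move on
    return D
```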

Some graphs

I ran this simulation under a few different parameter values for \sigma^\epsilon , keeping \sigma^F=0.001 (the standard deviation of forecast error). To see how it might stack up against price level targeting (which I thought would be the most charitable alternative), I ran a slight variation of the simulation using a 2% per year level target for P_t. This simulation was more or less the same as the one outlined above, but instead of targeting NGDP, the Fed made adjustments to hit the price level target. NGDP measurement error is still present in the PLT simulation, so insofar as NGDP is useful for forecasting prices, the measurement error will lead that forecast astray.

Here are two sample runs of the NGDPLT rule and the PLT rule with \sigma^\epsilon = .005 (click to see)

A run of NGDPLT

A run of PLT

Here are the results of those simulations (batches of 500 runs) in histogram format. The variable shown in the histograms is the correlation coefficient of log NGDP with a linear sequence (1, 2, …, 30). If the NGDP growth rate were the same in every quarter of the 30-quarter simulation (perfect stability), this correlation coefficient would be 1.0 (exponential growth is linear in logs). Using this measure makes it fair to compare NGDP stability in the NGDPLT regime and the PLT regime because it doesn’t force PLT to follow a particular NGDP level path; it just evaluates how steady the growth rate is. If the 2% PLT led to perfect 5% NGDP growth, the correlation would be 1.0, just as it would be under perfect 4% NGDP growth.
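That stability score is simple to compute; a minimal version is below (names are mine).

```python
import numpy as np

def stability_score(log_ngdp_path):
    """Correlation of log NGDP with a linear trend: 1.0 means perfectly steady growth."""
    trend = np.arange(1, len(log_ngdp_path) + 1)
    return np.corrcoef(log_ngdp_path, trend)[0, 1]
```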

\sigma^\epsilon = .001
historgram_point001

\sigma^\epsilon = .002

historgram_point002

\sigma^\epsilon = .01

historgram_point005

You can see that at each measurement error setting, NGDPLT leads to much higher odds of NGDP stability. Not perfect stability, but there is a tight central tendency under NGDPLT, whereas PLT is widely spread. Interestingly, as NGDP measurement error goes up, PLT gives worse and worse results. This is because, in this tiny model, NGDP is an important predictor of the price level. As you lose information about the recent NGDP trend, you lose information about future prices, at least in this setup.

In case anyone thinks it unfair that I set \sigma^F=.001 for both the price level forecast and the NGDP forecast, I ran another batch of simulations with forecast error turned off for the price level case and \sigma^\epsilon =0.001. I then cranked \sigma^\epsilon up to 0.01 for the NGDPLT case. Here is the result:

histogrammixed

Maybe you think NGDP is actually not that useful for forecasting prices. In that case, let’s drop that ‘control’ experiment and just look at how NGDP stability changes with increasing measurement error. Here is how the different measurement error settings look for NGDPLT alone. I gave up trying to get Greek letters into the legend; the “low” to “Highest” labels stand for \sigma^\epsilon = .001, .002, .005, .01.

histogram_all_ngdplt

I didn’t do anything tricky to get these results. I went straight from (1) finding a VAR that looked like it had reasonable responses to financial shocks to (2) building the simulation functions around that VAR. These are the first outputs I got.

The simulation is hardly perfect. ‘Macroeconomic models are toys, sometimes toys are useful.’ In reality, if the Fed announced an NGDP level target, it wouldn’t need to manipulate financial markets like I’ve done here. The Chuck Norris effect would do most of the work, and the Fed might only need to make small adjustments to monetary base growth here and there to maintain credibility. However, my approach gets at the underlying logic: the Fed has a communication tool, and NGDP and prices respond to that tool.

I could try a few different types of models and see if the results are sensitive to my choice of the FAVAR. If you like, suggest a model specification in the comments (link to a paper, maybe) and I’ll consider rerunning the simulation with a different model. I’m confident something like this result will show up. If you want steady NGDP, what’s the best way to get it? To try to stabilize NGDP? Or to… do something else? I realize that life is full of counterintuitive ways of getting things done, so if there is some trick to getting stable NGDP (target the price of beans), then let’s do that, but let’s do that because we want stable NGDP.

They keep trying to find bubbles

21/Sep/2013 2 comments

There’s an article on Bloomberg worth nitpicking: Asset Bubbles Found by Finnish Economist Inspired by Grandfather.

Here is a quote to give you the gist of what Dr. Taipalus (the economist the article is about, it’s a bit of a personality piece) has done:

Feed in dividend yields and stock indexes, and Taipalus’s indicator signals every major U.S. stock-price bubble since 1871. Input rent indexes and house prices, and it signals when increases in the cost of homes are becoming unhinged.

This is quite a statement, and as I’m an unrepentant market fundamentalist, I take issue with it. I don’t doubt one could build an indicator which looks like it predicts past stock and housing market downswings. What I do doubt is said indicator’s ability to offer any useful guidance outside the sample its inputs were calibrated to fit. There are a few thousand people working in hedge funds, people with massive brains, grinding away with data mining, Bayesian methods and good old-fashioned logic and research. These people live to find ways to foretell a drop in stocks. I guarantee you they’ve already baked all the information Taipalus is dealing with into equity prices.

A George Soros quote comes to mind; it goes something like this: ‘Imagine I had a model which forecasted stock prices. As soon as that model went public, prices would adapt to it and it would stop working.’ He was talking about his ‘Theory of Reflexivity’, the particulars of which I’ve forgotten, but I often think of the quote when I hear people claim to have some rule for predicting stocks.

As soon as traders find a market inefficiency, the inefficiency ceases to exist. Otherwise anyone could get rich by exploiting it. It’s a waste of time to keep going in circles with the EMH like this. Even if Taipalus got rich trading on her indicator (the only way I’d believe it really ‘worked’), publishing the indicator’s methodology makes it useless.

Should we be surprised the blaggards at the European Central Bank [who supported the indicator’s development] keep getting things wrong?
