About that jobs number…

8/Feb/2014 Comments off

Ok, so that weak December jobs number stands after the second estimate. Third time’s the charm? Maybe…

My excuse is that, had the number been revised upward, I’d have looked insightful. Was worth a shot.

In all seriousness though, it’s important not to get too worked up about government data releases. No one wants to hear this, but the fact is that we really only have a good read on the economy as it stood six months back. Until the NSA decides to start sharing data with the BLS and BEA, that’s just the way it is. This is why I don’t feel bad about not remembering what the jobs number was today. Did the markets move? Did EfficientForecast budge? No. So who cares?

I think traders grasp this better than economists, which is why I generally value the insights of hedge fund types a lot more than those of economists.

What was it that v. Hayek said again?

Categories: Uncategorized

I don’t believe the December jobs number

10/Jan/2014 Comments off

In general, U.S. government data are released before they’re ready.

If things are going to be reported as facts, we shouldn’t ignore evidence which suggests they are likely not to be facts. We know that when it comes to the BLS’ headline payroll number, the expected absolute gap between the first print and the third print is potentially huge. Hell, the gap is still big between the third print and the yearly “benchmark” revisions.

The early payroll and GDP numbers don’t mean much to me unless there is reason to believe they’ll affect the Fed’s behavior.

This 74,000 number means almost nothing to me.

Categories: Uncategorized

Markets dig the taper

24/Dec/2013 Comments off

I haven’t yet read what others have to say on the subject, but it seems to me that markets like the taper.

The taper shows that the Fed can produce a ‘QE-like’ effect on expectations simply by speaking. The Fed’s actions seem to have slightly boosted the outlook, though NGDP will probably still grow around 4% to 5% per year in the near future. This is to say, the Fed has still not given enough stimulus to hasten the output gap’s closing. Still, they’ve taken a step toward crafting QE-free policy, which is the biggest news in a while. Essentially, the Fed needs to find the will to voice forward guidance in a way that offsets the effect that a ‘normalizing’ monetary base would stir up.

NGDP graph

Any stimulus they can give is welcome. The media focus on the GDP number, but the GDI number from last week wasn’t especially strong, up only 4.5% from a year earlier. That’s a respectable number under the post-recession regime, but nothing particularly encouraging either.

BTW, Merry Christmas.

Links

1/Nov/2013 Comments off

Here are some links:

1. Bloomberg News’ Stephen Carter wants to disperse the Washington, DC office infrastructure across the country, to break the groupthink and hobble cronyism. This is a good idea. I’ve long thought we should move the capital to Kansas City, or somewhere else in the middle of the country. Scattering the various central offices makes even more sense though, both from a political economy perspective and as a way of more equitably sharing the loot. It is unfair that Virginia and Maryland benefit so much from ‘Mordor on the Potomac’. Of course, if we get into what’s fair and unfair…someone might notice that Vermont has as many senators as Texas, and that wouldn’t be good.

2. Lars Svensson on a Swedish economics TV program. No subtitles, so probably only of interest to that 20% of my readership which speaks some version of Scandinavian. They get to the Riksbank only briefly (around 21 minutes in), which is a shame, as that’s when it gets good. Highlight of the program: “min inställning är att penningpolitiken ska följa riksbankslagen” (“my view is that monetary policy should follow the Riksbank Act”). Amen. At another point, Svensson lays down the law, calmly explaining that even small countries can steer their own nominal ships so long as they have a flexible exchange rate.

3. The Dollar Survives Ted Cruz. A post by Christopher Mahoney at Capitalism and Freedom. He points out that, no, the dollar isn’t about to lose its special status. Rates are still ultra-low next to near-term NGDP forecasts and, as the post points out, have been trending downward since September despite the shutdown.

4. Huffington Post: 15 Ways The United States Is the Best at Being the Worst. I share this just to give my readers a chance to hone their bullshit-smelling skills. America certainly has a lot of problems, and probably not as bright a future as, say, Australia or Canada (what I’d do to live in Toronto), but this post is misleading. It’s filled with faulty premises and statistical shenanigans. See if you can spot them.

Categories: links

Fama’s ideas

19/Oct/2013 Comments off

Check out this 2010 paper: My Life in Finance by Eugene Fama. It’s a good overview of everything Fama’s done, written by the man himself. Basically, it’s a reading list for me for the next year.

The paper makes me realize how little I know about finance as opposed to the related field of international macro, where I can always fall back on MV=PY, AS/AD, and the EMH when things get murky, and come out with sounder conclusions than those stuck in the ‘interest rates’ -> ‘change in growth rates of real variables’ -> ‘inflation’ paradigm.

One bit which caught my eye was Fama’s pointing out that finance has known about ‘fat tails’ for 50 years. I think all this Nassim Taleb ranting and raving about “Gaussian” this and “Platonic” that is a bit overdone (to say nothing of his war on Dawkins and Pinker). You can use GMM to fit credit models to macro data with nasty residuals. Then the issue of fat tails comes down to how imaginative you can be when feeding a stress scenario into said model. This is just what the BOE and Fed are doing these days, so hypothetically we’ve got the bailout issue under reasonable control.

I’m rambling now, so I’ll close by again urging you to read the Fama overview paper: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1553244  (click the ‘Download this paper’ button in the middle left)

 

Categories: EMH

Long live the shutdown

3/Oct/2013 Comments off

The downside of the partial government shutdown in America is that you can’t get permission to do a number of activities. For example, if you wanted to start a brewery and did so without the right say-so, men would come to your house, shoot your dog, throw you in a truck, and take you away to be locked in a cage. Right now you can’t get that permission slip for love nor money, but I suspect the downside still holds, so no new breweries.

Besides the regulatory freeze, I can’t see too many other downsides to the shutdown. My hope is that the government will stay ‘closed’ as long as possible. Order will emerge.

With enough overnight cuts and asset sales, they should be able to dodge default. The emergency oil reserves should yield cash right away. Same with the gold at the New York Fed and Fort Knox. Why can’t the Treasury sell the national parks to Hollywood types who’d hand the properties over to the Sierra Club? Yellowstone and Glacier National Park have got to be worth a few billion, and I’d think rich leftists could manage them better. Oil leases come to mind for medium-term cash. The Fed can of course smooth out the demand-side effect of spending cuts through the usual threats, just as it did with the fiscal cliff.

Although the first BLS payroll employment print is next to useless, I’d still like to see it. Hence, I hope the BLS will soon have a bake sale. From what I hear there are plenty of surplus staff to man such entrepreneurial endeavors. If they’d add a PayPal button to their website, I’d gladly donate.

Great theater.

Categories: rant, troll, Uncategorized

Simulating NGDPLT with measurement error

23/Sep/2013 1 comment

I don’t think measurement error is much of a hurdle to effective NGDP level targeting. It seems obvious to me, but the ‘moving target’ critique is nonetheless common. We could argue about the logic all day; the only way to put it to bed is to run some simulations and see whether measurement error really is such a big deal.

I’ve taken a first stab at this.

I’ve found that NGDP stability under a level targeting policy rule is not much affected by the measurement error variance, when using a zero mean normal density to model the error. My simulations also suggest that price level targeting is a poor substitute for NGDPLT, if you care about NGDP stability.

First I’ll tell you how the simulation is set up, and then I’ll show you some histograms which summarize the ‘potential histories’ the simulation made.

How the simulation works

At the simulation’s heart is a factor-augmented VAR. You could just as well use a New Keynesian ‘three equation’ model, or whatever you like. All that matters is that you have a model which describes NGDP, prices, and monetary policy. NGDP and prices should move together when policy is eased or tightened.

The VAR has six lags on the vector:

D_t = \begin{bmatrix}  \Delta NGDP_{t-1} \\[0.3em]  \Delta P_{t-1}\\[0.3em]  \Delta Score.1_t  \end{bmatrix}

Where NGDP is the log of the average of NGDP and NGDI, P is the log of the personal consumption expenditures index less food and energy, and Score.1 is the first principal component from the following: S&P 100, S&P 500, the dollar index, five-year yield, five-year TIPS spread, three-month copper futures, front-month WTI futures, and the spread between the five-year yield and the 12-month yield.
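For concreteness, here is a minimal sketch of how data for such a FAVAR could be assembled and the six-lag VAR fitted, in Python with statsmodels. The function and variable names are mine, not the actual code behind this post, and the inputs are assumed to be quarterly pandas series.

```python
# A hedged sketch, not the author's code: build Score.1 as the first principal
# component of the financial series and fit a six-lag VAR on
# [d log NGDP, d log P, d Score.1].
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

def build_favar_data(ngdp, ngdi, core_pce, financial_df):
    # Score.1: first principal component of the standardized financial indicators
    z = (financial_df - financial_df.mean()) / financial_df.std()
    _, _, vt = np.linalg.svd(z.values, full_matrices=False)
    score1 = pd.Series(z.values @ vt[0], index=financial_df.index)

    log_ngdp = np.log((ngdp + ngdi) / 2)   # log of the NGDP / NGDI average
    log_p = np.log(core_pce)               # log core PCE price index (ex food and energy)
    data = pd.concat([log_ngdp.diff(), log_p.diff(), score1.diff()], axis=1).dropna()
    data.columns = ["d_ngdp", "d_p", "d_score1"]
    return data

# favar = VAR(build_favar_data(ngdp, ngdi, core_pce, fin)).fit(6)  # six lags
```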

I picked the number of lags based on what seemed to give the best response to Score.1 shocks. This model seems to be pretty good at making Great Recessions. Here is what happens when I dump a big negative shock into the Score.1_t equation and solve forward:

a shock to the model

In the first step a measurement error (\epsilon_t ) is drawn from a random number generator.

\epsilon_t \sim N(0,\sigma^{\epsilon})

This \sigma^\epsilon is the key to the whole thing. It measures how volatile NGDP measurement error is.
I thought it’d be reasonable for measurement error to decay with time. So after \epsilon_t is drawn, it is loaded into a vector of earlier measurement error draws (in the first quarter of the simulation I made seed values for these). In each new quarter of the simulation, the measurement errors are moved back a spot in the vector and shrunk by half. Only the four latest measurement errors are kept; after four quarters I assume NGDP is fully visible.

E_t = \begin{bmatrix}  .125\epsilon_{t-3} \\[0.3em]  .25\epsilon_{t-2} \\[0.3em]  .5\epsilon_{t-1} \\[0.3em]  \epsilon_{t}   \end{bmatrix}

These values are added to the four latest NGDP levels to make the Fed’s life hard. The Fed tends to see more or less the true level of recent NGDP, but the error could be enough to meaningfully change the recent growth trend, leading to a policy misstep.
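Here is a small sketch of that error process, again in Python; the helper names are mine, not from the code behind the post.

```python
import numpy as np

rng = np.random.default_rng(0)

def update_errors(past_errors, sigma_eps):
    """Draw a new error; halve and shift the old ones, keeping only the latest four."""
    eps_t = rng.normal(0.0, sigma_eps)
    # [.25*e_{t-3}, .5*e_{t-2}, e_{t-1}] -> [.125*e_{t-3}, .25*e_{t-2}, .5*e_{t-1}, e_t]
    return np.append(0.5 * np.asarray(past_errors)[-3:], eps_t)

def fed_view_of_ngdp(true_log_ngdp_last4, errors):
    """The four latest log NGDP levels as the Fed sees them: truth plus decaying error."""
    return np.asarray(true_log_ngdp_last4) + errors
```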

The Fed uses the FAVAR to forecast NGDP a year ahead, but using the four mismeasured NGDP lags. And because it would be unfair to give the Fed the ‘true economy model’ represented by the FAVAR, I also add a random forecast error (\epsilon_t^F ) to the NGDP forecast before the Fed ‘sees’ it.

\epsilon_t^F \sim N(0, \sigma^F)

The Fed sets monetary policy using this NGDP forecast, which might be a good forecast or might not, depending on the combined effects of the forecast and measurement errors in a given quarter. The Fed compares the forecasted level of NGDP with the target for that quarter. The target is given by a 4% yearly trend line running from 2013Q2 NGDP forward 30 quarters. If the forecast is above this line, the Fed dumps a tiny negative shock into the score.1 equation of its FAVAR and runs its forecast again (with the same forecast error and NGDP measurement errors as in the first run), checking whether the forecast is now within rounding error of the target. It repeats this process, making the ‘shock’ to financial markets a little bigger each iteration, until it finds the financial shock which puts policy right on target. If the first forecast was below target, the process works the same way, except the Fed looks for the right-sized upside shock to bring expected growth up to target.
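That search can be sketched as a simple loop. Here `forecast_ngdp` stands in for a hypothetical function that reruns the FAVAR year-ahead forecast for a given financial shock, holding the quarter’s measurement and forecast errors fixed; this is an illustration of the logic, not the actual code.

```python
def find_policy_shock(forecast_ngdp, target_log_ngdp, tol=1e-4, step=1e-4, max_iter=100_000):
    """Grow the shock to score.1, a little each iteration, until the year-ahead
    NGDP forecast is within rounding error of the target level."""
    direction = -1.0 if forecast_ngdp(0.0) > target_log_ngdp else 1.0  # tighten if above target
    shock = 0.0
    for _ in range(max_iter):
        if abs(forecast_ngdp(shock) - target_log_ngdp) <= tol:
            break
        shock += direction * step   # a little bigger each iteration
    return shock
```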

So score.1 is the policy instrument. I think this is a useful way to model monetary policy, because it potentially includes everything the Fed can do to affect expectations: threats, promises, shows of cluelessness, QE and interest rate changes.

The Fed’s financial shock is dumped into the true FAVAR and solved forward using the true NGDP values. Random draws from the empirical residuals for each equation in the model are also added. The output from this single quarter solve is treated as new data D_{t+1} and added to the matrix which stores the D_t vectors.

The Fed has now done one quarter’s worth of monetary policy. Next it begins again, in the new quarter, with a new NGDP measurement error, making a new year-ahead forecast, being befuddled by a new random forecast error, and finding a new upward or downward push to financial markets which it thinks will lead to on-target NGDP growth. This process is repeated for 30 quarters, and then the simulation ends.
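Putting the pieces together, one simulated 30-quarter history looks roughly like the skeleton below. The `favar` object, with its `forecast_year_ahead` and `step` methods, is a placeholder for the fitted model, not an actual API; it relies on the hypothetical helpers sketched above.

```python
def simulate_history(favar, log_ngdp0, sigma_eps, sigma_f, quarters=30, trend=0.04):
    errors = np.zeros(4)                 # seed measurement errors
    history = [log_ngdp0]
    for q in range(quarters):
        errors = update_errors(errors, sigma_eps)
        forecast_error = rng.normal(0.0, sigma_f)

        # Year-ahead forecast built from the mismeasured recent NGDP, plus forecast error
        def forecast(shock):
            return favar.forecast_year_ahead(history, errors, shock) + forecast_error

        # Target level at the forecast horizon: 4% yearly trend from the 2013Q2 level
        target = log_ngdp0 + (trend / 4) * (q + 4)

        shock = find_policy_shock(forecast, target)
        # The true economy responds to the chosen shock, with residual draws added inside step()
        history.append(favar.step(history, shock))
    return np.array(history)
```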

Some graphs

I ran this simulation under a few different parameter values for \sigma^\epsilon , keeping \sigma^F=0.001 (the standard deviation of forecast error). To see how it might stack up against the alternative of price level targeting (which I thought would be the most charitable alternative), I ran a slight variation of the simulation using a 2% per year level target for P_t. This simulation was more or less the same as the one outlined above, but instead of targeting NGDP, the Fed makes adjustments to hit the price level target. NGDP measurement error is still present in the PLT simulation, so insofar as NGDP is useful for forecasting prices, the measurement error will lead that forecast astray.

Here are two sample runs of the NGDPLT rule and the PLT rule with \sigma^\epsilon = .005:

A run of NGDPLT

A run of PLT

Here are the results of those simulations (batches of 500 runs) in histogram format. The variable shown in the histograms is the correlation coefficient of log NGDP and a linear sequence (1, 2, …, 30). If the NGDP growth rate were the same every quarter in the 30-quarter simulation (perfect stability), this correlation coefficient would be 1.0 (exponentials become linear in logs). Using this measure makes it fair to compare NGDP stability under the NGDPLT regime and the PLT regime because it doesn’t force PLT to follow a particular NGDP level path; it just evaluates how steady the growth rate is. If the 2% PLT led to perfect 5% NGDP growth, the correlation would be 1.0, just as it would be under perfect 4% NGDP growth.
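The metric itself is just a correlation with a time trend, something like:

```python
def stability_score(log_ngdp_path):
    """Correlation of log NGDP with a linear trend; 1.0 means perfectly steady
    growth, whatever the growth rate happens to be."""
    t = np.arange(1, len(log_ngdp_path) + 1)
    return np.corrcoef(log_ngdp_path, t)[0, 1]
```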

\sigma^\epsilon = .001
historgram_point001

\sigma^\epsilon = .002

historgram_point002

\sigma^\epsilon = .01

historgram_point005

You can see that at each measurement error setting, NGDPLT leads to much higher odds of NGDP stability. Not perfect stability, but there is a tight central tendency under NGDPLT, whereas PLT is widely spread. Interestingly, as NGDP measurement error goes up, PLT gives worse and worse results. This is because, in this tiny model, NGDP is an important predictor of the price level. As you lose information about the recent NGDP trend, you lose information about future prices, at least in this setup.

In case anyone thinks it unfair that I set \sigma^F=.001 for both the price level forecast and the NGDP forecast, I ran another batch of simulations with forecast error turned off for the price level case and \sigma^\epsilon =0.001. I then cranked \sigma^\epsilon up to 0.01 for the NGDPLT case. Here is the result:

histogrammixed

Maybe you think NGDP is actually not that useful for forecasting prices. In that case, let’s drop that ‘control’ experiment and just look at how NGDP stability changes with increasing measurement error. Here is how the different measurement error settings look for NGDPLT alone. I gave up trying to get Greek letters into the legend; “low” through “highest” stand for \sigma^\epsilon = .001, .002, .005, .01.

histogram_all_ngdplt

I didn’t do anything tricky to get these results. I went straight from (1) finding a VAR that looked like it had reasonable responses to financial shocks to (2) building the simulation functions around that VAR. These are the first outputs I got.

The simulation is hardly perfect. ‘Macroeconomic models are toys; sometimes toys are useful.’ In reality, if the Fed announced an NGDP level target, they wouldn’t need to manipulate financial markets like I’ve done here. The Chuck Norris effect would do most of the work, and the Fed might only need to make small adjustments to monetary base growth here and there to maintain credibility. However, my approach gets at the underlying logic: the Fed has a communication tool, and NGDP and prices respond to that tool.

I could try a few different types of models and see if the results are sensitive to my choice of the FAVAR. If you like, suggest a model specification in the comments (link to a paper maybe) and I’ll consider rerunning the simulation with a different model. I’m confident something like this result will show up. If you want steady NGDP, what’s the best way to get it? To try to stabilize NGDP? Or to…do something else? I realize that life is full of counterintuitive ways of getting things done, so if there is some trick to getting stable NGDP (target the price of beans), then let’s do that, but let’s do that because we want stable NGDP.
