Links

1/Nov/2013 Comments off

Here are some links:

1. Bloomberg News’ Stephen Carter wants to disperse the Washington DC office infrastructure across the country, to break the groupthink and hobble cronyism. This is a good idea. I’ve long thought we should move the capital to Kansas City, or otherwise to the middle of the country. Scattering the various central offices makes even more sense though, both from a political economy perspective and as a way of more equitably sharing the loot. It is unfair that Virginia and Maryland benefit so much from ‘Mordor on the Potomac’. Of course if we get into what’s fair and unfair… someone might notice that Vermont has as many senators as Texas, and that wouldn’t be good.

2. Lars Svensson on a Swedish economics TV program. No subtitles, so probably only of interest to that 20% of my readership which speaks some version of Scandinavian. They get to the Riksbank only briefly (around 21 minutes in), which is a shame as that’s when it gets good. Highlight of the program: “min inställning är att penningpolitiken ska följa riksbankslagen” (“my view is that monetary policy should follow the Riksbank Act”). Amen. At another point, Svensson lays down the law, calmly explaining that even small countries can steer their own nominal ships so long as they have a flexible exchange rate.

3. The Dollar Survives Ted Cruz. A post by Christopher Mahoney at Capitalism and Freedom. He points out that, no, the dollar isn’t about to lose its special status. Rates are still ultra-low next to near-term NGDP forecasts, and as the post points out, they have been trending downward since September, despite the shutdown.

4. Huffington Post: 15 Ways The United States is the best at being the worst. I share this just to give my readers a chance to hone their bullshit-smelling skills. America certainly has a lot of problems, and probably not as bright a future as, say, Australia or Canada (what I’d do to live in Toronto), but this post is misleading. It’s filled with faulty premises and statistical shenanigans. See if you can spot them.

Categories: links

Fama’s ideas

19/Oct/2013 Comments off

Check out this 2010 paper: My Life in Finance by Eugene Fama. It’s a good overview of everything Fama’s done, written by the man himself. Basically, it’s a reading list for me for the next year.

The paper makes me realize how little I know about finance, as opposed to the related field of international macro, where I can always fall back on MV=PY, AS/AD and the EMH when things get murky, and come out with sounder conclusions than those stuck in the ‘interest rates’ -> ‘change in rates of growth in real variables’ -> ‘inflation’ paradigm.

One bit which caught my eye was Fama’s pointing out that finance has known about ‘fat tails’ for 50 years. I think all this Nassim Taleb ranting and raving about “Gaussian” this and “Platonic” that is a bit overdone (to say nothing of his war on Dawkins and Pinker). You can use a GLM to fit credit models to macro data with nasty residuals. Then the issue of fat tails comes down to how imaginative you can be when feeding a stress scenario into said model. This is just what the BOE and Fed are doing these days, so hypothetically we’ve got the bailout issue under reasonable control.
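To make that concrete, here is a minimal sketch of the idea, assuming a hypothetical quarterly panel of default rates and macro drivers. The file, column names, and stress numbers are all mine, purely for illustration:

```python
# Fit a binomial GLM of default rates on macro variables, then feed the
# fitted model a harsh made-up scenario. Inputs here are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("loan_quarters.csv")            # hypothetical panel
X = sm.add_constant(df[["unemployment", "d_ngdp"]])
model = sm.GLM(df["default_rate"], X, family=sm.families.Binomial()).fit()

# The 'imagination' step: a stress scenario of 12% unemployment and a
# 6% fall in NGDP. The fat-tail question becomes how harsh you dare to go.
stress = pd.DataFrame({"const": [1.0], "unemployment": [12.0], "d_ngdp": [-0.06]})
print(model.predict(stress))                     # implied default rate under stress
```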

I’m rambling now, so I’ll close by again urging you to read the Fama overview paper: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1553244  (click the ‘Download this paper’ button in the middle left)

 

Categories: EMH

Long live the shutdown

3/Oct/2013 Comments off

The downside of the partial government shutdown in America is that you can’t get permission to do a number of activities. For example, if you wanted to start a brewery, and did so without the right say-so, men will come to your house, shoot your dog, throw you in a truck and take you away to be locked in a cage. Right now you can’t get that permission slip for love nor money, but I suspect the downside still holds, so no new breweries.

Besides the regulatory freeze, I can’t see too many other downsides to the shutdown. My hope is that the government will stay ‘closed’ as long as possible. Order will emerge.

With enough overnight cuts and asset sales, they should be able to dodge default. The emergency oil reserves should yield cash right away. Same with the gold at the New York Fed and Fort Knox. Why can’t the Treasury sell the national parks to Hollywood types who’d hand the properties over to the Sierra Club? Yellowstone and Glacier National Park have got to be worth a few billion, and I’d think rich leftists could manage them better. Oil leases come to mind for medium-term cash. The Fed can of course smooth out the demand-side effect of spending cuts through the usual threats, just as it did with the fiscal cliff.

Although the first BLS payroll employment print is next to useless, I’d still like to see it. Hence, I hope the BLS will soon have a bake sale. From what I hear there are plenty of surplus staff to man such entrepreneurial endeavors. If they’d add a PayPal button to their website, I’d gladly donate.

Great theater.

Categories: rant, troll, Uncategorized

Simulating NGDPLT with measurement error

23/Sep/2013 1 comment

I don’t think measurement error is much of a hurdle to effective NGDP level targeting. It seems obvious to me, but the ‘moving target’ critique is common. We could argue about the logic all day; the only way to put it to bed is to run some simulations and see if measurement error really is such a big deal.

I’ve taken a first stab at this.

I’ve found that NGDP stability under a level targeting policy rule is not much affected by the measurement error variance, when using a zero mean normal density to model the error. My simulations also suggest that price level targeting is a poor substitute for NGDPLT, if you care about NGDP stability.

First I’ll tell you how the simulation is set up, and then I’ll show you some histograms which summarize the ‘potential histories’ the simulation made.

How the simulation works

At the simulation’s heart is a factor augmented VAR. You could just as well use a New Keynesian ‘three equation’ model, or whatever you like. All that is important is that you have a model which describes NGDP, prices and monetary policy. NGDP and prices should move together when policy is eased or tightened.

The VAR has six lags on the vector:

D_t = \begin{bmatrix}  \Delta NGDP_t \\[0.3em]  \Delta P_t \\[0.3em]  \Delta Score.1_t  \end{bmatrix}

where NGDP is the log of the average of NGDP and NGDI, P is the log of the personal consumption expenditures index less food and energy, and Score.1 is the first principal component of the following: the S&P 100, the S&P 500, the dollar index, the five-year yield, the five-year TIPS spread, three-month copper futures, front-month WTI futures, and the spread between the five-year yield and the 12-month yield.
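For concreteness, here is a minimal sketch of how one might build Score.1 and the D_t vector. The file names and column names are my stand-ins, not the actual data files:

```python
# Build Score.1 as the first principal component of the standardized
# financial series, then stack first differences into the VAR data.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

fin = pd.read_csv("financial_series.csv", index_col=0)   # hypothetical input
fin_z = (fin - fin.mean()) / fin.std()                   # standardize first
score1 = pd.Series(PCA(n_components=1).fit_transform(fin_z)[:, 0],
                   index=fin.index)

macro = pd.read_csv("macro_series.csv", index_col=0)     # hypothetical input
ngdp = np.log((macro["NGDP"] + macro["NGDI"]) / 2)       # log of the average
p = np.log(macro["core_PCE"])                            # core PCE, logged

D = pd.concat([ngdp.diff(), p.diff(), score1.diff()], axis=1).dropna()
D.columns = ["dNGDP", "dP", "dScore1"]
```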

I picked the number of lags based on what seemed to give the best response to Score.1 shocks. This model seems to be pretty good at making Great Recessions. Here is what happens when I dump a big negative shock into the Score.1_t equation and solve forward:

[Figure: a shock to the model]
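Estimating the VAR and producing a response like the one above might look like this with statsmodels, reusing the D built in the sketch above. This is a guess at the workflow, not the actual code:

```python
# Fit the six-lag VAR and trace out responses to a Score.1 shock.
from statsmodels.tsa.api import VAR

res = VAR(D).fit(6)              # six lags, as described above
irf = res.irf(20)                # impulse responses, 20 quarters ahead
irf.plot(impulse="dScore1")      # how NGDP and prices answer a financial shock
```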

In the first step a measurement error (\epsilon_t ) is drawn from a random number generator.

\epsilon_t \sim N(0,\sigma^{\epsilon})

This \sigma^\epsilon is the key to the whole thing. It measures how volatile NGDP measurement error is.
I thought it’d be reasonable for measurement error to decay with time. So after \epsilon_t is drawn, it is loaded into a vector of earlier measurement error draws (in the first quarter of the simulation I made seed values for these). In each new quarter of the simulation, the measurement errors are moved back a spot in the vector and shrunk by half. Only the four latest measurement errors are kept; after four quarters I assume NGDP is fully visible.

E_t = \begin{bmatrix}  .125\epsilon_{t-3} \\[0.3em]  .25\epsilon_{t-2} \\[0.3em]  .5\epsilon_{t-1} \\[0.3em]  \epsilon_{t}   \end{bmatrix}

These values are added to the four latest NGDP levels to make the Fed’s life hard. The Fed tends to see more or less the true level of recent NGDP, but the error can be enough to meaningfully change the recent growth trend, leading to a policy misstep.
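Here is a minimal sketch of that decaying error vector, in the order the text describes (seed draws, then shift, halve, and append each quarter). The function name and setup are mine:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_eps = 0.005                        # one of the settings explored below

def update_errors(E, rng):
    eps = rng.normal(0.0, sigma_eps)     # this quarter's measurement error
    return np.append(0.5 * E[1:], eps)   # halve and shift the old, append new

E = rng.normal(0.0, sigma_eps, 4)        # seed values for the first quarter
E = update_errors(E, rng)                # weights decay toward .125, .25, .5, 1
# The Fed 'sees' the four latest true NGDP levels plus E.
```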

The Fed uses the FAVAR to forecast NGDP a year ahead, but with the four mismeasured NGDP lags. Also, because it is unfair to give the Fed the ‘true economy model’ represented by the FAVAR, I add a random forecast error (\epsilon_t^F ) to the NGDP forecast before the Fed ‘sees’ it.

\epsilon_t^F \sim N(0, \sigma^F)

The Fed sets monetary policy using this NGDP forecast, which might be a good forecast or might not, depending on the combined effects of the forecast and measurement errors in a given quarter. The Fed compares the forecasted level of NGDP with the target for that quarter. The target is given by a 4% yearly trend line running from 2013Q2 NGDP forward 30 quarters. If the forecast is above this line, the Fed dumps a tiny negative shock into the Score.1 equation of its FAVAR and runs its forecast again (with the same forecast error and NGDP measurement errors as in the first run), checking whether the forecast is now within rounding error of the target. It repeats this process, making the ‘shock’ to financial markets a little bigger each iteration, until it finds the financial shock which puts policy right on target. If the first forecast was below target, the process works the same, except the Fed looks for the right-sized upside shock to bring expected growth up to target.
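The iterative search might look like this. It is a sketch: `forecast` stands in for the Fed’s FAVAR year-ahead forecast with the quarter’s measurement and forecast errors held fixed, and the step size is my choice:

```python
def find_policy_shock(forecast, target, step=1e-4, tol=1e-6):
    # Push the shock in the needed direction, a little more each iteration,
    # until the forecast is within rounding error of the target.
    sign = -1.0 if forecast(0.0) > target else 1.0
    shock = 0.0
    while sign * (target - forecast(shock)) > tol:
        shock += sign * step
    return shock
```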

So Score.1 is the policy instrument. I think this is a useful way to model monetary policy, because it potentially includes everything the Fed can do to affect expectations: threats, promises, shows of cluelessness, QE and interest rate changes.

The Fed’s financial shock is dumped into the true FAVAR, which is solved forward using the true NGDP values. Random draws from the empirical residuals of each equation in the model are also added. The output of this single-quarter solve is treated as new data D_{t+1} and added to the matrix which stores the D_t vectors.

The Fed has now done one quarter’s worth of monetary policy. Next it begins again in the new quarter: drawing a new NGDP measurement error, making a new year-ahead forecast, being befuddled by a new random forecast error, and finding a new upward or downward push to financial markets which it thinks will lead to on-target NGDP growth. This repeats for 30 quarters; then the simulation ends.
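Putting the pieces together, here is a toy, self-contained version of the 30-quarter loop. The ‘true’ model here is a one-line stand-in for the FAVAR (NGDP growth responds linearly to the policy shock), every number is illustrative, and it reuses `update_errors` and `find_policy_shock` from the sketches above:

```python
rng = np.random.default_rng(1)
sigma_f = 0.001                                # forecast error s.d.
trend = 0.01                                   # ~4%/year growth, quarterly

def toy_year_ahead(level_seen, shock):
    return level_seen + 4 * trend + 2.0 * shock   # stand-in FAVAR forecast

log_ngdp = [0.0]                               # normalized log NGDP
E = rng.normal(0.0, sigma_eps, 4)              # seed measurement errors
for t in range(30):
    E = update_errors(E, rng)                  # shift, halve, new draw
    seen = log_ngdp[-1] + E[-1]                # mismeasured latest level
    eps_f = rng.normal(0.0, sigma_f)           # this quarter's forecast error
    target = (t + 5) * trend                   # year-ahead point on the path
    shock = find_policy_shock(lambda s: toy_year_ahead(seen, s) + eps_f, target)
    # Solve the 'true' model one quarter forward, with a residual draw.
    log_ngdp.append(log_ngdp[-1] + trend + 0.5 * shock + rng.normal(0, 0.002))
```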

Some graphs

I ran this simulation under a few different parameter values for \sigma^\epsilon , keeping \sigma^F = 0.001 (the standard deviation of the forecast error). To see how NGDPLT might stack up against the alternative of price level targeting (which I thought would be the most charitable alternative), I ran a slight variation of the simulation using a 2%-per-year level target for P_t. This simulation was more or less the same as the one outlined above, but instead of targeting NGDP, the Fed makes adjustments to hit the price level target. NGDP measurement error is still present in the PLT simulation, so insofar as NGDP is useful for forecasting prices, the measurement error will lead that forecast astray.

Here are two sample runs of the NGDPLT rule and the PLT rule with \sigma^\epsilon = .005:

[Figure: a run of NGDPLT]

[Figure: a run of PLT]

Here are the results of those simulations (batches of 500 runs) in histogram form. The variable shown in the histograms is the correlation coefficient between log NGDP and a linear sequence (1, 2, …, 30). If the NGDP growth rate were the same every quarter of the 30-quarter simulation (perfect stability), this correlation coefficient would be 1.0 (exponentials become linear in logs). Using this measure makes it fair to compare NGDP stability under the NGDPLT regime and the PLT regime, because it doesn’t force PLT to follow a particular NGDP level path; it just evaluates how steady the growth rate is. If the 2% PLT led to perfect 5% NGDP growth, the correlation would be 1.0, just as it would be under perfect 4% NGDP growth.
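As a quick sanity check on the metric (my illustration, not from the simulation): constant growth makes log NGDP exactly linear in the quarter index, so the correlation is 1.0.

```python
import numpy as np

q = np.arange(1, 31)
log_ngdp = np.log(100 * 1.01 ** q)       # perfectly steady 1%-per-quarter NGDP
print(np.corrcoef(log_ngdp, q)[0, 1])    # -> 1.0 (up to floating point)
```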

\sigma^\epsilon = .001
[histogram]

\sigma^\epsilon = .002

[histogram]

\sigma^\epsilon = .005

[histogram]

You can see that at each measurement error setting, NGDPLT leads to much higher odds of NGDP stability. Not perfect stability, but there is a tight central tendency under NGDPLT, whereas PLT is widely spread. Interestingly, as NGDP measurement error goes up, PLT gives worse and worse results. This is because, in this tiny model, NGDP is an important predictor of the price level. As you lose information about the recent NGDP trend, you lose information about future prices, at least in this setup.

In case anyone thinks it unfair that I set \sigma^F = .001 for both the price level forecast and the NGDP forecast, I ran another batch of simulations with the forecast error turned off for the price level case and \sigma^\epsilon = 0.001. I then cranked \sigma^\epsilon up to 0.01 for the NGDPLT case. Here is the result:

[histogram]

Maybe you think NGDP is actually not that useful for forecasting prices. In that case, let’s drop that ‘control’ experiment and just look at how NGDP stability changes with increasing measurement error. Here is how the different measurement error settings look for NGDPLT alone. I gave up trying to get Greek letters into the legend; “Low” through “Highest” stand for \sigma^\epsilon = .001, .002, .005, .01.

[histogram]

I didn’t do anything tricky to get these results. I went straight from (1) finding a VAR that looked like it had reasonable responses to financial shocks to (2) building the simulation functions around that VAR. These are the first outputs I got.

The simulation is hardly perfect. ‘Macroeconomic models are toys; sometimes toys are useful.’ In reality, if the Fed announced an NGDP level target, it wouldn’t need to manipulate financial markets like I’ve done here. The Chuck Norris effect would do most of the work, and the Fed might only need to make small adjustments to monetary base growth here and there to maintain credibility. However, my approach gets at the underlying logic: the Fed has a communication tool, and NGDP and prices respond to that tool.

I could try a few different types of models and see if the results are sensitive to my choice of the FAVAR. If you like, suggest a model specification in the comments (link to a paper, maybe) and I’ll consider rerunning the simulation with a different model. I’m confident something like this result will show up. If you want steady NGDP, what’s the best way to get it? To try to stabilize NGDP? Or to… do something else? I realize that life is full of counterintuitive ways of getting things done, so if there is some trick to getting stable NGDP (target the price of beans), then let’s do that, but let’s do it because we want stable NGDP.

They keep trying to find bubbles

There’s an article on Bloomberg worth nitpicking: Asset Bubbles Found by Finnish Economist Inspired by Grandfather.

Here is a quote to give you the gist of what Dr. Taipalus (the economist the article is about, it’s a bit of a personality piece) has done:

Feed in dividend yields and stock indexes, and Taipalus’s indicator signals every major U.S. stock-price bubble since 1871. Input rent indexes and house prices, and it signals when increases in the cost of homes are becoming unhinged.

This is quite a statement, and as I’m an unrepentant market fundamentalist, I take issue with it. I don’t doubt that one could build an indicator which looks like it predicts past stock and housing market downswings. What I do doubt is said indicator’s ability to offer any useful guidance outside the sample its inputs were calibrated to fit. There are a few thousand people working in hedge funds, people with massive brains, grinding away with data mining, Bayesian methods and good old-fashioned logic and research. These people live to find ways to foretell a drop in stocks. I guarantee you they’ve already baked all the information Taipalus is dealing with into equity prices.

A George Soros quote comes to mind; it goes something like this: ‘Imagine I had a model which forecasted stock prices. As soon as that model went public, prices would adapt to it and it would stop working.’ He was talking about his ‘theory of reflexivity’, the particulars of which I’ve forgotten, but I often think of the quote when I hear people claim to have some rule for predicting stocks.

As soon as traders find a market inefficiency, the inefficiency ceases to exist; otherwise anyone could get rich by exploiting it. It’s a waste of time to keep going in circles with the EMH like this. Even if Taipalus got rich trading on her indicator (the only way I’d believe it really ‘worked’), publishing the indicator’s methodology makes it useless.

Should we be surprised the blaggards at the European Central Bank [who supported the indicator's development] keep getting things wrong?

No Summers!

15/Sep/2013 Comments off
Categories: Uncategorized

America could stand higher oil taxes

8/Sep/2013 Comments off

I’d like to pass along some things I’ve learned about Arctic ice cover, and then share an idea for an oil tax system I’ve thought up.

Some of you saw headlines like this one from the redoubtable Daily Mail: Record return of Arctic ice cap as it grows by 60% in a year. The article could lead one to believe that ice cover is generally widening in the Arctic. This fits my bias, as I am instinctively against the self-righteous progressive types and true believers who fume about the warming earth.

Still, I’m not entitled to my own facts. Apparently the recent bounce in Arctic sea ice cover is not clear evidence of a cooling Arctic, at least not yet. I hesitantly shared the above Daily Mail link on a social medium and was freed from my ignorance by an old classmate. Here is a link you should look at: Arctic Sea Ice News and Analysis, and in particular this plot:

[Figure: Arctic sea ice extent]

The graph shows that 2013 ice coverage is still low; 2012 was just a particularly bad year. Total ice cover is still something like 25% less than it was in 1980. How bad is the overall situation? Is man really to blame for the warming? Robert Pindyck was on Russ Roberts’s podcast some weeks back. He makes a good case that humans are driving climate change, and that it is a big enough risk that politicians should try harder to come together on a world carbon tax framework. I don’t know what to make of that, but you should listen to the podcast on your next hour-long car or train trip.

I’m not too worried about global warming. Not because I think it isn’t ‘real’ or caused by humans (I’m not qualified to say), but because I’ve heard remarkable and convincing claims about what can be done with cattle. If man’s CO2 output really is to blame, we can undo that damage cheaply. Apparently we can suck huge amounts of carbon from the air and put it into the earth simply by managing huge herds of hoofed, grass-eating animals. This is along Freeman Dyson’s line of thinking, but instead of growing trees, we improve the productivity of grassland. This is a topic in and of itself, but I’d point you to two lectures by a former Rhodesian named Allan Savory: a link to the short one and a link to the longer one. The gist is that rather than having your hoofed animals wander about, eating the choice grasses, you keep them bunched together and moving often. I can’t recommend the Savory lectures enough. I do more than an average amount of reading on grass-fed cattle, and I’ve seen several ranchers claim to have added measurable amounts of topsoil to their lands in a few years’ time; that topsoil is sunken carbon.

Now to oil and my tax scheme.

American oil production is still rising briskly. At this point, going back to the old peak, and maybe a bit beyond, seems plausible.

[Figure: U.S. oil production]

If you dig into the EIA data, you’ll see that the rising output is driven by gains in many states; it’s not only North Dakota. This is great news in the near term, and I hope U.S. oil production continues to rise. It also seems obvious to me that the Keystone pipeline should be built (though that wouldn’t be so great for Obama’s railroad-owning plutocrat friends). However, the reasonable, conservative response to the rise in oil output (which is not isolated to the U.S.) is a well-thought-out, revenue-neutral tax. I would hold this view regardless of climate change, as such a tax is a realistic way to get better monetary and tax policy.

First a bit on why it is wise to tax oil, regardless of domestic production trends.

It is generally a bad idea to tax something just because we can show on paper that doing so would dull the effects of a market failure. From a certain low-complexity perspective, we should probably tax wasteful, externality-ridden activities like formal education, the Olympics, take-home alcohol, and ‘news’-supplying media companies. However, that list would quickly become long, and we’d soon be living in a Permit Raj. Not good. Also, new laws and taxes open the door to Baptist-and-bootlegger dynamics, and risk sprouting groups which will fight future reforms. Still, we may wish to trade a bit of regulatory simplicity to iron out some of the bigger market wrinkles with special taxes.

Oil demand is fairly price-inelastic in the medium run. This makes oil taxation a great way to suck resources out of the private sector without damaging the economy much. You may object to said sucking of resources, but the sad fact is that the state is going to find a way to take 40% of GDP. It is best if said taxation is done so as to minimize the unproductive response by the taxed. If the state mandates that the price of gasoline be $6 per gallon, people answer by using less gasoline, but there are only so many ways to do that on a one-to-five-year time frame. As a result, this tax ends up looking a lot like a (possibly regressive) consumption tax. This means the tax won’t discourage saving or earning a higher income the way a progressive income tax would. Another bonus: because you can put the tax in a “green” box, progressives can support a sensible flat consumption tax without having to call it that.

If our only goal were to raise government revenue while minimizing harm to the economy, a higher oil tax would tend to work less well over time. Americans can lower their petroleum consumption, as the last half-decade has shown.

[Figure: U.S. gasoline sales]

Efficient tax revenue is only one reason for taxing oil. Another is to make people pay something approaching the full cost of their oil consumption. You’ve heard it before: U.S. oil demand boosts the global oil price, sending money to hostile foreign governments like Iran, Saudi Arabia and Norway. The tireless work of the U.S. Navy and overall U.S. Middle East policy also keep oil prices “artificially” low by protecting foreign-owned oil firms in that hopeless region. In 1991 the U.S. shielded Saudi Arabia from Iraq and got Kuwait back online, for arguably little net benefit to U.S. citizens. In 2003 the U.S. got Iraq back into the oil markets at enormous cost, with most of the benefits accruing to whichever Iraqi clans best rob the Iraqi state oil company. Policing the seas is desirable, and maybe even the 1991 war was marginally wise (at least they left the running of Iraq to the professionals). But these policies keep the oil price cheaper than it otherwise would be: a de facto subsidy. Higher oil taxes would push actual oil consumption closer to the social optimum which would prevail if these outside costs were built into the price by the market.

Now to the big reason.

An underappreciated downside of America’s high ‘dependence’ on oil is the effect which oil price swings have on the quality of monetary policy decisions. My worry is that oil prices will fall in the medium term, undoing a lot of the ‘recalibration’ that American households have done in the wake of the 2006-onward oil price rise. We just won’t know the full scope of the fracking bonanza until production peaks. This could be in five years or thirty. If oil becomes cheap for a generation, then no big deal; let’s see if massive herds of grass-eating cows can poop out and trample down the extra carbon. If that model can be scaled up enough, then go ahead with the wasteful cars. However, if fracking manages to send the oil price down to $50 for a few years before it shoots back up, then you can bet the Fed will screw up the monetary policy response and be lauded for doing so.

I think of U.S. monetary policy as following a loose oil price peg. If oil prices go up, the sophists howl that the Fed is debasing the dollar, supply-side “rationation” (that’s rationing inflation) is confused with true demand-side inflation, and policy is tightened. The Fed has shown again and again that it will run the economy through a downturn if that’s the price it has to pay to escape blame for “pain at the pump”. Reasonable government policy would aim to gradually lower the ratio of nominal oil consumption to NGDP. For this reason, I would support higher oil taxes, regardless of the pretexts the tax is peddled under.

Here is how we might tax finished oil products: the IRS could level-target the price of retail gasoline and diesel, adjusting the Federal per-gallon tax once a year so that prices rise about 3% per year over the long term (that is, just under twice the rate of overall inflation under 4% NGDP growth). The once-a-year adjustment keeps things simple for retailers while also ensuring that consumers know price dips will be only temporary. It also gives consumers a bit of a cushion against price increases, as the tax could fall to $0 per gallon when the average retail price moves above the level target for that year. This is a meddlesome scheme; ideally it would be sold to the Democrats in exchange for a supply-side, simplifying reform.
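As a sketch of the arithmetic (the base price, base year, and function are all mine, purely to illustrate the level-target mechanics):

```python
def set_gas_tax(year, pretax_price, base_year=2013, base_target=3.50,
                growth=0.03):
    # Target price path rises 3% a year from the base-year level.
    target = base_target * (1 + growth) ** (year - base_year)
    # The per-gallon tax fills the gap, but never goes below zero.
    return max(target - pretax_price, 0.0)

# If pre-tax retail gasoline averages $3.40 in 2015, the target level is
# 3.50 * 1.03**2 ≈ $3.71, so the tax is set near $0.31 a gallon.
print(round(set_gas_tax(2015, 3.40), 2))
```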

This tax could be made politically palatable by pairing it with a phaseout of electric car subsidies, and made roughly revenue-neutral with offsetting cuts to the capital gains and corporate income taxes. Those are destructive taxes which punish virtuous behavior like investment and bias the tax code in favor of big firms that can filter their profits through Ireland. We have to tax something; why tax work and savings when we could tax a particularly wasteful and destructive form of consumption? We’d get better monetary policy and maybe even better tax policy.

Categories: Monetary Policy, Oil, Taxes