The "Economists Monkeying With The CPI" Debate
A popular belief in hard money internet commentary is that economists are conspiring to lower reported inflation. The alleged payoff is to mislead voters and to reduce the cost-of-living adjustments paid by the government. Admittedly, there are cases in emerging markets where there is widespread skepticism about government inflation statistics, and the official numbers do not appear to align with market data. As such, I label this a “misunderstanding” rather than an outright myth, as I accept that CPI numbers probably have been fudged somewhere. However, the usual case in developed countries is that statistical agencies are transparent about their methodologies, and the complaints are bad-faith misinterpretations of those methodologies.
I started in finance in the late 1990s, and there were various debates about inflation data at the time that were argued in good faith. Since then, various points have been recycled in market and economic commentary, but any attempt to argue in good faith has been dropped. One formulation that remains popular is “if they measured CPI the same way they did in the 1970s, it would be X% higher.” This is repeated so often that the people making the statement rarely bother offering a source for it.
In the United States, I see three primary roots of the theory that economists are manipulating CPI inflation dramatically lower, which mainly date to the 1990s.
- The first revolved around the Boskin Commission, which argued that CPI was overstating inflation and pushed for calculation changes.
- The second was the use of quality adjustments. Section 1.2 gave a basic introduction to the notion of quality adjustments. The concern in the 1990s involved the rapid deflation of computer prices. For both this and the previous point, there was a legitimate debate, and it is entirely possible that the critics were at least somewhat correct.
- The third was the change in the treatment of shelter in the CPI (house prices versus rents), which took place in 1983.
I discuss the first two points in this section; I defer shelter costs to Section 4.3, since that topic is more complex. To summarise the issue, the Bureau of Labor Statistics (BLS) changed the calculation for shelter, removing house prices and replacing them with rents. (A Council of Economic Advisors article in the references discusses this.) Even if one disagrees with the use of rents instead of house prices, the reason why this change lowered reported CPI is that there has been a generational bull market in house prices since 1983 (aided by the collapse in mortgage rates). Unless one believes that house prices can outstrip rents forever, the change will have a mixed effect on the CPI over time (as seen in the Financial Crisis, house prices can fall).
Statistical agencies in other countries have made similar changes over the years, and so the American arguments get picked up with local colour elsewhere.
The Boskin Commission
The Boskin Commission report was written by Michael J. Boskin, E. Dulberger, R. Gordon, Z. Griliches, and D. Jorgenson, commissioned by the United States Senate Finance Committee in 1996. The report argued that CPI calculations overstated inflation by 1.1% per year. The report did not have a research budget, which meant that it was put together by surveying the existing literature, in which different authors offered different estimates of the bias.
The report was controversial, particularly because of some of its analysis of fiscal policy that is not going to be addressed herein. (Long-term fiscal projections made in 1996 are not worth spending time on in 2023, other than for retrospectives of fiscal projection methodology.) Economists largely agree that the fixed-weight system used in the CPI at the time meant that it acted as an upper bound for an idealised cost-of-living index. That said, the magnitude of the bias was uncertain. Beyond the weighting concerns, there might be things like the structural degradation of services over time that is not captured in the CPI (and thus the CPI understates the true rise in the cost of living). Finally, there was considerable annoyance in some quarters towards anything that reduces the measured CPI. (“Boskinised” was a popular epithet in the soon-to-be-launched American inflation-linked bond (TIPS) market.)
I do not see the proposition that there might be qualitative factors that bias the CPI in one direction or the other as being too controversial; all we can do is try to find ways of capturing those qualitative factors. I will instead just look at the calculation bias dispute.
The Boskin report gave a point estimate of the bias of 1.1% per year, with a plausible range of 0.8%-1.6%. About half of the bias was attributed to the use of fixed consumption weights (a “Laspeyres index”); a small numerical sketch of this substitution bias follows the list below. The sources of the bias for the point estimate were broken down as follows:
- 0.15% was upper-level substitution. This is substitution across consumption categories in response to price changes.
- 0.25% was lower-level substitution, which is substitution within a narrow category like “apples” (e.g., switching consumption to cheaper types of apples).
- 0.10% was outlet substitution – for example, going to a discounter.
- 0.60% was due to new products and quality changes.
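To make the substitution bias concrete, the following is a minimal numerical sketch with two goods and invented prices and quantities. (The official chained CPI uses a Törnqvist formula; the Fisher index below plays the same role of a “superlative” index that accounts for substitution.)

```python
# Toy two-good example of substitution bias (all numbers are invented).
p0 = {"apples": 1.00, "oranges": 1.00}   # base-period prices
q0 = {"apples": 10.0, "oranges": 10.0}   # base-period quantities

p1 = {"apples": 1.00, "oranges": 1.50}   # oranges get more expensive...
q1 = {"apples": 14.0, "oranges": 6.0}    # ...so consumers buy fewer of them

goods = p0.keys()

# Laspeyres index: prices the *old* basket at new prices, ignoring substitution.
laspeyres = sum(p1[g] * q0[g] for g in goods) / sum(p0[g] * q0[g] for g in goods)

# Paasche index: prices the *new* basket at both sets of prices.
paasche = sum(p1[g] * q1[g] for g in goods) / sum(p0[g] * q1[g] for g in goods)

# Fisher index: geometric mean of the two, which accounts for substitution.
fisher = (laspeyres * paasche) ** 0.5

print(f"Laspeyres inflation: {100 * (laspeyres - 1):.1f}%")  # 25.0%
print(f"Paasche inflation:   {100 * (paasche - 1):.1f}%")    # 15.0%
print(f"Fisher inflation:    {100 * (fisher - 1):.1f}%")     # about 19.9%
```

In this exaggerated example, the fixed-weight index overstates inflation by roughly five percentage points relative to the substitution-aware index; the Boskin estimates above imply that the real-world gap for the aggregate CPI is a small fraction of a percentage point per year.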
John S. Greenlees wrote “The BLS Response to the Boskin Commission Report” in 2006, which offered a view of how the Boskin Report fit in with methodology changes that were already taking place at the BLS. Although the Boskin Report did push along changes, many were being considered before the report was commissioned. The General Accounting Office asked the Boskin Commission members for an update of their bias estimate in 2000, and they suggested that the changes made by the BLS had lowered the bias by 0.3% (to 0.8% per year). Robert J. Gordon wrote a retrospective in 2006 (reference below), in which he argued that the treatment of new products and quality changes still biased the CPI too high by at least 1%.
The key point to note is that the changes made to the CPI appear to have lowered reported CPI inflation by much less than 1% per year, which is far below the ludicrous estimates of the effects of methodology changes that one can find on the internet. The BLS created some other experimental CPI indices, which can be used to gauge the bias. The figure above shows how much higher the official CPI is than one of the experimental indices – the chained CPI. The gap is not close to constant – it tends to be sensitive to energy prices (which at times even causes the chained CPI to be higher than the official CPI, such as in the Financial Crisis recession). The chained CPI relies on more accurate consumption weights – but they are not available in real time, so the series needs to be revised. The existence of these revisions explains why it cannot be used as the official CPI number, as it is not a good idea to contractually fix payments based on a number that will be changed in the future.
I see little value in dumping estimates of bias on the reader, particularly since readers are in different countries and methodologies may change yet again before the reader gets to this text. Instead, the key point to note is that we can look at things like the gap between the chained CPI and the official CPI to see that the official indices are biased higher relative to indices based on what would be seen as “better” methodologies. There might be biases in the other direction, but those tend to be harder to quantify. However, since improvements to the methodology tend to lower reported inflation, we end up with the conspiracy theory that economists are monkeying with the CPI calculations to benefit the government.
Computer Prices
The previous figure shows the annual percentage change in the GDP deflator for computers. (The price index used in GDP calculations is referred to as a deflator.) If we focus on the late 1990s – when the controversy got serious – we see that the deflator was falling by more than 20% per year. Since real (inflation-adjusted) spending equals nominal spending divided by the deflator, unchanged nominal spending combined with a 20% drop in the deflator implies that real spending grew by roughly 25%. Although computer spending was not a huge weight in overall GDP, this represents a significant magnification of measured real growth.
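As a back-of-the-envelope check (using the illustrative numbers from the text, not actual data), the arithmetic is just a division:

```python
# Back-of-the-envelope deflator arithmetic (illustrative numbers only).
# Real spending = nominal spending / price deflator.
nominal_growth = 0.00    # assume nominal computer spending is unchanged
deflator_growth = -0.20  # deflator falls 20% over the year

real_growth = (1 + nominal_growth) / (1 + deflator_growth) - 1
print(f"Implied real growth: {100 * real_growth:.1f}%")  # 25.0%
```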
From a user standpoint, it was not clear that prices were falling to that extent. Back when the debate was raging, I had been buying computers periodically for both myself and family members. Although the computer components were continuously changing, the price of a pre-configured computer that was capable – but not aimed at performance-seeking game players – tended to be stuck around $1,000 (Canadian).
The stability of the price for “good enough” computers was not a magical coincidence. In the tech industry, products aimed at retail are typically sold at target price points, and so the new generation of components tended to come in at the nominal price of the previous generation (which had its price cut as the new generation took over).
The implication was that the combination of roughly unchanged prices for the latest generation and large quantitative jumps in the metrics used to measure quality (memory size, processor speed) translated into a rapidly falling deflator.
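A stylised sketch of that logic is below. The characteristics, numbers, and the simple “quality index” are all invented for illustration; actual hedonic adjustments are based on regressions of prices on many product characteristics, not a formula like this one.

```python
# Stylised quality adjustment for a computer model changeover
# (numbers and the quality metric are invented for illustration).
old_price, old_memory_mb, old_speed_mhz = 1000.0, 64.0, 300.0    # outgoing model
new_price, new_memory_mb, new_speed_mhz = 1000.0, 128.0, 450.0   # replacement model

def quality_index(memory_mb: float, speed_mhz: float) -> float:
    """Crude quality proxy: geometric mean of the measured characteristics.
    Stands in for the predicted value from a hedonic regression."""
    return (memory_mb * speed_mhz) ** 0.5

quality_ratio = quality_index(new_memory_mb, new_speed_mhz) / quality_index(old_memory_mb, old_speed_mhz)

# Constant-quality price change: the sticker price change deflated by the quality gain.
sticker_change = new_price / old_price - 1
adjusted_change = (new_price / old_price) / quality_ratio - 1

print(f"Sticker price change:          {100 * sticker_change:+.0f}%")   # +0%
print(f"Quality-adjusted price change: {100 * adjusted_change:+.0f}%")  # about -42%
```

Even though the shopper pays the same sticker price, the statistical agency records a large constant-quality price decline – which is the mechanism behind the steep declines in the computer deflator.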
European economists – particularly Germanic economists – were outraged by this situation. They argued that statisticians in the United States were cooking the books, flattering American GDP growth rates relative to European ones, as the European GDP deflators were more sedate. Given that some Austrian economists were in fact of Austrian descent, this drama made its way into Austrian financial market newsletters as well as the fledgling “economic bear website” discourse. Since statisticians referred to quality adjustments as “hedonic adjustments,” one encountered a lot of raging about “hedonics.” (Hedonic comes from the Greek word for pleasure; the quality adjustments were based on the “utility” of the changes, with “pleasure” used as a synonym for “utility.”)
This argument has faded from view, mainly because the deflator has largely calmed down. There was also the reality that some of the European critics misunderstood how GDP calculations worked, and their estimates of the effect of quality changes on real GDP growth were overstated. As such, one no longer hears much about this effect in “mainstream” economic discussion. However, memories have not completely faded within the Austrian community, and so one still encounters random rants about “quality adjustment” to this day on the internet.
Although I am not incredibly sympathetic to popular Austrian economists, I do have sympathy for the argument that the deflation in the computer deflator was misleading (although the alleged effect on real GDP was a red herring). Other than for a handful of computing-intensive tasks (like high end video games), the processors of personal computers spend most of their time idle, while they wait to react to comparatively slow-moving keyboard and mouse inputs. I have been writing on computers since the early 1990s, and other than useful features like grammar checking, the overall experience has not changed that much (so far).
For readers interested in further details on the effect of computer price deflation on growth rates, Paul Schreyer wrote an article discussing the mechanics of the calculations – full reference details are given below. The article runs through the GDP calculations and estimates bounds on the effect on the overall growth rate from the deflation of computer prices. To summarise the arguments, although it was difficult to put an exact number on the effect of differing methodologies between countries, the maximum effect was not very large.
“I can’t eat an iPad”
The computer deflation controversy was echoed by a clumsy remark in 2011 by the then-President of the New York Fed, Bill Dudley. When asked during a public meeting in Queens, New York when he had last gone grocery shopping, Dudley’s response included: “Today you can buy an iPad 2 that costs the same as an iPad 1 that is twice as powerful. You have to look at the prices of all things.” (He was referring to the Apple iPad™ tablet; the iPad 2 had been released just before the comment.) This prompted an audience response of “I can’t eat an iPad.”
This comment has been repeated innumerable times on the internet, with the message being that economists are covering up increases in the cost of living by pointing to the deflation in technology goods.
Concluding Remarks
Price indices are complicated concepts, and there is plenty of room for disagreement in how to calculate them. (In fact, some Austrian economists argue that there is no way to meaningfully calculate them.) What has happened is that historical disagreements have lived on in market commentary folklore, with the alleged effect of methodological changes vastly over-estimated.
References and Further Reading
- Bernstein, Jared, Ernie Tedeschi, and Sarah Robinson. “Housing Prices and Inflation.” U.S. Council of Economic Advisors blog (2021). URL: https://www.whitehouse.gov/cea/written-materials/2021/09/09/housing-prices-and-inflation/
- Greenlees, John S. “The BLS Response to the Boskin Commission Report.” International Productivity Monitor 12 (2006): 23. URL: https://stats.bls.gov/pir/journal/gj10.pdf
- Gordon, Robert J. “The Boskin Commission Report: A Retrospective One Decade Later.” (2006). URL: https://www.nber.org/system/files/working_papers/w12311/w12311.pdf
- Schreyer, Paul. “Computer Price Indices and International Growth and Productivity Comparisons.” Review of Income and Wealth 48.1 (2002): 15-31. A version is available at: https://www.oecd.org/sdd/productivity-stats/30669346.pdf
- Cooke, Kristina. “iPad price remark gets Fed’s Dudley an earful.” Reuters, March 11, 2011. URL: https://www.reuters.com/article/us-usa-fed-dudley-ipad-idUSTRE72A4AC20110311