Wednesday, November 22, 2017

The Dominance of Peoria in the Processed Pumpkin Market

As I prepare for a season of pumpkin pie, pumpkin bread (made with cornmeal and pecans), pumpkin soup (especially nice with a decent champagne) and perhaps a pumpkin ice cream pie (graham cracker crust, of course), I have been mulling over why the area around Peoria, Illinois, so dominates the production of processed pumpkin.

The facts are clear enough. As the US Department of Agriculture points out (citations omitted): In 2016, farmers in the top 16 pumpkin-producing States harvested 1.1 billion pounds of pumpkins, implying about 1.4 billion pounds harvested altogether in the United States. Production increased 45 percent from 2015 largely due to a rebound in Illinois production. Illinois production, though highly variable, is six times the average of the other top eight pumpkin-producing States (Figure 2).

Not only does Illinois produce more pumpkins, but a much larger share of pumpkins from this state end up being processed, rather than used fresh. The USDA reports:
Illinois harvests the largest share of processing pumpkin acres among all States—almost 80 percent. Michigan is next with a little over 10 percent. Other States harvest less than 5 percent processing pumpkins.

It's not really the entire state of Illinois, either, but mainly an area right around Peoria. The University of Illinois extension service writes: "Eighty percent of all the pumpkins produced commercially in the U.S. are produced within a 90-mile radius of Peoria, Illinois. Most of those pumpkins are grown for processing into canned pumpkins. Ninety-five percent of the pumpkins processed in the United States are grown in Illinois. Morton, Illinois just 10 miles southeast of Peoria calls itself the 'Pumpkin Capital of the World.'"

Why does this area have such dominance? Weather and soil are part of the advantage, but it seems unlikely that the area around Peoria is dramatically distinctive for those reasons alone. This also seems to be a case where an area got a head start in a certain industry, established economies of scale and expertise, and has thus maintained its lead. The Illinois Farm Bureau writes: "Illinois earns the top rank for several reasons. Pumpkins grow well in its climate and in certain soil types. And in the 1920s, a pumpkin processing industry was established in Illinois, Babadoost [a professor at the University of Illinois] says. Decades of experience and dedicated research help Illinois maintain its edge in pumpkin production." According to one report, Libby’s Pumpkin is "the supplier of more than 85 percent of the world’s canned pumpkin."

The farm price of pumpkins varies considerably across states, which suggests that it is costly to ship substantial quantities of pumpkin across moderate distances. For example, the price of pumpkins is lowest in Illinois, where supply is highest, and the Illinois price is consistently below the price for other nearby Midwestern states. This pattern suggests that the processing plants for pumpkins are most cost-effective when located near the actual production.

While all States see year-to-year changes in price, New York stands out because prices have declined every year since 2011. Illinois growers consistently receive the lowest price because the majority of their pumpkins are sold for processing.

Finally, although my knowledge of recipes for pumpkin is considerably more extensive than my knowledge of the supply chain for processed pumpkin, it seems plausible that pumpkin is neither the most lucrative of farm products nor one with fast-growing demand, so it hasn't been worthwhile for potential competitors in the processed pumpkin market to try to establish an alternative pumpkin-producing hub somewhere else.

Tuesday, November 21, 2017

Will Artificial Intelligence Recharge Economic Growth?

There may be no more important question for the future of the US economy than whether the ongoing advances in information technology and artificial intelligence will eventually (and this "eventually" is central to their argument) translate into substantial productivity gains. Erik Brynjolfsson, Daniel Rock, and Chad Syverson make the case for optimism in "Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics" (NBER Working Paper 24001, November 2017). The paper isn't freely available online, but many readers will have access to NBER working papers through their library. The essay will eventually be part of a conference volume on The Economics of Artificial Intelligence.

Brynjolfsson, Rock, and Syverson are making several intertwined arguments. One is that various aspects of machine learning and artificial intelligence have been crossing important thresholds over the last few years, and will continue to do so over the next few. Thus, even though we tend to think of the "computer age" as having already been in place for a few decades, there is a meaningful sense in which we are about to enter another chapter. The other argument is that when a technological disruption cuts across many parts of the economy--that is, when it is a "general purpose technology" as opposed to a more focused innovation--it often takes a substantial period of time before producers and consumers fully change and adjust. In turn, this means a substantial period of time before the new technology has a meaningful effect on measured economic growth.

As one example of a new threshold in machine learning, consider image recognition. On various standardized tests for image recognition, the error rate for humans is about 5%. In just the last few years, the error rate for image-recognition algorithms has fallen below the human level--and of course the algorithms are likely to keep improving.

There is, of course, a wide array of similar examples. The authors cite one study in which an artificial intelligence system did as well as a panel of board-certified dermatologists in diagnosing skin cancer. Driverless vehicles are creeping into use. Anyone who uses translation software or software that relies on voice recognition can attest to how much better it has become in the last few years.

The authors also point to an article from the Journal of Economic Perspectives in 2015, in which Gill Pratt pointed out the potentially enormous advantages of artificial intelligence in sharing knowledge and skills. For example, translation software can be updated and improved based on how everyone uses it, not just one user. They write about Pratt's essay:
[Artificial intelligence] machines have a new capability that no biological species has: the ability to share knowledge and skills almost instantaneously with others. Specifically, the rise of cloud computing has made it significantly easier to scale up new ideas at much lower cost than before. This is an especially important development for advancing the economic impact of machine learning because it enables cloud robotics: the sharing of knowledge among robots. Once a new skill is learned by a machine in one location, it can be replicated to other machines via digital networks. Data as well as skills can be shared, increasing the amount of data that any given machine learner can use.
However, new capabilities like web-based commerce, accurate machine vision, drawing inferences, and communicating lessons don't spread immediately. The authors offer the homely example of the retail industry. The idea of online sales became practical back in the second half of the 1990s. But many of the companies founded for online sales during the dot-com boom of the late 1990s failed, and the sector of retail that expanded most after about 2000 was warehouse stores and supercenters, not online sales. Now, two decades later, online sales have almost reached 10% of total retail.

Why does it take so long? The theme that Brynjolfsson, Rock, and Syverson emphasize is that a revolution in online sales needs more than an idea. It needs innovations in warehouses, distribution, and the financial security of online commerce. It needs producers to think in terms of how they will produce, package, and ship for online sales. It needs consumers to buy into the process. It takes time. 

The notion that general purpose inventions which cut across many industries will take time to manifest their productivity gains, because of the need for complementary inventions, turns out to be a pattern that has occurred before. 

For economists, the canonical comment on this process in the last few decades is due to Robert Solow (Nobel laureate '87), who wrote in an essay in 1987, "You can see the computer age everywhere but in the productivity statistics" (“We’d better watch out,” New York Times Book Review, July 12, 1987, quotation from p. 36). After all, IBM had been producing functional computers in substantial quantities since the 1950s, but US productivity growth had been slow since the early 1970s. When the personal computer revolution, the internet, and a surge of productivity in computer chip manufacturing all hit in force in the 1990s, productivity did rise for a time. Brynjolfsson, Rock, and Syverson write:
"For example, it wasn’t until the late 1980s, more than 25 years after the invention of the integrated circuit, that the computer capital stock reached its long-run plateau at about 5 percent (at historical cost) of total nonresidential equipment capital. It was at only half that level 10 years prior. Thus, when Solow pointed out his now eponymous paradox, the computers were finally just then getting to the point where they really could be seen everywhere."
Going back in history, my favorite example of the lag it takes for inventions to diffuse broadly comes from the invention of the dynamo for generating electricity, a story first told by economic historian Paul David in a 1991 essay. David points out that large dynamos for generating electricity existed in the 1870s. However, it wasn't until the Paris World Fair of 1900 that electricity was used to illuminate the public spaces of a city. And it wasn't until the 1920s that innovations based on electricity made a large contribution to US productivity growth.

Why did it take so long for electricity to spread? Shifting production away from being powered by waterwheels to electricity was a long process, which involved rethinking, reorganizing, and relocating factories. Products that made use of electricity, like dishwashers, radios, and home appliances, could not be developed fully or marketed successfully until people had access to electricity in their homes. Large economic and social adjustments take time.

When it comes to machine learning, artificial intelligence, and economic growth, it's plausible to believe that we are closer to the front end of our economic transition than we are to the middle or the end. Some of the more likely near-term consequences mentioned by Brynjolfsson, Rock, and Syverson include an upheaval in the call center industry, which employs more than 200,000 US workers, and the ways in which automated driverless vehicles (interconnected, sharing information, and learning from each other) will directly alter one-tenth or more of US jobs. My suspicion is that the changes across products and industries will be deeper and more sweeping than I can readily imagine.

Of course, the transition to the artificial intelligence economy will have some bumps and some pain, as did the transitions to electrification and the automobile. But the rest of the world is moving ahead. And history teaches that countries which stay near the technology frontier, and face the needed social adjustments and tradeoffs along the way,  tend to be far happier with the choice in the long run than countries which hold back. 

Monday, November 20, 2017

Why Has Life Insurance Ownership Declined?

Back in the first half of the 19th century, life insurance was unpopular in the US because it was broadly considered to be a form of betting with God against your own life. After a few decades of insurance company marketing efforts, life insurance was transformed into a virtuous purchase for any good and devout husband. But in recent decades, life insurance has been in decline.

Daniel Hartley, Anna Paulson, and Katerina Powers look at recent patterns of life insurance and bring the puzzle of its decline into sharper definition in "What explains the decline in life insurance ownership?" in Economic Perspectives, published by the Federal Reserve Bank of Chicago (41:8, 2017). The story of shifting attitudes toward life insurance in the 19th-century US is told by Viviana A. Zelizer in a wonderfully thought-provoking 1978 article, "Human Values and the Market: The Case of Life Insurance and Death in 19th-Century America," American Journal of Sociology (November 1978, 84:3, pp. 591-610).

With regard to recent patterns, Hartley, Paulson, and Powers write: "Life insurance ownership has declined markedly over the past 30 years, continuing a trend that began as early as 1960. In 1989, 77 percent of households owned life insurance (see figure 1). By 2013, that share had fallen to 60 percent." In the figure, the blue line shows any life insurance, the red line shows the decline in term life, and the gray line shows the decline in cash value life insurance.


Early in the 19th century, the costs of death and funerals were largely a family and neighborhood affair. As Zelizer points out, given the attitudes of the time, life insurance was commercially unsuccessful because it was viewed as betting on death. It was widely believed that such a bet might even hasten death, with blood money being received by the life insurance beneficiary. For example, Zelizer wrote:

"Much of the opposition to life insurance resulted from the apparently speculative nature of the enterprise; the insured were seen as `betting' with their lives against the company. The instant wealth reaped by a widow who cashed her policy seemed suspiciously similar to the proceeds of a winning lottery ticket. Traditionalists upheld savings banks as a more honorable economic institution than life insurance because money was accumulated gradually and soberly. ...  A New York Life Insurance Co. newsletter (1869, p. 3) referred to the "secret fear" many customers were reluctant to confess: `the mysterious connection between insuring life and losing life.' The lists compiled by insurance companies in an effort to respond to criticism quoted their customers' apprehensions about insuring their lives: "I have a dread of it, a superstition that I may die the sooner" (United States Insurance Gazette [November 1859], p. 19). ... However, as late as the 1870s, "the old feeling that by taking out an insurance policy we do somehow challenge an interview with the 'king of terrors' still reigns in full force in many circles" (Duty and Prejudice 1870, p. 3). Insurance publications were forced to reply to these superstitious fears. They reassured their customers that "life insurance cannot affect the fact of one's death at an appointed time" (Duty and Prejudice 1870, p. 3). Sometimes they answered one magical fear with another, suggesting that not to insure was "inviting the vengeance of Providence" (Pompilly 1869). ... An Equitable Life Assurance booklet quoted wives' most prevalent objections: "Every cent of it would seem to me to be the price of your life .... it would make me miserable to think that I were to receive money by your death .... It seems to me that if [you] were to take a policy [you] would be brought home dead the next day" (June 1867, p. 3)."
However, over the course of several decades, insurance companies marketed life insurance with a message that it was actually a loving duty of a devout husband to his family. As Zelizer argues, the rituals and institutions of what society viewed as a "good death" altered. She wrote:
"From the 1830s to the 1870s life insurance companies explicitly justified their enterprise and based their sales appeal on the quasi-religious nature of their product. Far more than an investment, life insurance was a `protective shield' over the dying, and a consolation `next to that of religion itself' (Holwig 1886, p. 22). The noneconomic functions of a policy were extensive: `It can alleviate the pangs of the bereaved, cheer the heart of the widow and dry the orphans' tears. Yes, it will shed the halo of glory around the memory of him who has been gathered to the bosom of his Father and God' (Franklin 1860, p. 34). ... life insurance gradually came to be counted among the duties of a good and responsible father. As one mid-century advocate of life insurance put it, the man who dies insured and `with soul sanctified by the deed, wings his way up to the realms of the just, and is gone where the good husbands and the good fathers go' (Knapp 1851, p. 226). Economic standards were endorsed by religious leaders such as Rev. Henry Ward Beecher, who pointed out, `Once the question was: can a Christian man rightfully seek Life Assurance? That day is passed. Now the question is: can a Christian man justify himself in neglecting such a duty?' (1870)."
Zelizer's work is a useful reminder that many products, including life insurance, are not just about prices and quantities in the narrow economic sense, but are also tied to broader social and institutional patterns.  

The main focus of Hartley, Paulson, and Powers is to explore the extent to which shifts in socioeconomic and demographic factors can explain the fall in life insurance: that is, have socioeconomic or demographic groups that were less likely to buy life insurance become larger over time? However, after doing a breakdown of life insurance ownership by race/ethnicity, education level, and income level, they find that the decline in life insurance is widespread across pretty much all groups. In other words, the decline in life insurance doesn't seem to be (primarily) about socioeconomic or demographic change, but rather about other factors. They write: 
"Instead, [life insurance] ownership has decreased substantially across a wide swath of the population. Explanations for the decline in life insurance must lie in factors that influence many households rather than just a few. This means we need to look beyond the socioeconomic and demographic factors that are the focus of our analysis. A decrease in the need for life insurance due to increased life expectancy is likely to be an especially important part of the explanation. In addition, other potential factors include changes in the tax code that make the ability to lower taxes through life insurance less attractive, lower interest rates that also reduce incentives to shelter investment gains from taxes, and increases in the availability and decreases in the cost of substitutes for the investment component of cash value life insurance." 
It's intriguing to speculate about what the decline in life insurance purchases tells us about our modern attitudes and arrangements toward death, in a time of longer life expectancies, more households with two working adults, the backstops provided by Social Security and Medicare, and perhaps also shifts in how many people feel that their souls are sanctified (in either a religious or a secular sense) by the purchase of life insurance. 

Friday, November 17, 2017

Brexit: Still a Process, Not Yet a Destination

I happened to be in the United Kingdom on a long-planned family vacation on June 23, 2016, when the Brexit vote took place. At the time, I offered a stream-of-consciousness "Seven Reflections on Brexit" (June 26, 2016). But more than a year has now passed, and Thomas Sampson sums up the research on what is known and what might come next in "Brexit: The Economics of International Disintegration," which appears in the Fall 2017 issue of the Journal of Economic Perspectives.

(As regular readers know, my paying job--as opposed to my blogging hobby--is Managing Editor of the JEP. The American Economic Association has made all articles in JEP freely available, from the most recent issue back to the first. For example, you can check out the Fall 2017 issue here.)

Here's Sampson's basic description of the UK and its position in the international economy before Brexit. For me, it's one of those descriptions that doesn't use any loaded rhetoric, but nonetheless packs a punch.
"The United Kingdom is a small open economy with a comparative advantage in services that relies heavily on trade with the European Union. In 2015, the UK’s trade openness, measured by the sum of its exports and imports relative to GDP, was 0.57, compared to 0.28 for the United States and 0.86 for Germany (World Bank 2017). The EU accounted for 44 percent of UK exports and 53 percent of its imports. Total UK–EU trade was 3.2 times larger than the UK’s trade with the United States, its second-largest trade partner. UK–EU trade is substantially more important to the United Kingdom than to the EU. Exports to the EU account for 12 percent of UK GDP, whereas imports from the EU account for only 3 percent of EU GDP. Services make up 40 percent of the UK’s exports to the EU, with “Financial services” and “Other business services,” which includes management consulting and legal services, together comprising half the total. Brexit will lead to a reduction in economic integration between the United Kingdom and its main trading partner."
A substantial reduction in trade will cause problems for the UK economy. Of course, the estimates will vary according to just what model is used, and Sampson runs through the main possibilities. He summarizes in this way: 
"The main conclusion of this literature is that Brexit will make the United Kingdom poorer than it would otherwise have been because it will lead to new barriers to trade and migration between the UK and the European Union. There is considerable uncertainty over how large the costs of Brexit will be, with plausible estimates ranging between 1 and 10 percent of UK per capita income. The costs will be lower if Britain stays in the European Single Market following Brexit. Empirical estimates that incorporate the effects of trade barriers on foreign direct investment and productivity find costs 2–3 times larger than estimates obtained from quantitative trade models that hold technologies fixed."
What will come next after Brexit isn't yet clear, and may well take years to negotiate. In the meantime, the main shift seems to be that the foreign exchange rate for the pound has fallen, while inflation has risen, which means that real inflation-adjusted wages have declined. This national wage cut has helped keep Britain's industries competitive on world markets, but it's obviously not a desirable long-run solution.

But in the longer run, as the UK struggles to decide what actually comes next after Brexit, Sampson makes a distinction worth considering: Is the opposition to Brexit about national identity and taking back control, even if it makes the country poorer, or is it about renegotiating trade agreements and other legislation to do more to address the economic stresses created by globalization and technology? He writes:

"Support for Brexit came from a coalition of less-educated, older, less economically successful and more socially conservative voters who oppose immigration and feel left behind by modern life. Leaving the EU is not in the economic interest of most of these left-behind voters. However, there is currently insufficient evidence to determine whether the leave vote was primarily driven by national identity and the desire to “take back control” from the EU, or by voters scapegoating the EU for their
economic and social struggles. The former implies a fundamental opposition to deep economic and political integration, even if such opposition brings economic costs, while the later suggests Brexit and other protectionist movements could be addressed by tackling the underlying reasons for voters’ discontent."
For me, one of the political economy lessons of Brexit is that it's relatively easy to get a majority against a specific unpopular element of the status quo, while leaving open the question of what happens next. It's a lot harder to get a majority in favor of a specific change. That problem gets even harder when it comes to international agreements, because while it's easy for UK politicians to make pronouncements on what agreements the UK would prefer, trade negotiators in the EU, the US, and the rest of the world have a say, too. Sampson discusses the main post-Brexit options, and I've blogged about them in "Brexit: Getting Concrete About Next Steps" (August 2, 2016). While the process staggers along, this "small open economy with a comparative advantage in services that relies heavily on trade with the European Union" is adrift in uncertainty.

Thursday, November 16, 2017

US Wages: The Short-Term Mystery Resolved

The Great Recession ended more than eight years ago, in June 2009. The US unemployment rate declined slowly after that, but it has now been below 5.0% every month for more than two years, since September 2015. Thus, an ongoing mystery for the US economy is: Why haven't wages started to rise more quickly as the labor market conditions improved? Jay Shambaugh, Ryan Nunn, Patrick Liu, and Greg Nantz provide some factual background to address this question in "Thirteen Facts about Wage Growth," written for the Hamilton Project at the Brookings Institution (September 2017).  The second part of the report addresses the question: "How Strong Has Wage Growth Been since the Great Recession?"

For me, one surprising insight from the report is that real wage growth--that is, wage growth adjusted for inflation--has actually not been particularly slow during the most recent upswing. The upper panel of this figure shows real wage growth since the early 1980s. The horizontal lines show the growth of wages after each recession. Real wage growth in the last few years is actually higher than after the previous several recessions. The bottom panel shows nominal wage growth, that is, without any adjustment for inflation. By that measure, wage growth in recent years is lower than after the last few recessions. Thus, I suspect that one reason behind the perception of slow wage growth is that many people are focused on nominal rather than real wages.


Government statistics offer a lot of ways of measuring wage growth. The graphs above show wage growth in "real average hourly earnings for production and nonsupervisory workers," a group that covers about 100 million of the roughly 150 million US workers.

An alternative and broader approach looks at what is called the Employment Cost Index (ECI), which is based on a National Compensation Survey of employers. To adjust for inflation, I use the measure of inflation called the Personal Consumption Expenditures (PCE) price index, which covers just the personal consumption part of the economy that is presumably most relevant to workers. I also use the version of this index that strips out jumps in energy and food prices. This is the measure of the inflation rate that the Federal Reserve actually focuses on.

Economists using these measures were pointing out a couple of years ago that real wages seemed to be on the rise. The blue line shows the annual change in wages and salaries for all civilian workers, using the ECI, while the red line shows the PCE measure of inflation. The gap between the two is the real gain in wages, which you can see started to emerge in 2015.
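For concreteness, here is a minimal sketch of that deflation step in Python. The exact calculation uses a ratio rather than simple subtraction, though the two are close when rates are small; the ECI and PCE numbers below are hypothetical placeholders, not figures from the actual data releases.

```python
def real_growth(nominal_pct, inflation_pct):
    """Exact real growth rate (percent) from nominal growth and inflation."""
    return ((1 + nominal_pct / 100) / (1 + inflation_pct / 100) - 1) * 100

# Hypothetical year-over-year changes, for illustration only:
eci_wage_growth = 2.6      # wages and salaries, all civilian workers (ECI)
core_pce_inflation = 1.4   # PCE inflation excluding food and energy

print(round(real_growth(eci_wage_growth, core_pce_inflation), 2))
# ~1.18 -- close to the simple gap 2.6 - 1.4 = 1.2 between the two lines
```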

Not only has the recovery in US real wages been a bit stronger than after the other recessions of recent decades, and especially prominent in the last couple of years, but there is also good reason to believe that the wage statistics since the Great Recession may be picking up a change in the composition of the workforce that tends to make wage growth look slower. Shambaugh, Nunn, Liu, and Nantz explain (citations and footnotes omitted):
"In normal times, entrants to full-time employment have lower wages than those exiting, which tends to depress measured wage growth. During the Great Recession this effect diminished substantially when an unusual number of low-wage workers exited full-time employment and few were entering. After the Great Recession ended, the recovering economy began to pull workers back into full-time employment from part-time employment ... and nonemployment, while higher-paid, older workers left the labor force. Wage growth in the middle and later parts of the recovery fell short of the growth experienced by continuously employed workers, reflecting both the retirements of relatively high-wage workers and the reentry of workers with relatively low wages. In 2017 the effect of this shifting composition of employment remains large, at more than 1.5 percentage points. If and when growth in full-time employment slows, we can expect this effect to diminish somewhat, providing a boost to measured wage growth."
The baby boomer generation is hitting retirement and leaving the labor force, as relatively highly-paid workers at the end of their careers. New workers entering the labor force, together with low-skilled workers being drawn back into the labor force, tend to have lower wages and salaries. This makes wage growth look low--but what's happening is in part a shift in types of workers. 
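A stylized numerical example may help show how this composition effect works. The wages below are invented and the effect deliberately exaggerated, but the mechanism is the one the authors describe.

```python
# Year 1: two mid-career workers plus one high-wage worker near retirement.
year1 = [40_000, 50_000, 90_000]

# Year 2: both continuing workers get a 3 percent raise, the high-wage
# worker retires, and a low-wage entrant joins full-time employment.
year2 = [41_200, 51_500, 30_000]

avg1 = sum(year1) / len(year1)   # 60,000
avg2 = sum(year2) / len(year2)   # 40,900

print(f"Measured average wage growth: {avg2 / avg1 - 1:.1%}")
# -31.8% -- the average falls even though every continuing worker got a raise.
```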

One other fact from Shambaugh, Nunn, Liu, and Nantz is that wage growth has been strong at the bottom and the top of the wage distribution, but slower in the middle. This figure splits the wage distribution into quintiles and shows the wage growth for production and nonsupervisory workers in each.

Taking these factors together, the "mystery" of why wages haven't recovered more strongly since the end of the Great Recession appears to be resolved. However, a bigger mystery remains. Why have wages and salaries for production and nonsupervisory workers done so poorly not in the last few years, but over the last few decades?

There's a long list of potential reasons: slow productivity growth, rising inequality, dislocations from globalization and new technology, a slowdown in the rate of start-up firms, weakness of unions and collective bargaining, less geographic mobility by workers, and others. These factors have been discussed here before, and will be again, but not today. Shambaugh, Nunn, Liu, and Nantz provide some background figures and discussion of these longer-term factors, too. 

Wednesday, November 15, 2017

Rethinking Development: Larry Summers

Larry Summers delivered a speech on the subject of "Rethinking Global Development Policy for the 21st Century" at the Center for Global Development on November 8, 2017. A video of the 45-minute lecture is here. Here are a few snippets, out of many I could have chosen:

The dramatic global convergence between rich and poor
"There has been more convergence between poor people in poor countries and rich people in rich countries over the last generation than in any generation in human history. The dramatic way to say it is that between the time of Pericles and London in 1800, standards of living rose about 75 percent in 2,300 years. They called it the Industrial Revolution because for the first time in human history, standards of living were visibly and 2 meaningfully different at the end of a human lifespan than they had been at the beginning of a human lifespan, perhaps 50 percent higher during the Industrial Revolution. Fifty percent is the growth that has been achieved in a variety of six-year periods in China over the last generation and in many other countries, as well. And so if you look at material standards of living, we have seen more progress for more people and more catching up than ever before. That is not simply about things that are material and things that are reflected in GDP. ... [I]f current trends continue, with significant effort from the global community, it is reasonable to hope that in 2035 the global child mortality rate will be lower than the US child mortality rate was when my children were born in 1990. That is a staggering human achievement. It is already the case that in large parts of China, life expectancy is greater than it is in large parts of the United States." 

The marginal benefit of development aid is what is enabled, not what is funded
"I remember as a young economist who was going to be the chief economist of the World Bank sitting and talking with Stan Fischer, who was my predecessor as the chief economist of the World Bank. And we were talking, and I was new to all this. I had never done anything in the official sector. And I said, "Stan, I don't get it. If a country has five infrastructure projects and the World Bank can fund two of them, and the World Bank is going to cost-benefit analyze and the World Bank is going to do all its stuff, I would assume what the country does is show the World Bank its two best infrastructure projects, because that will be easiest, and if it gets money from the World Bank, then it does one more project, but what the World Bank is actually buying is not the project it is being shown, it is the marginal product that it is enabling. And so why do we make such a fuss of evaluating the particular quality of our projects?" And Stan listened to me. And he looked at me. He's a very wise man. And he said, "Larry, you know, it is really interesting. When I first got to the bank, I always asked questions like that." "But now I've been here for two years, and I don't ask questions like that. I just kind of think about the projects, because it is kind of too hard and too painful to ask questions like that."
Funds from developed-world governments and multilateral institutions have much less power
"[O]ur money—and I mean by that our assistance and the assistance of the multilateral institutions in which we have great leverage—is much less significant than it once was. Perhaps the best way to convey that is with a story. In 1991, when I was new to all of this, I was working as the chief economist of the World Bank, and the first really important situation in which I had any visibility at all was the Indian financial crisis that took place in the summer of 1991. And at that point, India was near the brink. It was so near the brink that, at least as I recall the story, $1 billion of gold was with great secrecy put on a ship by the Indians to be transported to London, where it could be collateral for an emergency loan that would permit the Indian government to meet its payroll at the end of the month.  And at that moment, the World Bank was in a position over the next year to lend India $3 billion in conjunction with its economic reform program. And the United States had an important role in shaping the World Bank's strategy. Well, that $3 billion was hugely important to the destiny of a sixth of humanity. Today, the World Bank would have the capacity to lend India in a year $6 billion or $7 billion. But India has $380 billion—$380 billion—in reserves dominantly invested in Treasury bills earning 1 percent. And India itself has a foreign aid budget of $5 billion or $6 billion. And so the relevance of the kind of flows that we are in a position to provide officially to major countries is simply not what it once was."
Protecting the world from pandemic flu vs. the salary of a college football coach
"[T]he current WHO budget for pandemic flu is less than the salary of the University of Michigan's football coach—not to mention any number of people who work in hedge funds. And that seems manifestly inappropriate. And we do not yet have any settled consensus on how we are going to deal with global public goods and how that is going to be funded."

Tuesday, November 14, 2017

Regional Price Parities: Comparing Cost of Living Across Cities and States

Many years ago I heard a story from a member of a committee at a midwestern university that was thinking about hiring a certain economist. The economist had an alternative offer from a southern California university that paid a couple of thousand dollars more in annual salary. The economist offered to come to the midwestern university if it would match this slightly higher salary. But the hiring committee declined to match. As the story was told to me, the hiring committee talked it over and felt: "Spending a couple of thousand dollars more isn't actually the issue. The key fact is that the cost of living is vastly higher in southern California. An economist who isn't able to recognize that fact--and thus who doesn't recognize that the lower salary actually buys a higher standard of living here in the midwest--isn't someone we want for our department."

The point is a general one. Getting a higher salary in California or New York, and then needing to pay more for housing and perhaps other costs of living as well, can easily eat up that higher salary. In fact, the Bureau of Economic Analysis now calculates Regional Price Parities, which adjust for higher or lower levels of housing, goods, and services across areas. Comparisons are available at the state level, the metropolitan-area level, and for non-metro areas within states. To illustrate, here are a couple of maps taken from "Living Standards in St. Louis and the Eighth Federal Reserve District: Let’s Get Real," an article by Cletus C. Coughlin, Charles S. Gascon, and Kevin L. Kliesen in the Review of the Federal Reserve Bank of St. Louis (Fourth Quarter 2017, pp. 377-94).

Here are the US states color-coded according to per capita GDP. For example, you can see that California and New York are in the highest category. My suspicion is that states like Wyoming, Alaska, and North Dakota are in the top category because of their energy production.



And now here are the US states color-coded according to per capita GDP with an adjustment for Regional Price Parities: that is, it's a measure of income adjusted for what it actually costs to buy housing and other goods. With that change, California, New York, and Maryland are no longer in the top category. However, a number of midwestern states like Kansas, Nebraska, South Dakota, and my own Minnesota move into the top category. A number of states in the mountain west and south that were in the lowest-income category when just looking at per capita GDP move up a category or two when the Regional Price Parities are taken into account.
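The adjustment itself is simple to state: Regional Price Parities are scaled so that the national average is 100, and real income is nominal income divided by RPP/100. Here is a minimal sketch, using made-up salary and RPP numbers rather than actual BEA data, which echoes the hiring-committee story above:

```python
def rpp_adjust(nominal_income, rpp):
    """Convert nominal income to national-average dollars using a
    Regional Price Parity (RPP), where the US average is scaled to 100."""
    return nominal_income / (rpp / 100)

# Hypothetical comparison of two job offers:
print(round(rpp_adjust(60_000, 115)))  # ~52,174 in a high-cost state
print(round(rpp_adjust(55_000, 92)))   # ~59,783 -- lower salary, higher real income
```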


When thinking about political and economic differences across states, these differences in income levels, housing prices, and other costs of living are something to take into account.