Monday, October 5, 2015

Mass Shootings: Trends and Categories

The vile and ghastly news of the mass shooting last week at Umpqua Community College in Roseburg, Oregon, sent me to the July 2015 report "Mass Murder with Firearms: Incidents and Victims, 1999-2013," written by William J. Krouse and Daniel J. Richardson for the Congressional Research Service.

The most-used definition of a "mass shooting" is that four or more people are killed in a single incident. Here's the pattern from 1999-2013. 

Mass shootings can be divided into three categories: mass public shootings, "familicide" shootings that involve a group of family members, and felony mass shootings that would include gang executions, botched robberies and hold-ups, and the like. The chart above includes all three of these. The breakdown in the three categories is below. Mass public shootings account for only about half as many incidents as each of the other two categories, but include almost as many victims killed and more victims injured. 

What if we just look at the pattern for mass public shootings over time, leaving out the familicides and other felony mass shootings? Here's the pattern. If you squint a bit at 2007, 2009 and 2012, you can sort of imagine an upward trend here, but given that there's a lot of annual fluctuation, it's not clear that the trend is a meaningful one over this time frame. 
However, if one takes a longer time-frame going back several decades, it does appear that mass public shootings have risen. Krouse and Richardson write:

With data provided by criminologist Grant Duwe, CRS [the Congressional Research Service] also compiled a 44-year (1970-2013) dataset of firearms-related mass murders that could arguably be characterized as “mass public shootings.” These data show that there were on average:
• one (1.1) incident per year during the 1970s (5.5 victims murdered, 2.0 wounded per incident),
• nearly three (2.7) incidents per year during the 1980s (6.1 victims murdered, 5.3 wounded per incident),
• four (4.0) incidents per year during the 1990s (5.6 victims murdered, 5.5 wounded per incident),
• four (4.1) incidents per year during the 2000s (6.4 victims murdered, 4.0 wounded per incident), and
• four (4.5) incidents per year from 2010 through 2013 (7.4 victims murdered, 6.3 wounded per incident).
These decade-long averages suggest that the prevalence, if not the deadliness, of “mass public shootings” increased in the 1970s and 1980s, and continued to increase, but not as steeply, during the 1990s, 2000s, and first four years of the 2010s.
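Just to make those per-incident averages concrete, here's a quick back-of-the-envelope conversion into implied annual victim counts. The multiplication is mine, not the report's:

```python
# Back-of-the-envelope: implied average annual victim counts from the CRS
# decade averages quoted above. The multiplication is mine, not the report's.
decades = {
    # decade: (incidents per year, killed per incident, wounded per incident)
    "1970s":   (1.1, 5.5, 2.0),
    "1980s":   (2.7, 6.1, 5.3),
    "1990s":   (4.0, 5.6, 5.5),
    "2000s":   (4.1, 6.4, 4.0),
    "2010-13": (4.5, 7.4, 6.3),
}

for decade, (incidents, killed, wounded) in decades.items():
    print(f"{decade}: roughly {incidents * killed:.0f} killed and "
          f"{incidents * wounded:.0f} wounded per year, on average")
```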
As another way of illustrating this longer-term upward pattern of mass public shootings, consider only mass public shootings in which 10 or more people were killed. Up through 2013, there had been 13 such episodes in modern US history: seven occurring from 2007-2013, and six during the 41 years from 1966 to 2006. 

It feels almost mandatory for me to tack on some policy recommendation at the end of this post, but I'll resist. Some of the newly dead in Oregon are not even in their graves yet. There have been 4-5 mass public shootings each year for the last quarter-century. The US population is 320 million. In an empirical social-science sense, it's probably impossible to prove that any particular policy would reduce this total from 4-5 per year back to the 1-3 mass public shootings per year that happened in the 1970s and 1980s. In such a situation, policy proposals (whether the proposal is to react in certain ways or not to react in certain ways) will inevitably be based on a mixture of grief, outrage, preconceived beliefs, and hope, not on evidence.

Friday, October 2, 2015

An Interview with Amy Finkelstein: Health Insurance, Adverse Selection, and More

Douglas Clement has an "Interview with Amy Finkelstein" in the September 2015 issue of The Region, which is published by the Minneapolis Federal Reserve. Finkelstein has done a lot of her most prominent work looking at issues of insurance and risk: especially health insurance, but also long-term care insurance, annuities, and others. She's a theory-tester: that is, an empirical researcher who works with a keen awareness of what the previously accepted underlying theories might seem to imply. Back in 2012, Finkelstein was awarded the very prestigious John Bates Clark medal, given annually to an "American economist under the age of forty who is judged to have made the most significant contribution to economic thought and knowledge." In the Fall 2012 issue of the Journal of Economic Perspectives (where I labor in the fields as Managing Editor), Jonathan Levin and James Poterba offered an overview of Finkelstein's earlier career.

For example, standard models of the economics of insurance suggest that people who know that they are more likely to receive the insurance payout (more likely to get sick, for example) are more likely to seek out generous insurance policies. Sellers of insurance need to beware this "adverse selection" dynamic, as it is called, or they can end up pricing their insurance as if it were for the average person and then facing much higher payouts than expected. But does the evidence support the theory? Finkelstein points out that in a number of studies, those who get the insurance often do not end up receiving greater payouts. A possible reason is that some people are pretty safe risks in part because they are quite risk-averse, so they are more likely to purchase insurance and less likely to use it. Here are some comments from Finkelstein:

Suppose you have people—in health insurance we often refer to them as the “worried well”—who are healthy, so a low-risk type for an insurer, but also risk averse: They’re worried that if something happens, they want coverage. ... As a result, people who are low risk, but risk averse, will also demand insurance, just as high-risk people will. And it’s not obvious whether, on net, those with insurance will be higher risk than those without. ... We looked at long-term care insurance—which covers nursing homes—and rates of nursing home use. We found that individuals with long-term care insurance were not more likely to go into a nursing home than those without it, as standard adverse selection theory would predict. In fact, they often looked less likely to go into a nursing home. These results held even after controlling for what the insurance company likely knew about the individual, and priced insurance on. ... [O]ur data gave us a way to detect private information: people’s self-reported beliefs about their chance of going into a nursing home. And we showed that people who think they have a higher chance of going into a nursing home are both more likely to buy long-term care insurance and more likely to go into a nursing home. ... That certainly sounds like the standard adverse selection models! ... Then we found some examples in the data that we broadly interpreted as proxies for preferences such as risk aversion, and we found that individuals who report being more likely to, for example, get flu shots, or more likely to wear seatbelts, were both more likely to buy long-term care insurance and less likely to subsequently go into a nursing home.
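The logic is easy to simulate. Here's a minimal sketch with arbitrary made-up parameters--my own illustration, not Finkelstein's model--showing how the insured pool can end up less likely to claim than the uninsured:

```python
import random

random.seed(0)

# Minimal illustrative simulation of the "worried well" story. Risk-averse
# people are more likely to buy insurance AND more likely to take
# precautions, so the insured pool need not be riskier than the uninsured.
# All parameter values below are arbitrary, chosen only to make the point.
n = 200_000
insured, uninsured = [], []

for _ in range(n):
    high_risk = random.random() < 0.30
    risk_averse = random.random() < 0.50
    # Both high risk and risk aversion raise the demand for insurance.
    buy_prob = 0.1 + 0.3 * high_risk + 0.6 * risk_averse
    # Risk aversion lowers the chance of a claim (precautionary behavior).
    claim_prob = (0.30 if high_risk else 0.12) - (0.08 if risk_averse else 0.0)
    claim = random.random() < claim_prob
    (insured if random.random() < buy_prob else uninsured).append(claim)

print(f"claim rate among insured:   {sum(insured) / len(insured):.3f}")
print(f"claim rate among uninsured: {sum(uninsured) / len(uninsured):.3f}")
# With these parameters the insured claim less often -- the pattern
# Finkelstein found in long-term care insurance.
```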
In another prominent line of work, Finkelstein and several co-authors looked at the question of geographic variation in health care costs--that is, the well-known fact that health care utilization and spending per person are much higher in some urban areas and states than in others. They asked the question: What happens if a person relocates from a high-utilization, high-cost area to a low-cost, low-utilization area? If one believes that health care decisions are determined by a mixture of patient expectations and what local health care providers think of as "best practice," one might expect the health care usage of those who relocate to gradually trend toward the patterns of their new geographic location. But that's not what happens. Finkelstein explains:
We ... look at people who moved geographically across areas with different patterns of health care utilization (i.e., high-utilization versus low-utilization areas) and whether their health care utilization changed. Originally, we were very focused on this issue of habit formation, which would suggest a very specific conceptual model and econometric specification. ... So you would expect, in a model with habit formation, that maybe initially there wouldn’t be much change in your health care utilization. But over time—whether it’s because doctors would be urging you to do less or the people around you were like, “Why go to the doctor when you have a minor pain?”—you would gradually change your behavior toward the new norm. But that’s just not what we see at all. We have about 11 years of data on Medicare beneficiaries and about 500,000 of them who move across geographic areas. When they do, we see a clear, on-impact change: When you move from a high-spending to a low-spending place, or vice versa, you jump about 50 percent of the way to the spending patterns of the new place. But then your behavior doesn’t change any further. ... We estimate that about half of the geographic variation in health care utilization reflects something “fixed” about the patient that stays with them when they move, such as their health or their preferences for medical care. And about half of the geographic variation in health care utilization reflects something about the place, such as the beliefs and styles of the doctors there, or the availability of various medical technologies. This gives you a very different perspective on how to think about the geographic variation in health care spending than the prior conventional wisdom that most of the geographic variation in the health care system was due to the supply side—that is, something about the place rather than the patient.
In the last few years, some of Finkelstein's most prominent research has been an analysis of data generated by an experiment in the state of Oregon. Back in 2008, Oregon wanted to expand Medicaid coverage to low-income people who wouldn't otherwise have been eligible. The state realized that it didn't have enough money to offer the expanded health insurance to everyone, so it held a lottery. From an academic research point of view, this decision was a dream come true, because it became possible to compare health and life outcomes for two very similar groups--one randomly chosen to receive additional health insurance and one not. Finkelstein and a team of co-authors were on the job. Finkelstein describes some of their findings:
For health care use, we found across the board that Medicaid increases health care use: Hospitalizations, doctor visits, prescription drugs and emergency room use all increased. On the one hand, this is economics 101. Demand curves slope down: When you make something less expensive, people buy more of it. And what health insurance does, by design, is lower the price of health care for the patient. ... On the other hand, there were ways in which these results were surprising. For Medicaid, in particular, there’s been a lot of conjecture that while in general, health insurance would increase use of health care, that because Medicaid reimbursement rates to providers are so low, providers wouldn’t want to treat Medicaid patients. ... Our findings reject this view. We find compelling evidence from a randomized evaluation that relative to being uninsured, Medicaid does increase use of health care. Another result that some found surprising was on use of the emergency room. There had been claims in policy circles that covering the uninsured with Medicaid might get them out of the emergency room … The hope that ER use would go down comes from the belief that doctor visits are substitutes for the ER, so when the doctor also becomes free, you go to the doctor instead of the emergency room. Maybe this is the case (or maybe it isn’t), but on net, our results show any substitution for the doctor that may exist is just not outweighed by the direct effect of making the emergency room free. On net, Medicaid increases use of the emergency room, at least in the first one to two years of coverage we are able to look at.
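The power of the lottery design is easy to see in a toy example. Here's a minimal sketch--all numbers invented for illustration, this is not the Oregon data--where a simple difference in means between winners and losers recovers the causal effect:

```python
import random

random.seed(2)

# Toy version of the lottery comparison (all numbers invented). Winning is
# random, so winners and losers have the same underlying health, and a
# difference in means isolates the causal effect of coverage. For
# simplicity, assume every lottery winner actually enrolls in Medicaid.
n = 200_000
winners, losers = [], []

for _ in range(n):
    sick = random.random() < 0.3   # underlying health, independent of the lottery
    won = random.random() < 0.5    # random assignment
    base = 1.5 if sick else 0.4    # assumed ER visits per year if uninsured
    effect = 0.2 if won else 0.0   # assumed causal effect of coverage
    visits = max(0.0, random.gauss(base + effect, 0.5))
    (winners if won else losers).append(visits)

diff = sum(winners) / len(winners) - sum(losers) / len(losers)
print(f"winner-loser difference in ER visits: {diff:+.3f}/year (assumed effect: +0.2)")
```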
A variety of other findings have emerged from this research, which is ongoing. In the Oregon data, the additional health insurance reduced financial risk for households, and perhaps not coincidentally, also led to improvements in mental health status (measured both by self-reported mental health and by the proportion diagnosed with depression). In terms of measures of physical health, Finkelstein reports, "we did not detect statistically significant effects on the physical health measures we studied: blood sugar, cholesterol and blood pressure."

The expansion of Medicaid in Oregon clearly brought at least some benefits to the previously uninsured. But was the cost to the state worth the benefits to the individuals? Finkelstein and a couple of co-authors tried to model what the insurance was worth to those receiving it. They found:
[O]ur central estimate is that the value of Medicaid to a recipient is about 20 to 40 cents per dollar of government expenditures. ... The other key finding is that the nominally “uninsured” are not really completely uninsured. We find that, on average, the uninsured pay only about 20 cents on the dollar for their medical care. This has two important implications. First, it’s a huge force working directly to lower the value of Medicaid to recipients; they already have substantial implicit insurance. ... Second and, crucially, the fact that the uninsured have a large amount of implicit insurance is also a force saying that a lot of spending on Medicaid is not going directly to the recipients; it’s going to a set of people who, for want of a better term, we refer to as “external parties.” They’re whoever was paying for that other 80 cents on the dollar.

For those who would like some additional doses of Finkelstein, I've posted a couple of times as results from the Oregon study were published, and you can check them out at "Effects of Health Insurance: Randomized Evidence from Oregon" (August 31, 2012) and "Why the Uninsured Don't Have More Emergency Room Visits" (January 6, 2014). Finkelstein has also published several articles in the Journal of Economic Perspectives: one on the subject of "Long-Term Care Insurance in the United States" (November 22, 2011), and another, with Liran Einav in the Winter 2011 issue, on "Selection in Insurance Markets: Theory and Empirics in Pictures."

Thursday, October 1, 2015

Causes of Wealth Inequality: Dynastic, Valuation, or Income?

There are at least three reasons why inequality of wealth could remain high or rise over time: 1) dynastic reasons, in which inherited wealth looms larger over time; 2) valuation issues, as when the price of existing assets like stocks or real estate soars for a time; and 3) a surge of inequality at the very top of the income distribution, which generates a corresponding inequality in wealth. Richard Arnott, William Bernstein, and Lillian Wu "agree that inequality of wealth has intensified in the recent past." However, they challenge the importance of the dynastic explanation and emphasize the latter two causes in their essay, "The Myth of Dynastic Wealth: The Rich Get Poorer," which appears in the Fall 2015 issue of the Cato Journal.

A substantial chunk of their essay is a review and critique of the arguments in Thomas Piketty's 2013 book Capital in the Twenty-First Century. I assume that even readers of this blog, who are perhaps more predisposed than normal humans to find such a discussion of interest, have mostly had enough of that. For those who want more, some useful starting points are my posts on "Piketty and Wealth Inequality" (February 23, 2015) and "Digging into Capital and Labor Income Shares" (March 20, 2015).

Here, I want to focus instead on the empirical evidence Arnott, Bernstein, and Wu offer about dynastic wealth in the United States. They focus on evidence from the Forbes 400 list of the wealthiest Americans, which has been published since 1982. They look both at how many famous fortunes of the earlier part of the 20th century survived to be on this list, and at the evolution of who is on this list over time. They write:

Take, as a counterexample, the Vanderbilt family. When the family converged for a reunion at Vanderbilt University in 1973, not one millionaire could be found among the 120 heirs and heiresses in attendance. So much for the descendants of Cornelius Vanderbilt, the richest man in the world less than a century before. ... The wealthiest man in the world in 1918 was John Davison Rockefeller, with an estimated net worth of $1.35 billion. This was a whopping 2 percent of the U.S. GDP of $70 billion at that time, nearly two million times our per capita GDP, at a time when the nation was the most prosperous in the world. An equivalent share of U.S. GDP today would translate into a fortune of over $300 billion.  ...  The Rockefellers ... scored 13 seats on the 1982 Forbes debut list, with collective wealth of $7 billion in inflation-adjusted 2014 dollars. As of 2014, only one Rockefeller (David Rockefeller, who turned 100 in June 2015) remains, with a net worth of about $3 billion. If dynastic wealth accumulation were a valid phenomenon, we would expect little change in the composition of the Forbes roster from year to year. Instead, we find huge turnover in the names on the list: only 34 names on the inaugural 1982 list remain on the 2014 list ... 

Arnott, Bernstein, and Wu offer a number of ways in which dynastic wealth is eroded from one generation to the next: 1) low returns (including when the rich "fall prey to knaves"); 2) investment expenses paid to "bank trust companies, `wealth management' experts, estate attorneys, and the like"; 3) income, capital gains, and estate taxes; 4) charitable giving; 5) the division of fortunes among heirs; and 6) spending, as when some heirs do a lot of it. Their overall finding, based on the patterns in their data, is that among the hyper-wealthy, the common pattern is for real net worth to be cut in half every 14 years or so, and to decline by about 70% from one generation to the next.
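A quick calculation shows what that halving implies, assuming simple exponential decay (my arithmetic, not theirs):

```python
import math

# What "halved every 14 years" implies, assuming simple exponential decay.
half_life = 14.0
annual_erosion = 1 - 0.5 ** (1 / half_life)
print(f"implied annual real erosion: {annual_erosion:.1%}")   # about 4.8% per year

# Time for a fortune to fall 70%, the per-generation decline they report:
years = half_life * math.log(0.3) / math.log(0.5)
print(f"years for a 70% decline: {years:.0f}")                # about 24 years
```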

If the inequality of wealth is not a dynastic phenomenon, and dynastic wealth in fact tends to fade with time, then why has inequality of wealth remained high in recent decades? Arnott, Bernstein, and Wu suggest two alternatives.

One is the huge run-up in asset values in recent decades, including the stock market. However, the authors make an important and intriguing point about these valuations. From a long-run viewpoint, gains from stock market investment need to be connected to the profits earned by companies. In the last few decades, a major change in the US stock market is that dividends paid by firms have dropped. In the past, those who owned stock looked less wealthy right now, but because of owning stock they could often expect to receive a hefty stream of dividend payments in the future. Now, those who own stock look more wealthy right now (after the run-up in stock prices), but they appear likely to receive a lower stream of dividend payments in the future. Thus, more of the future profit performance of a company is showing up in the current price of the stock, and less as a payment of dividends in the future. This is a more complex phenomenon than a simple rise in wealth inequality.
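A stylized present-value example may help here--my own textbook Gordon-growth illustration, not anything from the essay. Two firms with identical earnings and identical total returns can look very different in measured wealth, depending on how much of the earnings are paid out as dividends:

```python
# Gordon-growth illustration (my own textbook example, not from the essay).
# Two firms earn the same $10 per share and offer the same total return r;
# one pays everything out as dividends, the other retains most earnings.
r, earnings = 0.08, 10.0

def price_path(payout, years):
    dividend = payout * earnings
    g = (1 - payout) * r          # growth from reinvesting retained earnings at r
    price0 = dividend / (r - g)   # Gordon growth formula
    return [price0 * (1 + g) ** t for t in range(years + 1)]

all_dividends = price_path(payout=1.0, years=20)
mostly_retained = price_path(payout=0.4, years=20)

print(f"initial price, full payout:    {all_dividends[0]:.0f}")    # 125
print(f"initial price, 40% payout:     {mostly_retained[0]:.0f}")  # also 125
print(f"price in year 20, full payout: {all_dividends[-1]:.0f}")   # still 125
print(f"price in year 20, 40% payout:  {mostly_retained[-1]:.0f}") # about 319
# Total returns are identical; low-payout shareholders just show more of
# their wealth as share price and less as a future dividend stream.
```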

The other change that they point to is the enormous payments received by corporate executives, often through stock options. The authors are writing in a publication of the libertarian Cato Institute. Thus, it is no surprise when they write: "We have no qualms about paying entrepreneurial rewards (i.e., vast compensation) to executives who create substantial wealth for their shareholders or who facilitate path-breaking innovations and entrepreneurial growth." But then they go on to add:
But an abundance of research shows little correlation between executive compensation and shareholder wealth creation (let alone societal wealth creation). Nine-figure compensation packages are so routine they only draw notice when the recipients simultaneously run their companies into the ground, as was the case with Enron, Global Crossing, Lehman Brothers, Tyco, and myriad others. It’s difficult for an entrepreneur to become a billionaire, in share wealth, while running a failing business. How can even mediocre corporate executives take so much of the pie? Bertrand and Mullainathan (2001) cleverly disentangled skill from luck by examining situations in which earnings changes could be reasonably ascribed to luck (say, a fortuitous change in commodity prices or exchange rates). They found that, on average, CEOs were rewarded just as much for “lucky” earnings as for “skillful” earnings. The authors postulate what they term the “skimming” hypothesis: “When the firm is doing well, shareholders are less likely to notice a large pay package.” A governance linkage is also evident: The smaller the board, the more insiders on it, and the longer tenured the CEO, the more flagrant “pay for luck” becomes, while the presence of large shareholders on the board serves to inhibit skimming. Perhaps shareholders should be more attentive to governance?

Wednesday, September 30, 2015

Exchange Rates Moving

Major exchange rates for countries around the world are in the midst of movement that is large by historical standards. The International Monetary Fund offers some background in its October 2015 World Economic Outlook report, specifically in Chapter 3: "Exchange Rates and Trade Flows: Disconnected?"  The main focus of the chapter is on how the movements in exchange rates might affect trade balances, but at least to me, equally interesting is how the movement may affect the global financial picture.

As a starting point, here's a figure showing recent movements in exchange rates for the United States, Japan, the euro area, Brazil, China, and India. In each panel of the figure, the horizontal axis runs from 0 to 36 months. The shaded areas show how much exchange rates typically moved over a 36-month period, using data from January 1980 through June 2015. The darker shading marks the 25th-to-75th percentile band: half of all historical 36-month movements fell within this range. The lighter shading marks the 10th-to-90th percentile band, which contains 80 percent of historical movements. The blue lines show the actual movement of exchange rates using different but recent starting dates for each country (as shown in the panels). In every case the exchange rate has moved beyond the 25th/75th band, and in most cases it is outside the 10th/90th band, too.
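For readers who want the mechanics, here's a sketch of how such historical bands can be constructed--my own reconstruction of the method described above, run on simulated data rather than the IMF's actual series:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of how historical percentile bands like the IMF's can be built.
# Take every overlapping 36-month window of cumulative exchange rate
# changes, then compute percentiles at each horizon.
monthly_log_changes = rng.normal(0.0, 0.02, size=426)  # stand-in for 1980-2015 data
horizon = 36
windows = np.array([
    np.cumsum(monthly_log_changes[i:i + horizon])
    for i in range(len(monthly_log_changes) - horizon + 1)
])

bands = {p: np.percentile(windows, p, axis=0) for p in (10, 25, 75, 90)}
# The 25th/75th band brackets half of all historical 36-month paths; the
# 10th/90th band brackets 80 percent. A recent path outside the 10th/90th
# band is, by construction, a 1-in-5 (or rarer) event historically.
print({p: round(float(b[-1]), 3) for p, b in bands.items()})
```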

As the figure shows, currencies are getting stronger in the US, China, and India, but getting weaker in Japan, the euro area, and Brazil. The IMF describes the patterns this way:
Recent exchange rate movements have been unusually large. The U.S. dollar has appreciated by more than 10 percent in real effective terms since mid-2014. The euro has depreciated by more than 10 percent since early 2014 and the yen by more than 30 percent since mid-2012 ...  Such movements, although not unprecedented, are well outside these currencies’ normal fluctuation ranges. Even for emerging market and developing economies, whose currencies typically fluctuate more than those of advanced economies, the recent movements have been unusually large.
The report focuses on how movements of exchange rates have historically affected prices of imports and exports (which depends on the extent to which importers and exporters "pass through" the changes in exchange rates as they buy and sell), and in turn what that change in import and export prices means for the trade balance.
The results imply that, on average, a 10 percent real effective currency depreciation increases import prices by 6.1 percent and reduces export prices in foreign currency by 5.5 percent ... The estimation results are broadly in line with existing studies for major economies. ... The results suggest that a 10 percent real effective depreciation in an economy’s currency is associated with a rise in real net exports of, on average, 1.5 percent of GDP, with substantial cross-country variation around this average ...
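Scaling those averages to moves of different sizes is simple proportional arithmetic (my calculation; note that extrapolating linearly to very large moves is a strong assumption):

```python
# Proportional scaling of the chapter's average estimates (my arithmetic).
# Extrapolating linearly to very large moves is a strong assumption.
def trade_effects(depreciation_pct):
    import_prices = 6.1 * depreciation_pct / 10  # percent rise in import prices
    export_prices = 5.5 * depreciation_pct / 10  # percent fall, in foreign currency
    net_exports = 1.5 * depreciation_pct / 10    # rise in real net exports, % of GDP
    return import_prices, export_prices, net_exports

for dep in (10, 30):   # 30 is roughly the yen's move since mid-2012
    imp, exp, nx = trade_effects(dep)
    print(f"{dep}% depreciation: import prices +{imp:.1f}%, "
          f"export prices -{exp:.1f}%, net exports +{nx:.1f}% of GDP")
```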
The estimates of how movements in exchange rates affect trade seem sensible and mainstream to me, but I confess that I am more intrigued and concerned about how changes in exchange rates can affect the global financial picture. In the past, countries often ran into extreme financial difficulties when they had borrowed extensively in a currency not their own--often in US dollars--and then when the exchange rate moved sharply, they were unable to repay. In the last few years, the governments of most emerging market economies have tried to make sure this would not happen, by keeping their borrowing relatively low and by building up reserves of US dollars to be drawn down if needed.

However, there is some reason for concern that a large share of companies in emerging markets have been taking on a great deal more debt, and because a substantial share of that debt is denominated in foreign currency, these firms are increasingly exposed to the risk of shifting exchange rates. A different IMF report, the October 2015 Global Financial Stability Report, looks at this issue in Chapter 3: "Corporate Leverage in Emerging Markets--A Concern?" For a sample of the argument, the report notes:
Corporate debt in emerging market economies has risen significantly during the past decade. The corporate debt of nonfinancial firms across major emerging market economies increased from about $4 trillion in 2004 to well over $18 trillion in 2014 ... The average emerging market corporate debt-to-GDP ratio has also grown by 26 percentage points in the same period, but with notable heterogeneity across countries. ...  Leverage has risen relatively more in vulnerable sectors and has tended to be accompanied by worsening firm-level characteristics. For example, higher leverage has been associated with, on average, rising foreign exchange exposures. Moreover, leverage has grown most in the cyclical construction sector, but also in the oil and gas subsector. Funds have largely been used to invest, but there are indications that the quality of investment has declined recently. These findings point to increased vulnerability to changes in global financial conditions and associated capital flow reversals—a point reinforced by the fact that during the 2013 “taper tantrum,” more leveraged firms saw their corporate spreads rise more sharply ...
The relatively benign outcome from shifts in exchange rates is that they tweak prices of exports and imports up and down. The deeper concern arises if the movements in exchange rates lead to substantial debt defaults, or to "sudden stop" episodes in which large flows of international financial capital that had been heading into a country sharply reverse direction. In the last few decades, this mixture of debt problems and sudden shifts in international capital flows has been the starting point for national-level financial crises in east Asia, Russia, Latin America, and elsewhere.

Tuesday, September 29, 2015

Computer Use and Learning: Some Discomfiting International Experience

Greater use of computers to support K-12 education is sometimes touted as the magic talisman that will improve quality and control costs. But the OECD provides some discomfiting evidence for such optimism in its recent report, Students, Computers, and Learning: Making the Connection. From the Foreword:
This report provides a first-of-its-kind internationally comparative analysis of the digital skills that students have acquired, and of the learning environments designed to develop these skills. This analysis shows that the reality in our schools lags considerably behind the promise of technology. In 2012, 96% of 15-year-old students in OECD countries reported that they have a computer at home, but only 72% reported that they use a desktop, laptop, or tablet computer at school, and in some countries fewer than one in two students reported doing so. And even where computers are used in the classroom, their impact on student performance is mixed at best. Students who use computers moderately in school tend to have somewhat better learning outcomes than students who use computers comparatively rarely. But students who use computers very frequently in school do a lot worse in most learning outcomes, even after accounting for social background and student demographics. 
Here are a couple of sample results from the OECD report. The horizontal axis is a measure of the use of information and communications technology in school. The vertical axis is a measure of scores on reading or math tests. The curve that is shown has been adjusted for the socioeconomic status of students. For reading, the result seems to be that some intermediate level of computer use beats too much or too little. For math, the use of computers doesn't seem to have much benefit at all.

The OECD report makes the point in several places that results like these don't prove that computerized instruction can't work, nor that it isn't working well in some places. The report emphasizes that if computerized instruction actually leads to more time spent on studying, or more efficient use of time spent on studying, it would then have potential to increase learning. But at least for now, looking over the broad spectrum of OECD countries, it seems fair to say that there are places where the use of computers in schools should be higher, and places where it should be lower--and we haven't yet developed best practices for how computerized instruction can work best.

For a previous post with evidence for being skeptical about whether computers at home help learning, see this post from March 29, 2013, with evidence from an experiment done in five school districts in California.

Friday, September 25, 2015

Trends in Employer-Provided Health Insurance

Most people in high-income countries are insulated from the actual cost of health care services. When health care is provided by or billed to the government, knowledge and perception about the full cost of that care becomes muffled. In the United States, reports the Kaiser Foundation, "Employer-sponsored insurance covers over half of the non-elderly population, 147 million people in total." Again, when an employer is paying most of the cost of health insurance, knowledge and perception about the full cost become a matter of whether you read articles full of statistics on health care spending, not personal experience. Of course, there are good health and safety reasons why one might prefer that patients and health care providers not be thinking at every moment about how to pinch pennies and cut costs. But for society as a whole, the costs of health care don't vanish just because most patients and health care providers would prefer not to talk about them--or even to know very much about them.

The Kaiser Foundation, along with the Health Research & Educational Trust, does an annual survey of private and nonfederal public employers with three or more workers, and the results of the "2015 Employer Health Benefits Survey" are available here. In what follows, I'll quote mainly from the "Summary of Findings."
"The key findings from the survey, conducted from January through June 2015, include a modest increase (4%) in the average premiums for both single and family coverage in the past year. The average annual single coverage premium is $6,251 and the average family coverage premium is $17,545. The percentage of firms that offer health benefits to at least some of their employees (57%) and the percentage of workers covered at those firms (63%) are statistically unchanged from 2014. ... Employers generally require that workers make a contribution towards the cost of the premium. Covered workers contribute on average 18% of the premium for single coverage and 29% of the premium for family coverage ..."
Here's how the average employer-sponsored health insurance premium rose from 2005 to 2015. The Patient Protection and Affordable Care Act was signed into law by President Obama in March 2010, in the middle of this time period.

The next graph shows just the worker's contribution to the health insurance premium. In addition, the deductibles and coinsurance payments in employer-provided health insurance are on the rise.

The average annual deductible is similar to last year ($1,217), but has increased from $917 in 2010. ... Looking at the increase in deductible amounts over time does not capture the full impact for workers because the share of covered workers in plans with a general annual deductible also has increased significantly, from 55% in 2006 to 70% in 2010 to 81% in 2015. If we look at the change in deductible amounts for all covered workers (assigning a zero value to workers in plans with no deductible), we can look at the impact of both trends together. Using this approach, the average deductible for all covered workers in 2015 is $1,077, up 67% from $646 in 2010 and 255% from $303 in 2006. A large majority of workers also have to pay a portion of the cost of physician office visits. Almost 68% of covered workers pay a copayment (a fixed dollar amount) for office visits with a primary care or specialist physician, in addition to any general annual deductible their plan may have. Smaller shares of workers pay coinsurance (a percentage of the covered amount) for primary care office visits (23%) or specialty care visits (24%). For in-network office visits, covered workers with a copayment pay an average of $24 for primary care and $37 for specialty care. For covered workers with coinsurance, the average coinsurance for office visits is 18% for primary and 19% for specialty care. While the survey collects information only on in-network cost sharing, it is generally understood that out-of-network cost sharing is higher.
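The "all covered workers" deductible calculation is worth unpacking. Here's a sketch using only the shares and averages quoted above; the implied per-plan averages are backed out, so treat them as approximate:

```python
# Unpacking the "all covered workers" deductible: workers in no-deductible
# plans count as $0, so the all-worker average equals (share with a
# deductible) x (average deductible among those who have one).
years = {
    # year: (share of covered workers with a deductible, avg deductible across all)
    2006: (0.55, 303),
    2010: (0.70, 646),
    2015: (0.81, 1_077),
}

for year, (share, avg_all) in years.items():
    implied_avg = avg_all / share   # backed out, approximate
    print(f"{year}: implied average among workers who have a deductible: "
          f"${implied_avg:,.0f}")

print(f"growth 2010-2015: {1_077 / 646 - 1:.0%}")   # the 67% quoted above
print(f"growth 2006-2015: {1_077 / 303 - 1:.0%}")   # the 255% quoted above
```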
Here's a figure showing the rise over time in share of employer-provided health insurance plans with a substantial deductible.

Of course, the figures given here are national averages, and in a big country, there will be variation around these averages. Those who want details broken down by large and small firms, type of health care plan (preferred provider organizations, HMOs, high-deductible plans, and others), geography, and other factors can consult the report. That said, here are two of my own takeaway points.

1) For a lot of middle-income workers, the cost of employer-paid health insurance at more than $17,000 per year for family coverage is a big part of overall compensation--much larger than many people realize. It's also not something that employers do out of the goodness of their hearts. Employers look at what overall compensation they are willing to pay, and when more of it comes in the form of health insurance premiums, less is available for take-home pay.

2) The rise in direct employee contributions to their own health care costs--contributing to the insurance premium, along with deductible and coinsurance--is enormously annoying to me as a patient and a father. On the other side, the economist in me recognizes that cost-sharing plays a useful function, and there is a strong body of evidence showing that when patients face a modest degree of cost-sharing, they use substantially fewer health care services and their health status doesn't seem to be any worse. But at some point--and maybe some people are already reaching that point--high deductibles and copayments could potentially lead people to postpone needed care.

As I've pointed out on this blog in the past, the prevalence of employer-provided health insurance in the US economy is an historical accident, dating back to a time in World War II when wage and price controls were in effect--but employers were allowed to offer a raise by providing health insurance coverage to employees. The amount that employers spend on health insurance for employees is not counted as income to the employees, and the US government estimates that excluding this form of compensation from the income tax costs the government more than $200 billion per year. Moreover, the percentage of firms providing employer-provided health insurance--especially among mid-sized and small firms--seems to be declining slowly over time.
But almost all large employers do provide health insurance benefits, and seem likely to do so into the future. A fair understanding of the US health insurance market and health care policy needs to face up to the social costs and tradeoffs, and not just the benefits, of employer-provided insurance.

Thursday, September 24, 2015

Wage Inequality Across US Metropolitan Areas

US urban areas differ in their level of wage inequality, and in how that level has been changing over time. J. Chris Cunningham provides some data in "Measuring wage inequality within and across U.S. metropolitan areas, 2003–13," which appears in the September 2015 issue of the Monthly Labor Review (which is published by the US Bureau of Labor Statistics).

For his measure of wage inequality, Cunningham focuses on what is sometimes called the 90/10 ratio: the ratio of the wage of the person at the 90th percentile of the wage distribution to that of the person at the 10th percentile. "The most recent data show that the 90th-percentile annual wage in the United States for all occupations combined was $88,330 in 2013, and the 10th-percentile wage was $18,190. In other words, the highest paid 10 percent of wage earners in the United States earned at least $88,330 per year, while the lowest paid 10 percent earned less than $18,190 per year. Therefore, by this measure, the “90–10” ratio in the United States was 4.86 in 2013, compared with 4.54 in 2003, an increase of about 7 percent over that 10-year period."
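The calculation itself is a one-liner. Here's a quick sketch; the simulated wage distribution is invented, and only the 2013 and 2003 figures are from the article:

```python
import numpy as np

# The 90-10 ratio in a few lines. The simulated wage distribution below is
# invented; only the published BLS figures are from the article.
wages = np.random.default_rng(3).lognormal(mean=10.5, sigma=0.6, size=100_000)
p90, p10 = np.percentile(wages, [90, 10])
print(f"simulated 90-10 ratio: {p90 / p10:.2f}")

# With the published 2013 numbers:
print(f"US, 2013: {88_330 / 18_190:.2f}")   # 4.86, up from 4.54 in 2003
```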

How does this measure of inequality differ across metro areas? The most unequal metropolitan areas, where the 90/10 ratio is above 5.5, are shown by reddish shading in the map below. They are heavily concentrated from Washington, DC to Boston on the east coast, and in the San Francisco/San Jose region on the west coast.

What are some of the factors correlated with higher levels of wage inequality? Larger cities tend to have greater wage inequality. Also, areas with a higher proportion of certain high-paying occupations tend to have greater wage inequality, including "management; business and financial operations; computer and mathematical; architecture and engineering; life, physical, and social science; legal; arts, design, entertainment, sports, and media; and healthcare practitioners and technical." Here's a list of the top 10 and bottom 10 cities according to the 90/10 measure of wage inequality--with a breakdown of some of these higher-wage occupations in these urban areas. 

I don't have any especially deep point to make about these differences between cities. The list of high-inequality urban areas perhaps helps to explain why the Occupy movement was especially prominent in eastern cities and in the San Francisco Bay Area. It's useful to remember that both the issues created by inequality, and the consequences of taking steps to address inequality, will not be perceived or felt equally across urban areas.