My new book, “Myth Busters: Why Health Reform Always Goes Awry,” chronicles 50 years of failed attempts to reform health care in the United States.
When I say failed, I don’t just mean failed to do what was promised, or failed to live up to expectations, but failed disastrously: these efforts made conditions worse than they were before the “reform” was enacted, and did so only after spending billions of taxpayer dollars, disrupting the entire health-care system, putting patient lives at risk, and building a mountain of bureaucracy without adding anything to actual patient care.
Below I highlight just a few of these failures.
It All Began with Roemer’s Law
It all began with a concept known as “Roemer’s Law.” If you ask anyone who has studied health economics or health policy in the last 50 years, “What is Roemer’s Law?” they will be able to tell you in an instant: “That means a built bed is a filled bed.”
Milton Roemer, MD, was a researcher and professor, mostly at the University of California-Los Angeles, who spent a lifetime (he died in 2001) advocating for national health systems around the world. He went to work for the World Health Organization in 1951 and helped run Saskatchewan’s provincial single-payer system beginning in 1953. His “law” was based on a single study he did in 1959 that found a correlation between the number of hospital beds per person and the rate of hospital days used per person. That’s it. That is the whole basis for “Roemer’s Law.”
“A built bed is a filled bed.” This little bumper sticker slogan has been the foundation of American health policy for 60 years. Hundreds of laws, massive programs, thousands of regulations at the federal, state, and local levels of government, all have been based on this slogan. It is the source of such concepts as “provider-induced demand,” and has resulted in centralized health planning, Certificate of Need regulations, managed care, and everything else currently on the table. Yet this “law” is both verifiably untrue and illogical.
There is a kernel of truth to it. When third-party payers pick up the tab, the usual tension between buyer and seller doesn’t exist. The buyer has no reason to resist excessive prices if someone else pays the bill.
But believers in Roemer’s Law take that core idea to Alice-in-Wonderland proportions. They argue that, therefore, whenever a health-care provider wants to make more money, it simply has to sell more — more capacity equals more sales without end. So, the only way to rein in this endless consumption is to limit capacity — place strict controls on the availability of services. But the notion fails for several reasons:
- People are not eager to enter a hospital, even when the cost is zero. Hospitals are miserable places to spend time. Folks are not lined up around the corner just waiting for an opportunity to be admitted to the hospital if only there were more beds available.
- If the “law” were true, hospital occupancy should approach 100 percent at all times. In fact, occupancy rates vary considerably over time and from place to place. Some years they are up, other years they are down. For example, from 1970 to 2000, national hospital occupancy rates dropped from 77 percent to 67 percent, according to the National Center for Health Statistics. Apparently one-third of “built beds” were not “filled beds” during this period. In 2005 occupancy rates varied from 92 percent in Delaware to 53 percent in Idaho.
So Roemer’s Law is statistically and behaviorally untrue, yet it has been the basis for virtually all of the health policy initiatives of the last 60 years, including Certificate of Need, national health planning, hospital rate-setting, Health Maintenance Organizations, and more recently, Accountable Care Organizations, pay-for-performance, and comparative effectiveness research.
Then There’s National Health Planning
One of the first consequences of Roemer’s Law was the enactment of National Health Planning in 1974. No one seems to have anticipated some of the obvious effects of the enactment of Medicare and Medicaid in 1965—in particular, the impact of a huge infusion of public dollars into the health-care system. In 1965, state and local governments spent $4.3 billion on health care, while the federal government spent only $2.9 billion. By 1970, state and local spending would rise to $9.9 billion, and federal spending would reach $17.7 billion — more than six times what the federal government had spent five years earlier.
This, naturally, resulted in enormous health-care inflation. The annual increase in health-care spending was very close to the increase in Gross Domestic Product in 1965 and 1966, but it then began to rise at roughly double that rate.
[Chart: Rate of Increase in Gross Domestic Product and National Health Expenditures, percentage change from previous year, 1965–1970.]
Source: Katherine R. Levit, et al., “National Health Expenditures, 1990,” Health Care Financing Review, Fall 1991.
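The spending figures quoted above can be double-checked with a quick calculation. This is only a sketch using the dollar amounts cited in the text, not the underlying government data:

```python
# Public health-care spending figures quoted in the text (billions of dollars).
state_local_1965, federal_1965 = 4.3, 2.9
state_local_1970, federal_1970 = 9.9, 17.7

# Federal spending multiple over the five years, 1965-1970.
federal_growth = federal_1970 / federal_1965

# Combined state, local, and federal spending in each year.
total_1965 = state_local_1965 + federal_1965
total_1970 = state_local_1970 + federal_1970
total_growth = total_1970 / total_1965

print(f"Federal spending grew {federal_growth:.1f}x in five years")   # 6.1x
print(f"Combined public spending grew {total_growth:.1f}x")           # 3.8x
```

The "more than six times" figure thus applies to the federal share alone; combined public spending still nearly quadrupled over the same five years.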
Stark panic set in among policy makers. Stuart Altman, then the federal government’s top health-care regulator overseeing the Medicare and Medicaid programs (functions now housed in the Centers for Medicare &amp; Medicaid Services), recalled in a 2001 interview: “When I was 32 years old, I became the chief regulator in this country for health care. At that point, we were spending about 7.5 percent of our GDP on health care. The prevailing wisdom was that we were spending too much, and that if we hit 8 percent, our system would collapse.”
The first reaction was national wage and price controls imposed by President Nixon in August 1971. Naturally, these failed and were removed for most of the economy in January 1973, but retained for health care until April 30, 1974. This was immediately followed by the National Health Planning and Resources Development Act of 1974, which required states to establish elaborate bureaucracies to control the growth of hospitals and other health-care facilities.
So, we had a massive infusion of new money into the health-care system, which raised demand for services, which resulted in an astonishing increase in prices. How did the “health policy community” respond? They enacted a massive and mandatory health planning system, which was intended to reduce the supply of services — precisely the wrong reaction at a time of high inflation due to rising demand.
Not surprisingly, this approach did not work either, and health inflation continued unabated. The main law was repealed in 1982, but billions of dollars and years of effort were wasted on an idea that never made any sense in the first place. And no one was ever held to account.
Next, We Tried Hospital Rate-Setting
This was only the first of a series of equally catastrophic efforts. Next up was state-based hospital rate setting. With the repeal of the Health Planning Act in 1982, a lot of health policy wonks found themselves out of work. They had to come up with something new to earn a living and justify their PhDs.
They had learned a very expensive lesson — that reducing supply at a time of growing demand is a bad idea because it results in rising prices. So, they decided they would try controlling prices again. The initiative was based largely on a study published in the New England Journal of Medicine in 1980, “Hospital Cost Inflation under State Rate Setting Programs,” by Brian Biles, Carl Schramm, and Graham Atkinson that looked at a handful of states that had already adopted rate setting.
The article concluded, “the average annual rate of increase in hospital costs in (the six) rate-setting states has been 11.2 per cent, as compared with an average annual rate of increase of 14.3 per cent in states without such programs.” In 1986 the authors updated that information through 1984 in an article in Health Affairs, “Controlling Hospital Cost Inflation: New Perspectives on State Rate Setting.”
It is interesting that these were state-based systems. With Ronald Reagan as president, it was unlikely that private-sector price controls could have become federal law. But the health policy community is nothing if not inventive. When blocked at the federal level, it will switch to the states to accomplish its goals.
Of course, the six rate-setting states (Connecticut, Maryland, Massachusetts, New Jersey, New York, and Washington) cited in the original article started out as probably the most expensive and wasteful states in the nation. That is why they were prompted to adopt these systems in the first place. There was already plenty of fat to be trimmed, which would not be true for other states.
Indeed, the Health Affairs article reported that non-regulated states had a per capita hospital cost of $107.02 in 1972, while the six states that adopted the price controls were spending $135.08 per person. Further, these costs were spread over a much larger hospitalized population in the non-regulated states, which had an admissions rate of 152.8 per thousand in 1972, compared to 131.1 per thousand in the states that adopted regulations.
Usually, if a study uses a self-selected sample, researchers look to see what might distinguish the sample from the rest of the population and adjust their findings accordingly. Not so for health policy advocates, who are so eager to push their preferred remedies that they ignore what should be standard techniques of research.
The original study mentioned above completely overlooked many pertinent differences between the six regulated states and the 45 non-regulated states (including DC). Obvious differences include that the six states tended to be no-growth or low-growth states, so they had little need for new hospital construction. They also tended to have high Medicaid enrollment, but higher average incomes than the non-regulated states.
As it was, what the research discovered was that after several years of experience:
- The high-cost states remained high cost.
- These states began with lower rates of admissions, and ended with lower rates of admissions.
- They began with lower operating margins and ended with lower operating margins.
Yet somehow the researchers concluded that hospitals in the regulated states were “more efficient” than those in the non-regulated states, though it seems that having fewer admissions at higher costs while maintaining low profit margins would be a slam-dunk argument that these regulated hospitals were anything but more efficient.
Such one-sided research was persuasive enough to the “health policy community” that 30 states ended up adopting similar rate-setting programs during the 1980s. Here is yet another example of an unscrutinized idea that led to yet another failure — but only after more wasted money and time.
In 1997 Health Affairs published another article with a somewhat different tone. This was “Tracking the Demise of State Hospital Rate Setting,” by John McDonough. The article said, “Now, in the mid-1990s, state rate setting is nearly gone; most major systems have been deregulated during the last ten years.” (Ultimately these systems were repealed by every state but Maryland.) The article explained that the growing managed care companies thought they could negotiate lower hospital rates than were available through price controls, and that regulators themselves agreed their rules were “incomprehensible.”
Not mentioned by the author, but fairly obvious, was the reality that once state government becomes responsible for setting prices, it also becomes responsible for assuring the solvency of the facilities. Inefficient facilities are protected from failure, and any decision to close a hospital becomes a political, not an economic, one. A threatened facility can generate enough political support to keep its doors open, even when it makes no economic sense to do so.
In any case, once again an idea was tried and failed miserably at the cost of many billions of dollars and who knows how many lives lost or destroyed. All on an idea that was poorly thought through in the first place.