Wednesday, February 5, 2014

Judging Hospital Quality and Narrow Networks––Barking Up the Wrong Tree?

It isn't news to suggest that the most expensive hospitals may not be worth the money.

A recent paper published in the journal Health Affairs, "Understanding Differences Between High- and Low-Price Hospitals: Implications For Efforts To Rein In Costs," makes some excellent points regarding the pricing power of the largest hospitals and the wide variation in local prices. But then it attempts some comparisons between cost and quality of care, concluding that "the high-priced hospitals' performance on outcome-based quality measures was mixed."

Looking at the analysis suggesting that cost doesn't necessarily equal quality, and comparing it to some real-life situations I've seen, leads me to believe these studies are missing something really big.

Rating the best hospitals has always been about reputation as well as quantitative measurements of things like complications and morbidity.

These quantitative measurements are valid. But they are far from the whole picture because such ratings are largely based upon short-term mortality and complication rates. For example, a hospital's rating in the popular USNews and World Report Hospital Ranking partly depends upon the mortality rate immediately after the procedure: "A hospital's success at keeping people alive was judged by comparing the number of Medicare patients with certain conditions who died within 30 days of admission in 2009, 2010, and 2011 with the number expected to die given the severity of the illness."

The Health Affairs article points out that CMS tries to go further than the largely reputation-based ratings in the USNews survey by tracking more Medicare outcomes. The article points to the "mixed" outcome-based hospital quality measures among the hospitals with the best reputations: "They performed worse than the low-price hospitals on measures of excess readmissions and on patient-safety indicators, including postsurgical deaths and complications."

That got me thinking. As a health care consumer am I most interested in short-term complications and readmission rates or am I interested in getting cured of whatever I have?

And, yes, I certainly want to survive the first 30 days, and I don't want to pick up a hospital-acquired infection.

But, if I am faced with a prostate cancer diagnosis, I'm not about to be focused on the 30-day survival rate after surgery or the readmission rate.

What I would be focused on is getting to a physician specialist who can offer me the best long-term outlook, free of the dreaded complications. I would then be inclined to trust whatever hospital that physician used.

The recent hullabaloo over narrow networks and whether a more expensive hospital is worth it or not seems to me to miss the bigger point.

Aren't consumers focused on a long-term cure for whatever they have? What exactly does that have to do with 30-day mortality rates? It seems to me the real question is: how many of these people survived for years, not just a month?

And there is hardly any difference between these short-term measurements. The Health Affairs paper found that 14.7% of the Medicare beneficiaries treated in a low-priced facility didn't survive the first 30 days after a heart attack, compared with 14.8% of those treated in a high-priced facility.

More to the point, does the hospital matter, or is the choice of surgeon or oncologist what really matters? It seems to me that the hospital tends to follow from the choice of physician.

So why, then, is there all of this almost exclusive focus on a hospital's short-term quality results, like its readmission rate or its 30-day mortality rate?

These are legitimate measurements. They drive hospitals to be more careful, and that leads to lower mortality and less suffering. But isn't the most important criterion where my best chance for a cure, or at least a better quality of life, lies?

With a big majority of the lowest-cost Obamacare exchange plans offering narrow provider networks, understanding the quality of the providers in them compared to the rest of the market is a bigger deal than ever. These studies would seem to suggest narrow networks don't matter as long as a hospital's short-term quality data is comparable––that there isn't measurable value in going to what consumers have historically thought of as the "best" place.

Baloney.

Anyone who has been through a serious, life-threatening illness, themselves or with a loved one, and has had to navigate multiple providers to finally get to a good resolution will tell you there is a ton of difference in where you get your health care.

These new narrow networks that have cropped up in the health insurance exchanges are not the same as the narrow "high-performing" networks we have seen in the recent past, which attempted to identify high-quality providers and contract with them with the objective of achieving both higher quality and lower cost.

These new narrow networks were created when health plans came up with a low reimbursement schedule––often close to the Medicaid fee schedule––mailed it out to the provider community, and waited to see which providers would sign up.

Not the way I would choose a doctor or hospital.

I will suggest that what we need to see are health care quality measurements that track the way people really think about their health care choices.

Who is going to get me well?

Researchers tend to scoff at surveys like USNews's, which measure a hospital's reputation among physician specialists. But without hard data on long-term outcomes, who better to ask?

The Health Affairs article's bigger point is that there is an incredibly wide variation in pricing power among local providers. But really directing patient flow to the providers with the best combination of cost and quality will, I suggest, take data far more powerful than the short-term measurements available today.

In the meantime, the next time someone tries to tell you the care is just as good in a narrow network, ask them where they go.