In July of 2016, the Centers for Medicare and Medicaid Services (CMS) implemented a new overall star rating system on the Hospital Compare website. Almost all hospitals are now rated between one and five stars (higher is better) depending on performance on a series of quality metrics. CMS' intent for the new star ratings is to "help patients and families learn about hospital quality, compare facilities side by side, and ask important questions about care quality."
However, patients and families will find the newly released star ratings from CMS confusing at best and misleading at worst. Many well-known hospitals that are highly rated by other ranking systems are absent from the list of 102 hospitals that received five stars, and many names on the list may not be familiar. Do these five-star hospitals really represent the nation's best in terms of quality? A closer look at their quality performance calls the CMS results into question.
A Flawed Methodology
The star rating methodology is intended to rate all hospitals on 64 quality measures drawn from seven quality domains, with each domain weighted to reflect its relative importance to patients. CMS has assigned greater weight (66 percent of the overall star rating) to the three domains—mortality, readmission, and patient safety—that measure clinical outcomes, such as whether a patient dies, contracts a hospital-acquired infection, or is readmitted to the hospital within 30 days of an inpatient stay. Other domains that focus on processes, such as efficient use of medical imaging and timeliness of care, are given less weight.
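To see how such a weighting works in practice, consider the minimal sketch below. The individual domain weights are illustrative assumptions chosen only so that the three clinical-outcome domains together carry the 66 percent described above, and the domain scores are hypothetical; CMS's actual calculation involves additional statistical modeling before stars are assigned.

```python
# Illustrative composite-score sketch. The domain weights are assumptions chosen
# so the three clinical-outcome domains carry 66 percent of the overall rating,
# as the post describes; the domain scores below are hypothetical.

DOMAIN_WEIGHTS = {
    "mortality": 0.22,                 # clinical outcome
    "safety_of_care": 0.22,            # clinical outcome
    "readmission": 0.22,               # clinical outcome
    "patient_experience": 0.22,
    "effectiveness_of_care": 0.04,
    "timeliness_of_care": 0.04,
    "efficient_use_of_imaging": 0.04,
}

def composite(scores):
    """Weighted average of standardized domain scores (higher is better)."""
    total_weight = sum(DOMAIN_WEIGHTS[d] for d in scores)
    return sum(DOMAIN_WEIGHTS[d] * s for d, s in scores.items()) / total_weight

# A hospital reporting all seven domains: outcomes drive two-thirds of the score.
full_report = {
    "mortality": 0.2, "safety_of_care": 0.1, "readmission": 0.0,
    "patient_experience": 0.3, "effectiveness_of_care": 0.1,
    "timeliness_of_care": -0.1, "efficient_use_of_imaging": 0.2,
}
print(f"composite score: {composite(full_report):.3f}")
```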
Nonetheless, CMS calculated and published star ratings for hospitals that had sufficient data to report on as few as three quality domains, including some hospitals that only had data from one clinical outcome domain. The fewer the clinical outcome domains a hospital reports, the less that hospital's overall star rating is actually tied to performance on patient outcomes.
Based on the July 2016 release of Hospital Compare data, 40 percent of the 102 hospitals that received a five-star rating did not have the minimum data necessary to report on either mortality or readmissions. Of those, 20 performed at only the national average on patient safety. Are all the shining stars here an accurate representation of quality?
This inconsistent weighting scheme leads to a wide performance divide among five-star hospitals. As shown in Exhibit 1, among the 30 five-star hospitals that had sufficient data to report only the minimum number of quality domains required—three out of a total of seven (red bars)—14 performed above the national average on only one quality domain, 15 on two domains, and a single hospital on all three. Hospitals that reported all seven quality domains (yellow bars), however, needed above-national-average performance on at least three domains to receive a five-star rating.
Exhibit 1: Difference in quality performance among five-star hospitals
Source: AAMC Analysis of the July 2016 release of hospital star ratings data available from Hospital Compare website.
Even more startling is a comparison of the quality performance of four- and five-star hospitals. Consider the two Midwest hospitals shown in Exhibit 2. In the quality domains for which both hospitals' data were available, Hospital A performed similarly to or better than Hospital B. In the quality domains for which Hospital B could not provide adequate data, Hospital A was at or above the national average. Yet Hospital A received a four-star rating, while Hospital B was awarded five stars.
Exhibit 2: Comparison of quality performance between a five-star hospital and a four-star hospital
Source: AAMC Analysis of the July 2016 release of hospital star ratings data available from Hospital Compare website.
This disconnect arises because, when a hospital has insufficient data to report on one or more quality domains, the "weights" of those missing domains are reallocated to the domains for which there is sufficient data. In the case of Hospital B, the weights of the four missing quality domains were simply reallocated to the three domains that were available. After reweighting, the one and only clinical outcome domain (in this case, safety of care) accounts for less than half of Hospital B's overall star rating. This is simply inconsistent with the rating system's original intention of making performance on clinical outcomes the predominant influence on the overall rating.
Another way to understand reweighting is to view it as a way of imputing values for the missing domains. In the case of Hospital B, performance at or above the national average on just three available domains was projected onto the four domains with missing values. Essentially, not having sufficient data may very well have given Hospital B an advantage, allowing it to earn a five-star rating while appearing no different from hospitals that reported all seven quality domains.
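To make the reweighting, and its interpretation as imputation, concrete, the sketch below walks through a hypothetical hospital that reports only safety of care plus two non-outcome domains. The weights and scores are illustrative assumptions, not Hospital B's actual data. It shows that after reweighting the lone outcome domain carries less than half of the composite, and that the reweighted score equals what the full weights would produce if every missing domain were imputed with the observed composite.

```python
# Sketch of the reweighting described above, for a hypothetical hospital that
# reports only three of the seven domains. Weights and scores are illustrative
# assumptions, not CMS's or any real hospital's data.

WEIGHTS = {
    "mortality": 0.22, "safety_of_care": 0.22, "readmission": 0.22,
    "patient_experience": 0.22, "effectiveness_of_care": 0.04,
    "timeliness_of_care": 0.04, "efficient_use_of_imaging": 0.04,
}

reported = {  # hypothetical hospital with a single clinical-outcome domain
    "safety_of_care": 0.0,        # at the national average
    "patient_experience": 0.3,    # above average
    "timeliness_of_care": 0.2,    # above average
}

# Reweighting: the missing domains' weights are spread over the reported ones.
observed_weight = sum(WEIGHTS[d] for d in reported)
reweighted = {d: WEIGHTS[d] / observed_weight for d in reported}
composite = sum(reweighted[d] * s for d, s in reported.items())

print(f"share of rating carried by safety of care: {reweighted['safety_of_care']:.1%}")
print(f"composite after reweighting:               {composite:.3f}")

# Equivalent view: impute the observed composite into every missing domain,
# then apply the full weights; the overall score comes out the same.
imputed = {d: reported.get(d, composite) for d in WEIGHTS}
composite_imputed = sum(WEIGHTS[d] * s for d, s in imputed.items())
print(f"composite after imputation:                {composite_imputed:.3f}")
```

Under these assumed weights, safety of care ends up carrying roughly 46 percent of the reweighted composite, which is consistent with the "less than half" share described for Hospital B above.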
Teaching Hospitals
Major teaching hospitals, a group that includes many of the nation's most renowned hospitals, provide comprehensive services and thus are able to report on a majority of quality domains. In fact, 80 percent of major teaching hospitals reported on all seven quality domains. To earn higher star ratings, these hospitals must meet a higher standard than hospitals that, because of narrower service areas and less diverse patient populations, report on fewer quality domains. Not only do major teaching hospitals need to perform better than the national average in more quality domains, but their overall star ratings are also more heavily tied to the outcomes of their clinical services (e.g., mortality) than to processes of care delivery (e.g., efficient use of medical imaging). While all improvement efforts can be challenging, we believe that improving clinical outcomes—for example, saving a patient's life—takes more than improving a delivery process, such as reducing the use of imaging.
Many major teaching hospitals also serve large populations of patients who live in poverty and in economically deprived neighborhoods. Study after study links low socioeconomic status (SES) to increased risk of readmission after inpatient discharge. Unfortunately, the readmission risk associated with patient SES is not currently adjusted for in quality domains such as readmissions. The fact that 70 percent of the major teaching hospitals with the highest share of low-SES patients (top quartile) received only one or two stars further reflects the systematic bias in the rating system.
To provide meaningful information for patients, families, and caregivers about hospital quality, a star ratings system has to make sense. At a minimum, quality performance among hospitals with the same star rating should be consistent. And higher star ratings should reflect better actual quality performance. Unfortunately, the CMS star ratings in their current form fail to meet this basic test and will do more harm than good to patients.
from Health Affairs Blog: http://ift.tt/2f9tSGx