The health care world is full of colloquial terms like value or affordability that mean different things to different stakeholders. Often how you define them, and what you think of them, depends on your place within the overall health ecosystem. It’s a classic case of “beauty is in the eye of the beholder.” One such term is effectiveness. We all want medical treatment to be effective, and understanding what that word means to different stakeholders is critical to having a productive dialogue, whether you are in a hospital, at a research center, or on the floor of Congress. Here is a brief history of the concept of effectiveness, a few insights about its many meanings, and the debates we can expect to have about it in the years ahead.
Over the past 70 years, there have been a series of efforts to differentiate between effective and ineffective care, each of which has relied on a different but overlapping set of terms: (1) determinations of medical necessity by insurance companies, which began in the 1940s; (2) assessments of the “appropriateness” of care, popularized by the RAND Corporation in the mid-1980s; (3) evidence-based medicine, a movement that began in the late 1980s and early 1990s; and (4) value-based health care, which is widely discussed today. A better understanding of these trends and how they are related gives stakeholders common ground and a historical perspective for today’s conversations about effectiveness in health care.
Medical necessity
In the 1940s, health insurers began using the term medical necessity to differentiate between care they would cover and care they would not. This term refers to care that is reasonable, necessary, and appropriate based on clinical standards of care for otherwise covered services. It was not until the 1960s that insurers started inserting specific definitions of medical necessity into contract language. Their aim was to avoid paying for services that were unnecessary, excessive, or experimental — services that, by their definition, were not effective.
Because patients’ perceptions of effectiveness vary greatly, this initiative was met with significant consumer backlash. In the decades since the introduction of payment based on effectiveness, there have been several high-profile cases of patients claiming an insurance company denied them access to a treatment or service that really was necessary.
With the 2012 launch of the Choosing Wisely campaign by the ABIM Foundation and its partner, Consumer Reports, we have entered an exciting new era of building some common ground. While consumers and payers may still disagree on what is medically necessary, the Choosing Wisely campaign identifies procedures that could be considered unnecessary for specific conditions, given a paucity of evidence. The campaign puts out lists of “Things Providers and Patients Should Question,” identifying tests and procedures that may not be beneficial for certain conditions and patients. The lists are designed in partnership with medical specialty societies to help educate and empower consumers and to enhance health care services.
Appropriateness
Like medically necessary (Note 1), appropriate care does not have just one definition. Appropriateness can refer to the care itself and/or the setting in which the service is delivered. Most health care professionals use the phrase to refer to care that is suited to each individual patient and his or her specific conditions.
The term appropriateness can also be used to describe health care services that improve patient outcomes and are consistent with patients’ goals, or care with an expected health benefit (measured by quality of life and/or longevity) that exceeds the negative consequences of the care by a “sufficiently wide margin.” So in layman’s terms, what is appropriate for patient Sue may not be appropriate for patient Tommy, if Sue has specific goals and/or is expected to have a better quality of life as a result of the treatment.
In the mid-1980s, researchers at the RAND Corporation sought to develop a method for determining the appropriateness of care. The RAND/UCLA Appropriateness Method, as it came to be known, assessed the appropriateness of particular procedures, such as coronary angiography, based on expected health benefits and harms. The method sought to combine the best available scientific evidence with the collective judgment of experts.
Evidence-Based Medicine
The standard definition of evidence-based medicine is the application of current best evidence, as determined by valid research, in clinical decision-making. Health care professionals tend to agree that experimental studies and randomized controlled trials yield the strongest evidence for clinical care, but observational studies can also inform these decisions. Some groups believe evidence-based care includes more than the best existing evidence: it should also incorporate clinical expertise and patients’ values and expectations.
Like its sister term medical necessity, the term evidence-based medicine has always been fraught with political controversy. Archie Cochrane, a staunch supporter of randomized controlled trials, first introduced the concept in his 1971 monograph “Effectiveness and Efficiency.” Evidence-based medicine gained prominence in the late 1980s and early 1990s. In 1989, Congress created the Agency for Health Care Policy and Research with a mandate to produce evidence-based clinical-practice guidelines to help physicians sort through conflicting data on the treatment of low back pain and other common conditions. The agency’s findings proved controversial; in subsequent years, others, including professional societies, took a more central role in defining clinical best practice.
But once the controversy started, it never really stopped. Remember the huge debate that erupted over the creation of the Patient-Centered Outcomes Research Institute by the Affordable Care Act? Some critics argued that this new institute, created to support comparative clinical effectiveness research, would lead to health care rationing, and even death panels. If history teaches us anything about health care policy, it is that invoking death is an effective way to turn the American public against an idea.
Value-Based Care
While more of a cousin than a sister to the terms discussed above, value-based care is still closely related to effectiveness in health care. The commonly used definition of value is patient health outcomes per dollar spent. Ideally, this means services that improve health at a reasonable cost. In principle, when value improves, providers, patients, payers, and suppliers all reap the rewards. Patients get safe, appropriate, and effective care at an affordable rate with dependable results. Health systems, in turn, combine rigorous, evidence-based medicine with proven treatments that reflect the wishes and preferences of those they care for.
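For readers who like to see that shorthand written out, the definition amounts to a simple ratio (a rough sketch rather than a formal measure; how to quantify “outcomes” and which costs to count are themselves part of the debate):

\[
\text{Value} = \frac{\text{health outcomes achieved for the patient}}{\text{dollars spent to achieve them}}
\]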
Of course, how we conceptualize and define all the terms discussed above affects how we think about and define value-based care, and how we involve the appropriate stakeholders in creating a common, working definition.
No matter how you define it, the term is now part of the vernacular in the health policy community. In 2015, the U.S. Department of Health and Human Services announced its intention to shift Medicare payments from volume-based fee-for-service to payments based on value. This new focus on value-based payment, according to the Centers for Medicare and Medicaid Services, involves two shifts: (1) increasing accountability for both quality and total cost of care and (2) a greater focus on population health management as opposed to payment for specific services.
The Department’s goal is to tie 50 percent of traditional, fee-for-service Medicare payments to value by the end of 2018. To achieve this goal, CMS will use alternative payment models, including Accountable Care Organizations (ACOs) and bundled payment arrangements. In short, “if the care you provide isn’t effective, we are not going to pay you for it.” I’d wager this conversation will be at the forefront of the debate for the next several decades.
The Devil is in the Details
Thought leaders in health care have been trying to define and come to agreement on the terms discussed above for decades. The good news is that there is already a subtle improvement in the dialogue. Thirty years ago, some of the conversations taking place today would have been unthinkable — particularly discussions about unneeded tests and procedures, or refusing to pay physicians for providing inappropriate care.
Ultimately, people want care that makes them healthier, and whenever possible, services and treatments that are the most affordable. In short, care that is effective and high value. But the devil is in the details when it comes to defining what those terms mean.
To recap:
- Medical necessity is based on evidence, but what is medically necessary can depend on the specifics of the individual patient; likewise, appropriate care can vary based on individual patient needs and preferences.
- What constitutes evidence-based care may be based not only on experimental studies but also on longitudinal observation of patients.
- Discussions around value-based payment almost always involve paying for care that is necessary or appropriate, so a common understanding of those terms is needed to have a productive dialogue about what value means.
Given the number of lives—and resources—at stake, coming to a common understanding of value-based care requires our immediate attention. But to do that work, we need to understand what constitutes value, to whom, and under what circumstances. A healthy debate can begin by bringing together diverse stakeholders and by understanding that as with beauty, effectiveness lies in the eye of the beholder.
Note 1
The term medical necessity in contracts, legislation, and the law generally refers to populations or sub-populations of patients; in litigation it is often used in reference to an individual when there may have been a denial of coverage.
from Health Affairs Blog http://ift.tt/2aI0c21