Several weeks ago on Health Affairs Blog, we wrote an article about the role of poor research design in policy formulation. As part of that article, we praised RAND's 2013 Health Affairs article for reporting that their 2005 Health Affairs article overstated the benefits of health IT (HIT). That 2005 report, it emerges, was industry-sponsored, profoundly influential, and contributed to a trillion-dollar societal investment with unproven economic benefits.
In response, the vice president/director of RAND Health, Jeffrey Wasserman, commented that the "two articles were not that different." Unfortunately, this reinterpretation further confuses matters and undercuts RAND's laudable effort at correcting the research record.
Mr. Wasserman's comment is also at odds with the conclusions of the Congressional Budget Office (CBO, 2008), HIT experts (Black AD et al., 2011), and major news outlets like the New York Times (Abelson R & Creswell J, 2013). Our larger concern, however, is not the misleading interpretation of research, but how weak research designs and economic interests can fuel non-reproducible findings, which in turn can drive unwise and costly health care policy decisions.
RAND's 2005 article exemplifies these dangers; RAND's 2013 article effectively acknowledges this and helps bring into focus how to design research that will yield accurate results and aid in wise policymaking. By blurring the distinction between the two articles, Mr. Wasserman's comment undermines the effort to effectively determine what works and doesn't work in health care.
What Happened In The Case Of HIT?
In 2005, RAND researchers published an article in Health Affairs based on a study paid for by Cerner, GE Healthcare, and other vendors. Quoting from RAND's own press release, the article claimed:
Efficiency savings. "If most hospitals and doctors' offices adopted HIT, the potential efficiency savings for both inpatient and outpatient care could average over $77 billion per year."
In fact, extant data do not support such far-reaching claims, and the report made assumptions that proved to be untrue (Black AD et al., 2011; CBO, 2008; Soumerai SB & Koppel R, 2015; Soumerai et al., 2015).
Increased safety. "Increased safety results largely from the alerts and reminders generated by Computerized Physician Order Entry systems for medications…. If all hospitals had an HIT system including Computerized Physician Order Entry, around 200,000 adverse drug events could be eliminated each year…"
Actually, the vast majority of such alerts and reminders are ignored (Koppel R, 2011; Koppel R & Gordon S, 2012). Moreover, the 2005 researchers made no estimate of medical errors or harm associated with the use of HIT, and rejected all studies that did not show favorable outcomes from EHR implementations, a fact noted in the CBO evaluation (CBO, 2008; Schiff GD et al., 2015).
To be fair, RAND indicated in a much-ignored section of the article that their work assumed everyone who would benefit from preventive care would receive it, and that reaching that level of efficiency might take 15 years, an unattainable goal.
But RAND was not so subtle in their recommendations to encourage sales of HIT. Their bold headline read:
The Government Should Act Now: "Government intervention is needed to overcome market obstacles …."
In any event, our statement on this Blog focused on the use of that report by the Office of the National Coordinator for Health IT (ONC), not on the RAND report itself. (We incorrectly stated that the Congressional Budget Office touted the RAND report. In fact, the CBO correctly noted its dubious assumptions and inflated estimates.)
Now to the praiseworthy 2013 work: Kellermann and Jones's follow-up Health Affairs article reviewed the 2005 work and noted its heroic assumptions and inflated estimates of return on investment and savings. As stated in their 2013 article:
"A team of RAND Corporation researchers projected in 2005 that rapid adoption of health…IT could save the United States more than $81 billion annually. …The optimistic predictions of Hillestad and colleagues in their 2005 analysis of the potential benefits of health IT have not yet come to pass."
Indeed, as acknowledged in Kellermann and Jones's "The Unfulfilled Promises of Nationwide Health IT: How Can We Do Better?": 1) the cost of health care increased by $800 billion, rather than decreasing; and 2) it is now clear that even if all of the unrealistic assumptions in the 2005 article had miraculously come to pass, the optimistic findings would merely have been less overwhelmingly wrong.
Nevertheless, the ONC aggressively promoted the 2005 industry-supported article, and the HIT industry received many billions of dollars in sales of these systems. RAND's re-evaluation of the 2005 report eight years later was a valuable service, even if it appeared too late to stop the steamroller of HITECH and Meaningful Use regulations. Equally notable, it is rare for an organization to revisit a work that was so influential on national policy and investments.
The Importance Of Well-Designed Research For Policymakers
And this is the main issue in our original August 31 post: the use of flawed research as a central element in the passage of HITECH six years ago and, more generally, the influence of inadequate research design on essential health care decisions. As Soumerai et al. demonstrate in their CDC publication, we are obligated to understand how we determine what works and what doesn't. Research design is the most basic requirement for the credibility of findings.
This does not negate the importance of high-quality data collection and analysis; while essential, they are meaningless if the study design is so flawed that multiple uncontrollable biases threaten the reliability of the findings. To increase the trustworthiness of research on health policies, journals should consider rejecting the weakest designs, such as those excluded from the Cochrane Collaboration's systematic evidence reviews.
Our health care costs are now approaching a fifth of our GDP. Research is critical to knowing where to invest and where not to invest. Overstated or unsubstantiated findings are not helpful for translating evidence into effective policy, and they contribute to public, policymaker, and media perceptions (often accurate) of unreliable, flip-flopping research findings. Health care research must be based on the strongest feasible designs, not on protocols that affirm our biases, support hidden funding sources, or obscure wise policy choices.
The 2005 RAND study was an example of such flawed protocols; by correcting the research record, RAND's 2013 article moved us along the road to ensuring that our research supports good policy decisions rather than thwarting them. Rather than minimizing the differences between these two articles, as Mr. Wasserman does, we should seek to understand them and use them to guide our research designs going forward.
Editor's Note
Dr. Ross Koppel worked with the second author of the RAND 2013 article, Dr. Spencer Jones. Dr. Jones was a researcher and coauthor on a project supported by DHHS' Agency for Healthcare Research and Quality (AHRQ) that addressed unintended consequences of implementing EHRs. Dr. Koppel was paid a consulting fee via RAND for his work with Dr. Jones on this project.