
Wednesday, May 11, 2016

Public Reporting Measures Fail to Describe the True Safety of Hospitals

Study finds only one measure out of 21 to be valid
Release Date: May 10, 2016
 
Common measures used by government agencies and public rankings to rate the safety of hospitals do not accurately capture the quality of care provided, new research from the Johns Hopkins Armstrong Institute for Patient Safety and Quality suggests.
 
The study, published in the journal Medical Care, found that only one of 21 measures met the scientific criteria for being considered a true indicator of hospital safety. The measures evaluated in the study are used by several public rating systems, including U.S. News & World Report’s Best Hospitals, Leapfrog’s Hospital Safety Score, and the Centers for Medicare and Medicaid Services’ (CMS) Star Ratings. The Johns Hopkins researchers say their findings suggest further analysis of these measures is needed to ensure the information provided to patients about hospitals informs, rather than misguides, their decisions about where to seek care.
 
“These measures have the ability to misinform patients, misclassify hospitals, misapply financial data and cause unwarranted reputational harm to hospitals,” says Bradford Winters, M.D., Ph.D., associate professor of anesthesiology and critical care medicine at Johns Hopkins and lead study author. “If the measures don’t hold up to the latest science, then we need to re-evaluate whether we should be using them to compare hospitals.”
 
Hospitals have publicly reported their performance on quality-of-care measures for years in an effort to meet the growing demand for transparency in health care. Several report their performance using measures created by the Agency for Healthcare Research and Quality (AHRQ) and CMS more than 10 years ago. Known as patient safety indicators (PSIs) and hospital-acquired conditions (HACs), these measures draw on billing data entered by hospital administrators rather than clinical data obtained from patient medical records. The result can be wide differences in how medical errors are coded from one hospital to another.
 
“The variation in coding severely limits our ability to count safety events and draw conclusions about the quality of care between hospitals,” says Peter Pronovost, M.D., Ph.D., another study author and director of the Johns Hopkins Armstrong Institute for Patient Safety and Quality. “Patients should have measures that reflect how well we care for patients, not how well we code that care.”
The researchers analyzed 19 studies conducted between 1990 and 2015 that directly addressed the validity of the HAC and PSI measures, along with information from the CMS, AHRQ and Maryland Health Services Cost Review Commission websites. Errors listed in medical records were compared with the billing codes found in administrative databases. If the medical record and the administrative database agreed at least 80 percent of the time, the measure was considered a realistic portrayal of hospital performance.
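To make that 80 percent threshold concrete, the short sketch below (in Python, with invented case identifiers and a hypothetical function name, not code from the study) shows the kind of agreement calculation being described: events coded in billing data are checked against events confirmed by chart review, and a measure passes only if the confirmed share reaches 80 percent.

def agreement_rate(billing_cases, chart_cases):
    """Fraction of billing-coded safety events confirmed by chart review."""
    if not billing_cases:
        return 0.0
    confirmed = sum(1 for case in billing_cases if case in chart_cases)
    return confirmed / len(billing_cases)

# Toy data: 10 events coded in billing records, 7 confirmed in the charts.
billing = {f"case{i}" for i in range(10)}
charts = {f"case{i}" for i in range(7)}

rate = agreement_rate(billing, charts)
print(f"Agreement: {rate:.0%} -> considered valid? {rate >= 0.80}")  # 70% -> False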
 
Of the 21 measures developed by the AHRQ and CMS, 16 had insufficient data and could not be evaluated for validity. Five measures contained enough information to be included in the analysis, and only one, PSI 15, which captures accidental punctures or lacerations sustained during surgery, met the researchers’ criteria to be considered valid.
 
“Patients and payers deserve valid measures of the quality and safety of care,” says Pronovost, who is also Johns Hopkins Medicine’s senior vice president for patient safety and quality. “Despite their broad use in pay for performance and public reporting, these measures no longer represent the gold standard for quality, and their continued use should be reconsidered.”
The researchers say they hope their work will lead to reform and will encourage public rating systems to use measures based on clinical rather than billing data.
 
Pronovost recently outlined additional fixes the rating community could implement in a commentary published in the April 2016 issue of JAMA. Designating a separate reporting entity to establish standards for data collection and making funds available for systems engineering research were listed as possible starting points by Pronovost and his co-author, Ashish Jha of Harvard.
This work was supported by internal funds from the Johns Hopkins Armstrong Institute for Patient Safety and Quality. Established in 2011, the Armstrong Institute works to improve clinical outcomes while reducing waste in health care delivery both at Johns Hopkins and around the world. Led by Pronovost, the institute develops and tests solutions in safety and quality improvement that can then be shared at the regional, national and global levels. Using a scientific approach to improvement, the Armstrong Institute employs robust measures that can be broadly disseminated and sustained.
 
 
 
 
