A new set of rankings compares 3282 US hospitals on metrics such as commitment to equity, inclusion, and community health, along with more traditional measures of patient outcomes, safety, and patient satisfaction.
The Lown Institute, a nonpartisan think tank based in Brookline, Massachusetts, today released the first Lown Institute Hospitals Index, which ranks the hospitals on 42 measures.
Vikas Saini, MD, president of the Lown Institute, told Medscape Medical News that hospitals have always needed to be essential partners in their communities, a need made acutely apparent by the COVID-19 crisis. These rankings, he said, provide a more comprehensive means of judging hospitals’ value.
“Half the ranking is weighted on traditional things like mortality and readmissions,” he said. But the combined look at outcomes, value, and civic leadership shows how “to go from being really good hospitals to being great — and that means great for everybody,” he explained.
Saini said they found that many world-renowned hospitals typically seen at the top of hospital rankings do not fare well in the Lown rankings, largely because “they struggle to advance equity in their communities.”
Conversely, he said, many of the hospitals that do well with advancing equity are challenged from the outcomes end because “many of those patients have a reduced life expectancy from the get-go, before they even walk through the door.”
The Top 10
Number one on the Lown list was JPS Health Network in Fort Worth, Texas. Raters gave it an A+ for civic leadership and As for value of care and patient outcomes. Still, its performance was mixed, the report shows: it scored 100% for “extent of hospital investment in community health” and was in the 90th percentile for avoiding overuse, for instance, but only 51% (three of five stars) on the ratio of executive compensation to worker wages.
Rounding out the top 10, in order, were: Marshall Medical Center, Placerville, California; UPMC McKeesport in Pennsylvania; Seton Northwest Hospital, Austin, Texas; Mercy Health-West Hospital, Cincinnati, Ohio; Wellstar Douglas Hospital, Douglasville, Georgia; Providence Portland Medical Center in Oregon; Health Alliance-Clinton Hospital, Leominster, Massachusetts; Memorial Hermann Texas Medical Center, Houston; and Parkland Health and Hospital System, Dallas, Texas.
The index measures inclusivity as the degree to which a hospital is caring for patients of color and patients with lower levels of income or education.
Lown leaders write in a news release that “nonprofit hospitals get billions of dollars in tax breaks every year but vary widely in how much they actually give in community benefits. For example, the Mayo Clinic Hospital in Rochester, Minnesota, benefits from tens of millions in tax breaks each year, but spent less than 0.05 percent of its total expenses on charity care in 2016, landing them near the bottom of the Lown community benefit ranking.”
To determine value of care, the Lown Institute assessed hospitals based on estimates of their rates of overuse, such as head imaging for simple headaches. To determine hospitals’ patient outcomes, the Lown Institute used a validated algorithm called the Risk Stratification Index.
The Index draws on data from many sources, including 100% Medicare claims datasets (MEDPAR and outpatient); Internal Revenue Service data; the Healthcare Cost Report Information System, administered by the Centers for Medicare & Medicaid Services; Securities and Exchange Commission filings; Bureau of Labor Statistics data; and other databases.
American Hospital Association Criticizes Report
Nancy Foster, vice president for quality and patient safety policy at the American Hospital Association, said in a statement provided to Medscape Medical News that the report “uses confusing definitions and makes sweeping conclusions about hospital performance based on an incomplete set of data sources.”
“For example,” Foster said, “most of the index’s assessment of hospital quality and value is based on billing data for Medicare patients. Such data represent only a portion of hospitals’ patient population, and lack important clinical details needed for accurate performance calculation.
“Furthermore, the calculations on which its ‘scores’ are based are far from transparent, lacking important details that would enable independent verification,” she wrote.
Foster added, “The report uses a hodgepodge of composite score, ranking, star ratings, and letter grades that will, at best, confuse consumers and likely mislead them.”
Report Not Intended to Help Patients Choose Care
Saini told Medscape Medical News that the report’s intent was never to help patients make decisions about where to get care.
The purpose is to start a conversation and for hospital and health system leaders, insurers, employers, regulators, and researchers to look at hospitals’ value to their communities in a different way, he said.
Saini added, “It’s also for community leaders — not for where they get their hip done — but for what kind of healthcare matters for them in their communities.”
The AHA also took issue with the report’s definition of “community benefit.”
“The report undervalues the vital contribution hospitals make to medical research and professional training,” Foster said. “The report also fails to recognize that hospitals further subsidize care for low-income and underserved individuals due to chronic underpayments from Medicaid and similar programs.”
Karl Bilimoria, MD, director of the Surgical Outcomes and Quality Improvement Center at Northwestern Medicine in Chicago, Illinois, said he questions the value of the Lown rankings.
He told Medscape Medical News, “I’m not sure there is anything particularly novel here,” though he acknowledged civic leadership was a category he had not seen before.
He said groups that rate hospitals tend to use the same kinds of data, rearranged in a different way.
“If you look at the major rating systems in the country, it turns out there are about 1000 ‘Top 100’ hospitals in the US,” he said.
He added, “Referring doctors have a good sense of where the specialists and novel treatments and technology and experts are, and it probably wouldn’t be the case for all of these [hospitals].”
Bilimoria said that, without peer review, hospital ranking systems lack thorough evaluation of the data, measures, and methods used to create them, and should therefore be viewed with caution.
Funding for the rankings comes exclusively from the Lown Institute. Saini, Foster, and Bilimoria have disclosed no relevant financial relationships.
Marcia Frellick is a freelance journalist based in Chicago. She has previously written for the Chicago Tribune and Nurse.com and was an editor at the Chicago Sun-Times, the Cincinnati Enquirer, and the St. Cloud (Minnesota) Times. Follow her on Twitter at @mfrellick.