Using data to reduce preventable harm
We’ve recently released our new hospital harm module and have begun working with hospitals to refine the tool and get feedback to guide future functionality. Here is some background and insight into our initial findings.
According to the Canadian Patient Safety Institute (CPSI), approximately 1 in every 18 hospital stays in Canada involves at least one occurrence of preventable harm, resulting in an average of four extra patient days per occurrence. The World Health Organization estimates that between 20% and 40% of all health spending is wasted due to poor-quality care.
This topic has received increased interest of late because of media attention and because patient safety data is now tracked and published by the CPSI and the Canadian Institute for Health Information (CIHI).
The CPSI has defined 31 separate clinical harm groups across four major categories.
WHY WE FOCUSED ON HARM ANALYSIS AND OUR APPROACH
Currently, harm data is released to hospitals on an annual basis and there are no self-serve reporting tools. Hospital executives are often frustrated that, while the data indicates there is room for improvement, they don’t have the details required to implement an effective harm reduction strategy. Physicians often push back, citing issues of inadequate risk adjustment and data quality.
In addition to the patient safety and experience considerations, hospital revenue is also negatively affected by harm. If a patient stays in hospital longer when harm occurs (an average of 4 days based on the CPSI/CIHI studies), the hospital incurs costs for those additional days. However, there is typically no extra case weight allocated for that stay to increase the hospital funding allocation (HBAM) and compensate for the additional cost of care. This is one of the core incentives in the new funding methodology: reducing length of stay for typical patients results in less cost for the hospital but does not negatively impact revenue. It is important to understand that the reverse is also true: marginal increases in length of stay lead to more cost but not more revenue. We’ve found this impact has largely been misunderstood by hospitals so it is something that we’ve tried to highlight within our software.
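The funding asymmetry described above can be made concrete with a small sketch. The dollar figure below is hypothetical, chosen only for illustration; the four-day average comes from the CPSI/CIHI studies cited earlier.

```python
# Illustrative sketch only: the per-day cost is a hypothetical figure,
# not an actual HBAM rate.
COST_PER_DAY = 1_100          # assumed average marginal cost of one inpatient day
EXTRA_DAYS_PER_HARM = 4       # average extra stay per harm occurrence (CPSI/CIHI)

def unfunded_cost_of_harm(harm_occurrences: int) -> int:
    """Extra cost a hospital absorbs when harm extends patient stays.

    Under the funding methodology, the case weight (and therefore the
    revenue allocation) for these stays does not increase, so the
    marginal cost of the extra days is unfunded.
    """
    return harm_occurrences * EXTRA_DAYS_PER_HARM * COST_PER_DAY

# 250 harm occurrences in a year -> $1,100,000 of unfunded cost
print(unfunded_cost_of_harm(250))
```

The same arithmetic runs in reverse: every avoided harm occurrence removes roughly four days of unfunded cost without reducing revenue, which is the incentive hospitals often misunderstand.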
Our primary goal was to provide a comprehensive, yet easy-to-use, analytical framework that supplements the work done by CPSI and CIHI. Specifically, we wanted to enable hospitals to dig deeper into their harm data in a much timelier manner. By reusing the abstracted inpatient data (DAD) already regularly imported into our software at our partner hospitals across the province, we were able to automatically identify instances of harm, calculate the risk-adjusted harm values, and provide detailed analytics based on many factors including clinical service, patient group (CMG/HIG), unit and physician. We also integrated our visit-level analysis screens to enable users to review individual cases in order to get a better understanding of the patients involved with the harm occurrences.
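The dimensional analysis described above can be sketched as follows. The `Abstract` record and the `harm_flagged` field are hypothetical simplifications; a real DAD abstract carries many more fields, and harm identification itself follows the CPSI coding logic rather than a pre-computed flag.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical, simplified DAD abstract used only for this sketch.
@dataclass
class Abstract:
    visit_id: str
    service: str          # clinical service, e.g. "General Medicine"
    physician: str        # most responsible provider
    harm_flagged: bool    # True if any CPSI harm group was identified

def harm_rates_by(dimension: str, abstracts: list) -> dict:
    """Observed harm rate per value of a dimension (service, physician, ...)."""
    totals, harms = defaultdict(int), defaultdict(int)
    for a in abstracts:
        key = getattr(a, dimension)
        totals[key] += 1
        harms[key] += a.harm_flagged   # bool counts as 0 or 1
    return {k: harms[k] / totals[k] for k in totals}
```

Calling `harm_rates_by("service", abstracts)` or `harm_rates_by("physician", abstracts)` gives the per-dimension observed rates; in practice each rate would then be risk-adjusted before comparison, as discussed below.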
The hypothesis was that this information would provide insight into process issues with specific patient groups, units, or physicians, and that these insights would allow targeted quality improvement initiatives resulting in improved care paths and new patient safety protocols.
WHAT WE FOUND
Working with our partner hospitals, we found that, in addition to pointing them toward existing, internally identified problem areas, the analytics also highlighted issues that were not previously known. For the areas that administrators and clinical leaders were already aware of, the tool provided a more nuanced understanding of their performance.
Hospitals were also surprised to learn about the number of hospital harm cases that had a very low likelihood of harm at admission (a low Charlson score), which has led to targeted reviews of documentation and coding practices in specific areas. This contributed to increased legitimacy when engaging physicians in harm reduction strategies by addressing their concerns about comorbidities, risk adjustment, and data quality. This also highlighted the equal importance of accurately capturing the patient complexity for both harm and non-harm cases.
The main goal of our recent work was to evaluate whether the analytics provide sufficient value to hospitals as part of an ongoing harm reduction strategy, despite certain shortcomings. Many limitations of the CPSI framework are outlined in this excellent document. Here are interesting highlights from that document along with some of our findings based on the hospital feedback of our analytics tools:
- Risk calculations need to evolve. As with any quality improvement initiative, baselining performance is very important in order to understand if you’re getting better or worse. Tracking risk-adjusted harm rates is crucial because higher complexity patients are at higher risk of harm and this variability must be factored in to normalize analysis over time. The risk calculations need to be refined further to properly account for these additional factors.
- Timeliness is very important. It is very challenging to improve performance if you are only able to monitor your results on an annual basis. With the timelier data that we work with, hospitals may catch issues before they cause further harm. A second issue is that, because of the limitations of the data set used by CPSI and CIHI (the DAD), harm incidents are not reported until discharge, and it is not possible to link an occurrence of harm to a date (e.g. when a fall occurred, or when a hospital-acquired infection was detected during a patient’s stay). For longer admissions, this makes it difficult to evaluate the effectiveness of associated QI initiatives.
- Clinician-level analysis is valuable but sensitive. We’ve heard that our harm analysis tool has identified specific physicians who were already being reviewed, but we’ve found research on the topic of physician-level quality reporting to be limited. There are also sensitive issues associated with the perception of assigning blame to specific physicians. We have been working on a physician scorecard that we think is extremely valuable, but it has been met with some trepidation. Some hospitals have adopted a similar approach and have found value. We believe that as more hospitals adopt the principles of high reliability organizations, including fostering a just culture and a focus on continuous learning, physician-level reporting will be adopted as an enabler of improvement.
- Clinician-level analysis is challenging. When it comes to physician-level reporting, care must be taken when analyzing results. For instance, some physicians have more complex cases and the current risk adjustment calculations do not always account for this. Finally, if there truly is an issue with an individual clinician as opposed to a systemic issue, it is difficult to attribute instances of harm to specific clinicians using the DAD abstract records, particularly for non-surgical adverse events.
- Unit-level reporting is valuable. There has been significant research (here, here, and here) indicating that unit-based incident reporting and quality improvement targeting adds greater value than traditional hospital-wide or national reporting systems. We believe that our analytics provides value here beyond the publicly reported statistics.
- Hospital peer comparison may help. Peer comparison may be of use partly because the risk calculations aren’t comprehensive. External benchmarking against peer hospitals could be an interesting way to judge how well a hospital is doing by getting a sense of how often certain types of harm occur elsewhere, particularly in harm areas that occur as part of an expected care path. Unfortunately, peer comparison is not currently possible using the distributed provincial case costing (OCCI) data because the data elements necessary to calculate harm are not included. Additionally, not all hospitals have access to the OCCI benchmark dataset.
- The underlying data is not complete. We have seen, as part of our data quality work for hospital funding, that under-coding of patient abstracts is a large issue, particularly for smaller hospitals with limited coding and Decision Support resources. This problem is not solved by dedicated incident reporting systems either, where most harm occurrences may go uncaptured. Using coded abstracts to track harm may miss patient complexities (comorbidities) that affect risk adjustments, or miss the adverse events themselves. However, we believe that the data that are present are sufficient to provide value.
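The risk-adjusted tracking described in the first point above is often expressed as an observed-over-expected ratio. The sketch below assumes a risk model that assigns each case an expected probability of harm; the model itself (and its calibration) is exactly the part the CPSI framework still needs to refine.

```python
# Hypothetical sketch of a risk-adjusted harm ratio: observed harm cases
# divided by the sum of per-case expected harm probabilities produced by
# some risk model (e.g. one informed by Charlson comorbidity scores).
def risk_adjusted_ratio(observed_harms: int, expected_probs: list) -> float:
    """O/E ratio: a value above 1.0 means more harm occurred than the
    case mix predicts; below 1.0 means less."""
    expected = sum(expected_probs)
    if expected <= 0:
        raise ValueError("expected harm must be positive")
    return observed_harms / expected
```

Baselining against this ratio, rather than the raw rate, lets a hospital compare periods with different case-mix complexity; the quality of the comparison is only as good as the expected probabilities feeding it.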
There’s growing evidence that fostering a culture of safety is an essential part of an effective strategy to reduce harm. Proactively monitoring and responding to harm promotes this culture of safety in hospitals and leads to the adoption of evidence-informed practices that improve quality of care. As with many process improvement strategies, the first steps for a hospital are the most difficult. A simple, yet effective, starting point provides the best chance at long-term success and maturity.
While the CPSI methodology to identify harm is not perfect, it certainly acts as an effective “Geiger counter” for detecting specific areas in the hospital that require attention. The current framework is a step in the right direction for patient safety and can form a solid foundation for a hospital. We will continue to evolve our analytical tools as the methodology to identify harm is refined.
This article was written from the perspective of analytics using the standardized harm identification methodology and calculations developed by the Canadian Patient Safety Institute (CPSI). We are neither clinicians nor patient safety experts and are not providing commentary on the harm classification methodology itself.