How Health Systems and Policymakers Can Prioritize Patient Safety When Integrating AI


MedStar Health Research Institute is leading the way in advocating for guidelines and oversight that prioritize patient safety when using AI and machine learning in healthcare.

 

Artificial intelligence and machine learning are quickly becoming the next wave of technological advances in healthcare. With appropriate guidelines and oversight, these software programs hold great promise for improving our work and keeping patients safe.

As part of a research commentary published in JAMA Health Forum, my co-authors and I put forth recommendations for health systems and policymakers to promote patient safety in the context of AI. President Biden’s Executive Order in October 2023 laid an excellent foundation for the establishment of these guidelines, and our commentary expands on that work.


AI is not new to healthcare. In fact, one study found that 86% of healthcare organizations and life sciences companies already use AI, spending more than $50 million annually on these projects.


Considering its prevalence in our society and patient care toolkit, we must continue to build our knowledge of AI’s benefits and risks so that it aids everyone equally and does not harm anyone.


Help or harm: AI’s impact on patient safety.

In healthcare, AI falls into two primary categories—predictive AI and generative AI. Both can potentially have positive and negative impacts on patient safety. 


Predictive AI

We’re most familiar with predictive algorithms, which identify patterns in data to make predictions about the future. Predictive algorithms can usually be analyzed to understand if and when something goes wrong.
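To make this concrete, here is a minimal sketch of a simple predictive model built on synthetic data with scikit-learn. The data, features, and model choice are illustrative assumptions rather than a real clinical algorithm; the point is that a simple model’s learned weights can be inspected directly, which is part of what makes predictive AI easier to analyze.

```python
# A minimal sketch of a predictive model, using scikit-learn and synthetic
# data purely for illustration; real clinical models are far more complex.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "patient" features and outcomes (not real data)
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))

# Because this is a simple linear classifier, its learned weights can be
# inspected directly -- one reason predictive algorithms are easier to
# analyze when something goes wrong.
print("Feature weights:", model.coef_)
```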


Predictive AI gives us remarkable power to detect and predict. A radiology algorithm, for instance, is highly effective at identifying abnormalities in an X-ray or other diagnostic images. These technologies have been widely adopted across healthcare with tremendous benefit for patients. Working alongside these algorithms, radiologists have been able to significantly enhance their capabilities.


When predictive algorithms are inaccurate, however, they have the potential to cause harm. In the radiology example, a predictive algorithm can be affected by bias, making recommendations based on what it has inaccurately “learned” about a patient population.
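One common way to look for this kind of bias is to compare a model’s performance across patient subgroups. The sketch below is a minimal, hypothetical example of that check; the data and group labels are made up for illustration.

```python
# A minimal sketch of checking a predictive model for uneven performance
# across patient subgroups; the data and group labels are hypothetical.
import numpy as np
from sklearn.metrics import recall_score

# y_true: actual outcomes, y_pred: model predictions, group: demographic label
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Compare sensitivity (recall) by group; a large gap can signal that the
# model has "learned" something inaccurate about one patient population.
for g in np.unique(group):
    mask = group == g
    print(g, "sensitivity:", recall_score(y_true[mask], y_pred[mask]))
```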


Generative AI

The newer AI on the scene, generative AI like ChatGPT, continues to expand our capabilities. These algorithms are increasingly being integrated into and supporting healthcare processes, enabling users to quickly generate new content such as text and images. Because these technologies are significantly more complex, it is much more difficult for humans to analyze them.


One type of generative AI that could significantly impact clinical practice is the ambient digital scribe. This technology “listens” to conversations between patients and providers, securely and automatically generating notes for the provider. This allows the provider to focus more on the patient than on documentation and could lead to improved diagnosis and treatment.
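At a very high level, an ambient digital scribe chains a speech-to-text step with a generative summarization step. The sketch below shows only that general shape; transcribe_audio and summarize_to_note are hypothetical placeholders, not the interface of any specific product.

```python
# A highly simplified sketch of the general shape of an ambient digital
# scribe pipeline. The two helper functions are hypothetical placeholders
# standing in for a speech-to-text model and a generative model.

def transcribe_audio(audio_path: str) -> str:
    """Convert a recorded patient-provider conversation to text (placeholder)."""
    raise NotImplementedError("Plug in a speech-to-text model here.")

def summarize_to_note(transcript: str) -> str:
    """Draft a structured clinical note from the transcript (placeholder)."""
    raise NotImplementedError("Plug in a generative model here.")

def draft_visit_note(audio_path: str) -> str:
    transcript = transcribe_audio(audio_path)
    note = summarize_to_note(transcript)
    # The provider still reviews and signs the note; the draft is a starting
    # point, not the final record.
    return note
```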


Ambient digital scribes could improve patient safety and experience and reduce the documentation burden on providers. Of course, if the algorithm does not take accurate notes, it could harm patients. If care teams become too reliant on this technology, details could be missed, and bias could creep in. 


To ensure both predictive and generative AI achieve their promise of improving patient safety, it’s important that health system leaders and policymakers carefully consider how oversight of these technologies is managed.


Prioritizing patient safety in artificial intelligence applications.

We put forth three concrete recommendations to ensure AI technologies prioritize patient safety.

  1. Create guidelines for clinical use of AI that move toward safe and equitable use. These guidelines should be created and supported by federal agencies so all healthcare facilities can adopt the same standards. This process will take time, so healthcare organizations can get started by creating their own guidelines around safety and equity.
  2. Develop monitoring systems to assess patient safety risks. Unlike other technologies, AI can change over time, becoming inaccurate or introducing bias. To monitor AI accuracy, test cases should be created and used to test AI technologies on a regular basis, ensuring that the output they generate is accurate and free from bias (a simple sketch of such a check appears after this list).
  3. Ensure traceability is a part of all AI algorithms. Traceability is the ability to identify what went wrong in an algorithm if something does. Ideally, an agreed-upon set of standards in algorithm development would allow the elements of the technology to be inspected. While national standards may take time to develop, health systems can start now by insisting AI developers include underlying metadata to make traceability possible.
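As a rough illustration of recommendations 2 and 3, the sketch below shows how a health system might run a fixed set of curated test cases against a model on a schedule and record traceability metadata with each check. The model object, test cases, and record fields are hypothetical assumptions, not a prescribed standard.

```python
# A minimal sketch of running fixed test cases against a model and logging
# traceability metadata with each check; everything here is hypothetical.
import json
from datetime import datetime, timezone

TEST_CASES = [
    # (input features, expected prediction) -- curated and clinically reviewed
    ({"age": 67, "bp": 150}, 1),
    ({"age": 34, "bp": 110}, 0),
]

def run_safety_check(model, model_version: str) -> dict:
    results = [model.predict(case) == expected for case, expected in TEST_CASES]
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # traceability: which algorithm ran
        "test_cases_passed": sum(results),
        "test_cases_total": len(TEST_CASES),
    }
    # Persisting these records over time makes drift visible and lets teams
    # trace exactly which model version produced which results.
    print(json.dumps(record))
    return record
```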

Delivering safe, high-quality care.

Creating standards for patient safety in AI will take time, and technology continues to develop faster than policy can keep pace. We can’t rely on policy alone to keep patients safe. Thoughtful adoption of AI technologies will require healthcare leaders to identify the projects with the most benefit and least risk and implement those first.


At MedStar Health and MedStar Health Research Institute, our reputation for delivering the best, safest care means we’re already considering the risks and benefits of these new technologies and how to bring them to our patients safely. 

As we consider AI developers and technologies, we’re asking important questions about safety and traceability, ensuring we have methods in place to safely monitor any AI technologies we use in patient care.


Want more information about the MedStar Health Research Institute?

Discover how we’re innovating for tomorrow.
