
This article was written by Garrett Foresman, BS, and Kristen Miller, DrPH, MSPH, MSL, CPPS.
Our research, accepted for publication in the Journal of Participatory Medicine, provides insight into how patients feel about various artificial intelligence technologies.
Artificial intelligence (AI) technologies are already at work in healthcare, offering ways to improve diagnosis, treatment, and process efficiency. One report found that 90% of hospitals use AI for early diagnosis and remote monitoring. The technology can help providers streamline administrative tasks, analyze large volumes of data, and read imaging.
But how do patients feel about AI in their care?
In a collaborative study, which has been accepted for publication in the Journal of Participatory Medicine, we invited patients to discuss their feelings about AI in healthcare and what factors will be essential to consider as these technologies become part of our new routines.
At the Patient-Partnered Diagnostic Center of Excellence, co-designing our studies with patients is more than a method: It’s a mindset. We know healthcare is about more than doctors. We can improve the diagnostic process by including patients’ lived experiences in research. When it comes to AI, patient insights can help us build trust and ensure new technologies work for everyone.
Interactive activities and engaged focus groups.
Our work is grounded in one simple idea: Patients are essential diagnostic team members. By partnering with patients and families, clinicians, health systems, and researchers, we can improve diagnosis and safety.
The research included 17 participants (4 male, 13 female) who were patients and family members ranging in age from 18 to 80. They reported a variety of health backgrounds, including chronic and complex conditions.
In focus groups, participants answered questions and engaged in interactive activities designed to explore their key concerns, priorities, and desires for AI in healthcare. We examined five AI scenarios, some of which are in development and some already in use at hospitals:
- Portal messages: After a routine visit with recommended lab work, a patient accesses the portal and finds a chatbot that uses AI to review all records and offer opinions and perspectives.
- Radiology review: A radiologist initially sees nothing on a CT scan for severe back pain, but AI identifies a herniated disc, which the radiologist then confirms.
- Digital scribe: Before a routine checkup, the doctor asks permission to use an AI-based app on their phone to listen and document notes based on the visit.
- Virtual human: A physician diagnoses diabetes after a routine blood count and uses an AI-generated virtual assistant with a human appearance to communicate the diagnosis to the patient via telehealth (without the physician present).
- Decision support: During a routine wellness visit, an AI system recommends an HIV screening based on the interpreted medical and social history, prompting the clinician to offer the test.
After the focus groups, we used thematic analysis of the transcripts and notes to develop our results. In thematic analysis, we organize and interpret the data to uncover ideas that came up repeatedly during discussions. This process allows us to identify the common themes among our participants and generate results that can inform our work in the future.
Related reading: How AI Can Make Clinical Trials More Efficient, Accessible, and Unbiased.
Patient comfort with AI in diagnosis.
Overall, participants were less comfortable when humans were less involved, but they generally expressed comfort with AI in the diagnostic process when their concerns and expectations are met.
On a scale of one to five, with five being the most comfortable, we calculated participants’ average comfort level with each scenario:
- Digital scribe: 4.24
- Radiology review: 4.00
- Decision support: 3.94
- Portal messages: 3.68
- Virtual human: 1.68
When we analyzed the data for each scenario, we identified five key themes that highlighted participants’ perspectives about how we use and talk about AI. These insights include:
Validation:
- Participants said their trust in AI depends on rigorous processes that make safety, accuracy, and reliability priorities.
- They raised questions about how AI systems are developed, trained, and evaluated to meet standards of high-quality healthcare.
Usability:
- Participants discussed the role of AI in making diagnosis and communication more effective, efficient, and satisfying.
- They noted that AI tools should support providers rather than replace human decision-making or interactions.
Transparency:
- Participants identified transparency as an important factor in building trust in AI tools.
- They emphasized the need to understand AI’s role in their care, its capabilities, and its limitations in decision-making and communication.
Opportunities:
- Despite their concerns, participants expressed optimism and excitement about the potential for AI to improve patient engagement, understanding, and comfort.
- Many viewed AI as a valuable tool.
Privacy:
- Participants had concerns about data privacy and security in all scenarios.
- They worried about how their information would be stored, accessed, and used.
These themes help us understand what’s most important to patients so we can work to build AI systems, communication plans, and guidelines that address these concerns.
Related reading: How Health Systems and Policymakers Can Prioritize Patient Safety When Integrating AI.
Next steps: Guidelines for AI development and communication.
This research lays the foundation for more specific guidelines about keeping patient perspectives at the center of AI implementation efforts. How and when we talk with patients about AI can make a big difference in their comfort with new technologies. When these conversations are grounded in respect, transparency, and equity, we can best address patient concerns now and in the future.
Similarly, providers can expect to hear questions from patients about how and when AI is used. Health systems may be able to work with technology companies to address patient concerns about validation, transparency, privacy, and more.
Future studies will begin developing national guidelines for best practices in AI implementation. And we’ll partner with patients to build that research, too.
Related reading: MedStar Health Researchers Partner with Patients to Improve Outcomes in Diagnostic Safety.
Partnering with patients to improve diagnosis.
The Patient-Partnered Diagnostic Center of Excellence is a team of people from many disciplines: health services researchers, human factors engineers, clinical care providers, diagnosticians, informaticists, data scientists, and, of course, patients.
Co-led by investigators from MedStar Health, the University of Toronto, Baylor College of Medicine, and Mothers Against Medical Error, our team is supported by a Patient Steering Committee and a Scientific Advisory Committee. Together, we’re working to reshape how diagnosis is studied, practiced, and improved.
If you’re a patient or a family member interested in participating in patient-centered research on AI or other diagnostic topics, please reach out. Email Dr. Miller or visit our website for more information and to get involved.