Exploring Liability Risks of Using AI Tools in Patient Care

Research led by SHP’s Michelle Mello provides some clarity regarding liability for the AI technologies being rapidly introduced into health care. She and her co-author analyzed more than 800 tort cases involving AI and conventional software, in health care and non-health-care contexts, to see how decisions related to AI and liability might play out in the courts.

Last year, large language models like ChatGPT were widely released for the first time, and within a few months, similar models were already being incorporated into medical record software.

Medicine rarely incorporates cutting-edge technology so rapidly, and the integration of AI tools makes many clinicians anxious. As the health care industry grapples with the best way to use these technologies to improve care, many clinicians may wonder what happens if patients are harmed, and who should be held liable.

Research led by Michelle Mello, JD, PhD, professor of law and health policy, is designed to provide some clarity regarding liability. AI software has not yet appeared in legal decisions with much frequency, so Mello and her co-author, JD-PhD candidate Neel Guha, analyzed more than 800 tort cases involving both AI and conventional software in health care and non-health-care contexts to see how decisions related to AI and liability might play out in the courts.

An article about their research was published Jan. 18 in the New England Journal of Medicine. Mello discusses their findings and what they mean for health care providers in this Q&A.

How did you approach this research?

We investigated the extent to which litigation over AI-related personal injuries is already appearing in judicial decisions to understand the extent of liability risk. The signals that emerge from the courts specifically related to AI are pretty faint, but there are enough cases related to non-AI-enabled software causing injury to give us a sense of how courts are likely to approach these kinds of claims in the future.

That's important because lawyers tend to give advice that's very conservative. We didn't find that lawyers are advising clients not to use AI in medical settings, but we found presentation materials suggesting they are strongly warning clients about the liability risks of using AI in general. In my opinion, this could lead to overly conservative decision making, such as not doing things that could really help patients.

Read Full Q&A in Scope

Read More

Commentary

ChatGPT and Physicians’ Malpractice Risk

In this JAMA Forum perspective, SHP's Michelle Mello, professor of health policy and of law, and Neel Guha, a Stanford Law School student and PhD candidate in computer science, write that medical advice from AI chatbots is not yet highly accurate, so physicians should use these systems only to supplement more traditional forms of medical guidance.
Commentary

President Biden’s Executive Order on Artificial Intelligence—Implications for Health Care Organizations

SHP's Michelle Mello and Stanford Medicine colleagues write in the journal JAMA that President Biden's recent executive order on artificial intelligence could have significant implications for health care organizations.
Commentary

Vaccination Mandates—An Old Public Health Tool Faces New Challenges

Michelle Mello and colleagues write in this JAMA Network Viewpoint that civic values were eroded during the COVID-19 pandemic, creating a groundswell of resistance to vaccination mandates, which have long been a bedrock of U.S. public health policy.