President Biden’s Executive Order on Artificial Intelligence—Implications for Health Care Organizations

SHP's Michelle Mello and Stanford Medicine colleagues write in the journal JAMA that President Biden's recent executive order on artificial intelligence could have significant implications for health care organizations.

President Joe Biden issued an executive order on Oct. 30 addressing his administration’s security concerns surrounding the dramatic rise of artificial intelligence and proclaiming that the technology must be governed.

“Artificial intelligence (AI) holds extraordinary potential for both promise and peril. Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure,” the order reads. “At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.” 

Though much of the executive order relates to cybersecurity and to AI use in sectors outside health care, SHP’s Michelle Mello, a professor of health policy and professor of law, and her Stanford Medicine coauthors write in this JAMA viewpoint that the order includes several provisions that affect health care organizations.

“The order directs federal agencies to vigorously enforce existing laws to combat AI uses that constitute unfair or deceptive business practices, privacy violations, or discrimination,” write Mello, PhD, JD; Nigam Shah, MBBS, PhD; and Danton Char, MD. “Given concerns about unfair applications of AI tools in health care delivery and insurance coverage, health care facilities and insurers could find themselves in the bullseye of law-enforcement efforts. Other cross-sectoral activities, such as promoting competition among AI developers and supporting workers through AI-related workforce disruptions, will also touch health care.”

The commentary reflects ongoing research to develop an ethical review process for AI in medicine, supported by a grant from the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and Stanford Health Care.

Read the Full Viewpoint

Read More

The Safe Inclusion of Pediatric Data in AI-Driven Medical Research

AI algorithms are often trained on adult data, which can skew results when they are applied to children. A new perspective piece by SHP's Sherri Rose and several Stanford Medicine colleagues lays out an approach for safely including pediatric data.