Originally emerging in the 1940s, artificial intelligence (AI) has enveloped the globe on the tail end of the COVID-19 pandemic. Its meteoric rise, triggered in part by the public release of ChatGPT and other technologies, promises new applications revolutionizing health care and public health services.

But at what cost? As repeatedly observed with the introduction of prior technologies, the promise of astronomical benefits carries potentially catastrophic risks. On May 30, 2023, tech leaders and others issued a joint statement comparing the risks of unfettered AI to global pandemic and nuclear threats. Assuring accountability and control of AI in the interests of public health and safety is a premier concern among law- and policy-makers. Yet efforts to regulate AI to date are diffuse. As explored in this commentary, the legal race is on to curb AI technologies before significant, adverse public health impacts arise.

Rise of AI

AI’s “boom” traces back to the introduction in 1943 of a mathematical model for a neural network focused on real-life language and communication patterns. Modern AI centers on machine learning (ML), which uses algorithms to interpret data and mimic human comprehension and abilities. Early examples of AI and ML include robotic assembly-line work and IBM’s “Deep Blue” chess program, which in 1997 proved capable of beating Grandmasters. By the early 2000s, ML capabilities accelerated through the use of “big data” (i.e., the processing of large, complex datasets), fueling new applications, including facial recognition software, Siri (Apple), Alexa (Amazon), and Waymo autonomous vehicles.

COVID-19 proved a catalyst for AI’s recent escalation. As global trade and business flatlined at the pandemic’s inception, new technologies were explored to address prevalent needs. Massive investment in and adoption of AI technologies followed rapidly: “55% of companies reported accelerating their AI strategy in 2020 due to COVID.” Public releases, including the global introduction of ChatGPT in November 2022, heightened consumers’ exposure to and use of AI technologies. In January 2023 alone, an estimated 100 million people accessed ChatGPT and other AI sites providing narrative responses to submitted questions.

Projected Uses of AI

The utility of AI is being explored across industries, professional settings, and classrooms globally, with mixed results. AI technologies are by no means a replacement for human intellect. Their tendency to “hallucinate” (i.e., fill gaps in generated responses with fictional information) has already resulted in nationally publicized examples of AI’s limits and pitfalls.

However, the capacity of ML to contribute to human endeavors under close assessment is especially pronounced in health care, pharmaceutical, and public health settings. Uses of “augmented intelligence” supplement human capabilities in clinical health care by interpreting data or medical tests and completing administrative tasks. ML technologies, for example, can extract data from images to help clinicians detect and examine tumors. As the U.S. Government Accountability Office observed in September 2022, “[a]daptive ML diagnostic technologies…may provide more accurate diagnoses or information by incorporating additional population or individual data.”

Pharmaceutical manufacturers use AI to help create, monitor, and assess specific drugs. Earlier this year, the U.S. Food and Drug Administration (FDA) predicted that increased efficiencies in drug development and manufacturing tied to AI advancements will result in greater access to medications, improved drug safety, and new drug classes. FDA’s Data Modernization Plan purports to update the agency’s time-consuming regulatory processes to accommodate novel technologies. Public health officials may wield ML to amass, interpret, and analyze data; identify emerging conditions, risks, or trends; and project individuals’ likelihood of developing certain illnesses (e.g., pancreatic cancer, Alzheimer’s disease, hip dysplasia), supporting preventive care.

Emerging Risks

As with other emerging technologies, the rewards of AI are coupled with numerous risks to election integrity, financial and labor markets, and, notably, human health. Concerns are already surfacing that AI tools may generate phony photographs, sound bites, and videos that mislead voters in elections. The economic impacts of AI are exemplified by the circulation on social media in May 2023 of a fake image of a building explosion near the Pentagon, which led to an immediate dip in the U.S. stock market.

Direct threats of AI to human health are disquieting. Medical misdiagnoses, harmful health messaging (e.g., chatbot-generated dieting information shared on an eating disorder helpline), and mental health risks (e.g., emotional burdens of deciphering reality from AI illusions) are illustrative. In May 2023, the World Health Organization (WHO) expressed unease over non-consensual uses of patient-specific data to train AI tools and concomitant racial biases and discriminatory impacts. As NPR reported in June 2023, a 2019 study found that a health care algorithm effectively required Black patients referred for additional care to be sicker than White patients. These findings were attributable to Black patients historically spending less on health care due to lack of access. The “Catch-22” of the study is that AI would continue to perpetuate poor health outcomes among Black Americans.

Industry Alarms Over Catastrophic Risks

AI’s emerging risks rose to “profound” and catastrophic “extinction” levels with the circulation of two open letters on March 22 and May 30, 2023. The March letter — signed by over 33,000 individuals — suggests AI might circulate propaganda, compete with and replace humans, and assume “control of our civilization.” Signatories called for a 6-month AI training pause and protocols to ensure safety “beyond a reasonable doubt.” Their concerns echoed earlier trepidations proffered in 2021 by the International Committee of the Red Cross that new warfare technologies may lead to catastrophic decision-making or mistakes, especially as AI performs tasks at speeds exceeding human capabilities to intervene.

The May 2023 statement — signed by over 600 individuals, including several AI CEOs — is even more staggering. “Mitigating the risk of extinction from AI,” they warned, “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The Center for AI Safety, which published the intentionally brief statement, cites additional AI risks including “weaponization,” “misinformation,” and “power-seeking behavior.” Together, these industry-sponsored warnings underscore the need for regulation ahead of AI’s theoretical and potentially dangerous evolution.

Legal Repercussions and Options

“Red flags” regarding the dangers of unfettered AI technologies warrant protective legal and policy measures. In September 2017, Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, recommended that regulators focus on “tangible impact[s]” of AI rather than “defin[ing]” or “rein[ing] in” an “amorphous” field. As the field and risks of AI materialize, however, law- and policy-makers are shifting their focus to how to balance national security and safety risks with the promotion of technological advancements and information access.

Emerging governance proposals to regulate AI, guided in part by industry lobbying, are mounting. Global leaders at the G7 Annual Summit on May 20, 2023, released a joint statement committing to discussions on “inclusive [AI] governance and interoperability.” That same month, the European Union (EU) Parliament approved a comprehensive slate of AI regulations awaiting review by the EU Council. Risk-reduction protections of the EU AI Act govern specific applications such as law enforcement, voting, and social media networks. China, Brazil, and other nations are pursuing their own regulatory approaches.

Meanwhile, U.S. congressional leaders, still boning up on AI technologies, suggest it may be months before comprehensive federal legislation is introduced to supplement meager existing enactments. The John S. McCain National Defense Authorization Act of 2019 struggles even to define AI. The National AI Initiative Act of 2020 largely defers to the president and federal agencies to assess AI’s potential and risks.

Correspondingly, the White House Office of Science and Technology Policy released a five-part “AI Bill of Rights” providing initial guidance in October 2022. In January 2023, the National Institute of Standards and Technology published an “AI Risk Management Framework.” In May 2023, Federal Trade Commission (FTC) Chair Lina Khan asserted jurisdiction over large language model neural networks (like ChatGPT) on grounds they may impact fair competition and facilitate deceptive practices. Multiple state legislatures have introduced an array of piecemeal AI bills with limited foci and impacts.

Collectively, these legal maneuvers to regulate AI implicate manifold constitutional and other legal concerns over jurisdiction, authorities, business interests, civil liberties, and individual rights. A wave of litigation has already begun, with multiple class action lawsuits against industry leaders on grounds their chatbots improperly mined copyrighted data online. In June 2023, a public figure in Georgia alleged ChatGPT generated defamatory information suggesting he embezzled money.

Emerging legal efforts to respond affirmatively to the public health and safety risks of AI center on international agreements based on existing governance frameworks aligned with humanitarian laws. Stakeholders call for broad multi-country coalitions governing “democracy-affirming” technologies that promote human rights and combat authoritarianism. U.S. diplomatic agreements with the EU, Israel, India, and other jurisdictions are under negotiation. The hope is that global accord over AI controls may be advanced through these and other alliances. Absent common grounding and commitment to protecting the public’s health, however, the perceived threats of AI may someday reflect reality.

__________

James G. Hodge, Jr., J.D., LL.M., is the Peter Kiewit Foundation Professor of Law and Director at the Center for Public Health Law and Policy at the Sandra Day O’Connor College of Law, Arizona State University (ASU).

Leila Barraza, J.D., MPH, is an associate professor at the Mel and Enid Zuckerman College of Public Health at the University of Arizona.

Jennifer L. Piatt, J.D., is co-director and research scholar at the Center for Public Health Law and Policy at the Sandra Day O’Connor College of Law, ASU.

Erica N. White, J.D., M.P.H. candidate, is a research scholar at the Center for Public Health Law and Policy at the Sandra Day O’Connor College of Law, ASU.

Samantha Hollinshead, J.D. candidate, is a legal researcher at the Center for Public Health Law and Policy at the Sandra Day O’Connor College of Law, ASU.

Emma Smith, J.D. candidate, is a legal researcher at the Center for Public Health Law and Policy at the Sandra Day O’Connor College of Law, ASU.