Looking Beyond the Pandemic – AI and Digital Health Kick into Warp Speed

What This Means for the FDA and the Industry

The COVID-19 pandemic has spurred the development of digital health products utilizing artificial intelligence (AI) and machine learning (ML). When the pandemic passes, there will be even more opportunities and challenges in this space. Medical device manufacturers should be prioritizing the evolving digital health regulations and products liability principles now, in preparation for the “after times.”


Digital health products are medical software or hardware devices with a substantial software component. Examples include smartphone apps that manage or diagnose diseases, chatbots and games that provide virtual mental health therapies, radiological algorithms that detect breast cancer in CT images, wearable or implantable health monitors, and high-tech digital stethoscopes.

Unlike analog devices, digital health products are typically interactive, multifunctional, connected, and dynamic. Users—both patients and healthcare workers—can input data to and receive feedback from digital devices. Digital devices can follow precise, multi-step instructions without human supervision. They can be personalized to their users and adapt to changes in their environment. They can send data over wireless networks to physicians remotely or connect with other devices. Digital devices often update or expand their functionality without hardware modifications.

For the purposes of FDA regulation, digital health does not encompass devices that lack a truly medical purpose or that are not sufficiently digital. Technologies not considered digital health products include teleconferencing software that merely enables video calls between patients and doctors, electronic databases that only store medical records, and hardware devices that use software to translate users’ inputs into hardware movements.1

Like online advertisements, credit decisions, and self-driving vehicles, digital health products are rapidly evolving thanks to advances in AI and ML.

“While the application of AI/ML to medical devices is still in early stages, it is already beginning to boost device functionality, efficacy, and autonomy.”

AI is the ability of a software or hardware device to make decisions on its own. A Roomba is an example of a device with a simple AI; it “decides” to turn, stop, or go based on sensors and preprogrammed logic. ML is the capability of a device to improve its logic independently from its manufacturer. Modern chess algorithms, for example, are not preprogrammed with moves; instead, they learn how to play from records of past games or by playing themselves. Because of breakthroughs in processor speed and AI/ML techniques, the sophistication of digital devices is taking off, eclipsing human performance in a growing number of fields.
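The distinction can be sketched in a few lines of toy Python (purely illustrative; the device logic, function names, and data below are invented for this sketch, not drawn from any actual product):

```python
# "AI" in the simplest sense: fixed, preprogrammed decision logic,
# like a robot vacuum that turns whenever its bump sensor fires.
def roomba_decide(bump_sensor_hit: bool) -> str:
    return "turn" if bump_sensor_hit else "go"

# "ML": logic the device improves on its own from data. This trivial
# learner tallies how often each chess opening won in past games and
# prefers the move with the best observed record.
def learn_opening(past_games: list[tuple[str, bool]]) -> str:
    outcomes: dict[str, list[int]] = {}
    for move, won in past_games:
        outcomes.setdefault(move, []).append(1 if won else 0)
    # Pick the opening with the highest observed win rate.
    return max(outcomes, key=lambda m: sum(outcomes[m]) / len(outcomes[m]))

games = [("e4", True), ("e4", True), ("d4", False), ("d4", True)]
print(roomba_decide(True))   # fixed rule: always "turn" on a bump
print(learn_opening(games))  # learned preference: "e4"
```

The first function’s behavior is fixed by its manufacturer; the second’s depends on whatever data it has seen, which is what makes ML-based devices both powerful and harder to evaluate in advance.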

While the application of AI/ML to medical devices is still in early stages, it is already beginning to boost device functionality, efficacy, and autonomy. For example, in February 2020 the FDA authorized the first software that uses AI/ML to enable non-experts to take ultrasound images of diagnostic quality.2


The FDA started investing in digital health in earnest in 2017, when it published the Digital Health Innovation Action Plan, which outlines the FDA’s intention to facilitate the safe development of digital health products.3 Since then, the agency has been clarifying the categories of digital products it will regulate.4 It launched the Software Pre-Cert Program in 2019 as a pilot program to speed up the authorization of medical software by pre-certifying trustworthy developers and omitting a full premarket review of their digital products.5 Tech and pharma companies, such as Apple, Fitbit, Johnson & Johnson, and Roche, were enlisted in this pilot program. Later that year, the FDA published for discussion a framework for regulating AI/ML-based medical devices.6

“Digital health products were crucial to the pandemic response.”

In 2020, the COVID-19 pandemic presented an extraordinary challenge to the healthcare system and heightened demand for digital health products. Early on, healthcare workers had to take turns staying home to reduce the risk of outbreaks or quarantine because of exposure. Surgeries had to be prioritized, postponed, or cancelled to prepare for potential surges in COVID-19 caseloads. Meanwhile, demand for certain kinds of healthcare spiked. The pandemic exacerbated mental illnesses and preexisting health inequities. Patients with urgent medical needs aggravated their conditions by skipping necessary treatment out of fear of infection.

Digital health products were crucial to the pandemic response. Telemedicine, smartphone apps, and wearables allowed patients and physicians to remotely retrieve and send electronic health data, and apps could push basic medical information automatically without human supervision. Digital health products helped stretch constrained and overburdened healthcare resources.7

To promote the uptake of digital health products, in March 2020 the FDA announced temporary policies to suspend enforcement of certain FDCA requirements for lower-risk products with basic or limited functionality,8 and to allow manufacturers to make minor modifications to the indications, claims, and functionality of some non-invasive remote monitoring devices.9 A month later, the FDA advised it would temporarily loosen regulation of digital therapy tools for psychiatric disorders like depression, substance abuse, anxiety, autism, insomnia, ADHD, OCD, and PTSD.10 These included Class II, prescription-only devices.

While these policies are temporary, the FDA recognizes that the need for digital health products is here to stay. It has expressed hope that the increased adoption of digital health solutions facilitated by these temporary measures will stick, and that manufacturers will use this opportunity to collect valuable real-world data that will remain useful after the public health emergency has lifted.

“The pandemic was an important catalyst for change. But this is only the beginning of a sweeping transition of the U.S. healthcare system into the digital era.”

To further its long-term commitment to digital health, the FDA also launched the Digital Health Center of Excellence in 2020. This established a group of experts within the agency dedicated to digital health, providing a centralized point of contact for digital health manufacturers and developing the FDA’s permanent digital health policies.11

The pandemic was an important catalyst for change. But this is only the beginning of a sweeping transition of the U.S. healthcare system into the digital era.12 Digital innovations will continue to provide improvements in the delivery of healthcare and needed relief from growing demands on the healthcare system due to climate change,13 expanding medical deserts,14 an aging population, and preparations for the next pandemic.


A smartphone app that reminds a patient to take her medications, keeps track of her compliance, and updates her medical records has a low risk of injury to a patient. It can’t physically harm her or cause inflammation or infection. While there may be remote instances where a digital health product could cause harm, so far most digital health products are low-risk.15

As software is utilized in increasingly powerful tools, the potential for harm will grow. A mistake by a laser scalpel guidance system could be life-threatening. Non-hardware products, such as diagnostic software, could be just as dangerous. A mental health chatbot could cause stress or evoke a painful memory, triggering PTSD. Apps that supply medical information could cause serious harm if they spread misinformation.

Promoting safety and efficacy in digital devices presents unique challenges for regulators. First, the development cycle for digital devices tends to be shorter than for analog devices. This strains the review process to keep pace. Second, the risk of digital products, especially AI/ML ones, is hard to evaluate pre-market because it can change after launch. A cloud-based AI/ML algorithm, for example, could learn to diagnose images or prescribe treatment in ways that not even its manufacturer originally contemplated.
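The second difficulty can be illustrated with a toy adaptive classifier (hypothetical throughout; the scoring scheme, thresholds, and data are invented for illustration only):

```python
# Toy illustration of why an adaptive algorithm's risk profile can change
# after launch: the decision threshold the device applies is re-derived
# from the data it sees in the field, so the same input can be classified
# differently over time.

def fit_threshold(labeled_scans: list[tuple[float, bool]]) -> float:
    # Learn a cutoff: the midpoint between the average "benign" and the
    # average "malignant" intensity score in the data seen so far.
    benign = [score for score, malignant in labeled_scans if not malignant]
    malignant = [score for score, m in labeled_scans if m]
    return (sum(benign) / len(benign) + sum(malignant) / len(malignant)) / 2

def classify(score: float, threshold: float) -> str:
    return "malignant" if score >= threshold else "benign"

premarket_data = [(0.2, False), (0.3, False), (0.8, True), (0.9, True)]
t0 = fit_threshold(premarket_data)   # threshold at clearance: 0.55

# Field data drifts (different scanners, patient mix); the device retrains.
field_data = premarket_data + [(0.5, True), (0.45, True)]
t1 = fit_threshold(field_data)       # threshold drops after retraining

print(classify(0.5, t0))  # at launch: "benign"
print(classify(0.5, t1))  # after field retraining: "malignant"
```

The same 0.5 score is classified differently before and after field retraining, even though no one shipped a new version of the software, which is precisely the premarket-evaluation problem described above.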

One way the Software Pre-Cert Program addresses these issues is by scrutinizing a manufacturer’s “culture of excellence” with respect to values like product quality, safety, transparency, and social responsibility.16 This appraisal is more involved than a traditional audit of company processes. Products from pre-certified organizations get a streamlined premarket review process. The review pathway for other products depends on the product’s risk category, as determined by factors like intended use.

“With AI/ML, the FDA must be especially careful not to stifle innovation or diversity with overly restrictive standards.”

According to the FDA’s “AI/ML-Based Software as a Medical Device Action Plan,” the agency intends to publish its first draft guidance on modifications to AI/ML algorithms (so-called “Predetermined Change Control Plans”) in 2021.17 The Pre-Cert Pilot Program is entering its third year and is still building out each aspect of the process. The next step, hopefully soon, is to beta-test the program with more participants.

With AI/ML, the FDA must be especially careful not to stifle innovation or diversity with overly restrictive standards. AI/ML’s capacity for unconventional thinking is what enabled computers to, for example, discover new ways to play Go, the oldest continuously played board game, and to defeat human grandmasters. For digital health, the opportunity cost is not ancient board game strategies but lifesaving medicine.


Matching AI/ML development to traditional regulatory processes and legal doctrines raises many questions. Uncertainty on the regulatory side has prompted many comments on the FDA’s proposed regulatory framework for AI/ML-based medical devices. The comments run the gamut from broad policy considerations to specific regulatory provisions, technical to ethical, prescriptive to reflective, and practical to practically impossible. Some addressed overall oversight responsibilities: how would the different FDA Centers and their respective guidances and requirements interact, given that AI/ML systems may be associated with drugs, biologics, medical devices, or combinations of these?18 Others noted the difficulty of developing practices that span the broad spectrum of AI/ML, from locked algorithms to continuously adaptive systems.19

Not surprisingly, the novel proposal has drawn concerns that the framework rests on concepts that are not yet defined,20 and that it needs clearer definitions, guidance, and even templates before the proposed approaches can be understood and implemented.21,22 To meet these concerns, commentators offered specific examples of existing standards and emphasized collaboration among the FDA, the industry, academia, and other regulatory bodies in continuing to define and develop the framework.23

Uncertainty also exists because traditional legal theories are just starting to be tested on truly sophisticated AI/ML products. Examples of the conceptual challenges of projecting analog principles on digital products include:

Design defect: Existing legal tests—risk-utility balancing, reasonable alternative design, and consumer expectations—are especially difficult to apply to AI/ML products. A full analysis of the utility of a robust AI/ML algorithm is not always realistically possible. A less robust algorithm may seem like a reasonable alternative design because it eliminates a known risk, but at what cost? Jurors have little experience with AI/ML devices and likely have unreasonable expectations with respect to their safety and efficacy.

Proximate cause: Any theory of product liability requires showing a close causal link between the alleged defect and the injury. A manufacturer’s liability does not extend to highly attenuated and not reasonably foreseeable consequences of its actions. For AI/ML products, what should a manufacturer fairly be expected to foresee when unplanned change is a key function? Is design defect determined by the initial algorithm or by the specificity of a Predetermined Change Control Plan?

Comparative fault: To what extent is the user responsible for (or capable of) appreciating the risks of AI/ML devices? Should a physician or patient be allowed to wholly depend on a digital device’s recommendations? Some AI/ML is designed to require user input, other technologies are wholly autonomous, and others are a blend. Apportioning fault when something goes wrong may itself require AI.

Failure to Warn: How can a risk associated with use of AI/ML be described when algorithms are automatically and continuously updated, leading to different assessments and potentially different outcomes? How do warnings provide adequate transparency into the processes employed by the algorithm when the intended users are healthcare providers and not computer engineers?


Is it too soon to worry? The FDA’s digital health policies are only in the early stages of development, and most current digital products are still low-risk and unlikely to be subjected to burdensome or surprising oversight. Some commentators are hedging their predictions about digital health and doubt how much the FDA will accomplish in the short term.

Nonetheless, digital health law should be a top priority now. The FDA takes seriously the potential need for digital health in the long run and is following through on its plans to develop the regulatory infrastructure. Manufacturers should pay close attention and engage in the collaborative process intentionally. Follow developments with the FDA’s draft guidance on reviewing Predetermined Change Control Plans and developments in the Pre-Cert Pilot Program. Submit comments.

With respect to liability, look for ways to promote clarity regarding AI/ML liability from federal and state legislatures. Reliance on traditional case law and inexperienced jurors will lead, at best, to highly unpredictable results; at worst, it threatens the future of digital health in the United States.

AI/ML health products have enormous potential to transform healthcare and save lives. In the future, smart apps, personal health monitors, and digital implantables may help individuals identify symptoms, manage their care, and dispense treatment in the comfort of their homes. For complex medical needs, robots and enterprise-scale algorithms may help healthcare workers schedule and triage patients, take vital signs and draw blood, diagnose images, and even anesthetize and operate on patients. For certain tasks, digital devices may completely replace humans. The COVID-19 pandemic has pushed demand for digital health and the need for effective policies. To be able to deliver on future demand, companies should invest now to meet evolving regulatory and litigation standards.

1. 21st Century Cures Act, Pub. L. No. 114-255, § 3060, 21 U.S.C. § 360j(o) (2018).

2. FDA Authorizes Marketing of First Cardiac Ultrasound Software that Uses Artificial Intelligence to Guide User, FDA (Feb. 7, 2020), https://www.fda.gov/news-events/press-announcements/fda-authorizes-marketing-first-cardiac-ultrasound-software-uses-artificial-intelligence-guide-user.

3. Digital Health Innovation Action Plan, FDA (2017), https://www.fda.gov/media/106331/download.

4. Statement on New Steps to Advance Digital Health Policies That Encourage Innovation and Enable Efficient and Modern Regulatory Oversight, FDA (Sept. 26, 2019), https://www.fda.gov/news-events/press-announcements/statement-new-steps-advance-digital-health-policies-encourage-innovation-and-enable-efficient-and.

5. FDA Selects Participants for New Digital Health Software Precertification Pilot Program, FDA (Sept. 26, 2017), https://www.fda.gov/news-events/press-announcements/fda-selects-participants-new-digital-health-software-precertification-pilot-program; Statement from FDA Commissioner Scott Gottlieb, M.D., on the Agency’s New Actions Under the Pre-Cert Pilot Program to Promote a More Efficient Framework for the Review of Safe and Effective Digital Health Innovations, FDA (Jan. 7, 2019), https://www.fda.gov/news-events/press-announcements/statement-fda-commissioner-scott-gottlieb-md-agencys-new-actions-under-pre-cert-pilot-program.

6. FDA, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) – Discussion Paper and Request for Feedback, REGULATIONS.GOV (Apr. 1, 2019), https://www.regulations.gov/document?D=FDA-2019-N-1185-0001.

7. Enforcement Policy for Digital Health Devices for Treating Psychiatric Disorders During the Coronavirus Disease 2019 (COVID-19) Public Health Emergency, Guidance for Industry and Food and Drug Administration Staff, FDA (Apr. 14, 2020), https://www.fda.gov/regulatory-information/search-fda-guidance-documents/enforcement-policy-digital-health-devices-treating-psychiatric-disorders-during-coronavirus-disease.

8. Digital Health Policies and Public Health Solutions for COVID-19, FDA (Mar. 26, 2020), https://www.fda.gov/medical-devices/coronavirus-covid-19-and-medical-devices/digital-health-policies-and-public-health-solutions-covid-19.

9. Enforcement Policy for Non-Invasive Remote Monitoring Devices Used to Support Patient Monitoring During the Coronavirus Disease 2019 (COVID-19) Public Health Emergency (Revised), Guidance for Industry and Food and Drug Administration Staff, FDA (Oct. 28, 2020), https://www.fda.gov/regulatory-information/search-fda-guidance-documents/enforcement-policy-non-invasive-remote-monitoring-devices-used-support-patient-monitoring-during.

10. Enforcement Policy for Digital Health Devices for Treating Psychiatric Disorders During the Coronavirus Disease 2019 (COVID-19) Public Health Emergency, Guidance for Industry and Food and Drug Administration Staff, supra note 7.

11. FDA Launches the Digital Health Center of Excellence, FDA (Sept. 22, 2020), https://www.fda.gov/news-events/press-announcements/fda-launches-digital-health-center-excellence.

12. Kushal Kadakia, Bakul Patel, and Anand Shah, Advancing Digital Health: FDA Innovation During COVID-19, 3 NPJ DIGITAL MED. 161 (2020).

13. Climate Effects on Health, CDC, https://www.cdc.gov/climateandhealth/effects/default.htm (last visited Mar. 9, 2021).

14. Eli Saslow, ‘Out Here, It’s Just Me’: In the Medical Desert of Rural America, One Doctor for 11,000 Square Miles, WASH. POST (Sept. 28, 2019), https://www.washingtonpost.com/national/out-here-its-just-me/2019/09/28/fa1df9b6-deef-11e9-be96-6adb81821e90_story.html.

15. Cybersecurity and privacy risks are additional concerns not discussed in this article.

16. Developing a Software Precertification Program: A Working Model v1.0, FDA (Jan. 2019), https://www.fda.gov/media/119722/download.

17. Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan, FDA (Jan. 12, 2021), https://www.fda.gov/media/145022/download.

18. For example, “A key consideration for sponsors is how FDA is managing the regulatory review of these software products. We believe that it is critical for FDA Centers (CBER, CDER and CDRH) involved in the review of software to adopt coordinated and consistent processes and policies. Importantly, the proposed framework does not address how the review of AI/ML-based SaMD (which can be associated with drugs, biologics, medical devices, or combinations of these) will be regulated.” Comment from Novartis Pharmaceuticals Corporation, REGULATIONS.GOV (June 2, 2019), https://www.regulations.gov/comment/FDA-2019-N-1185-0094.

19. “Sanofi recognizes the challenges in enumerating an exhaustive list of GMLP best-practices given the diverse applications for AI/ML-based SaMD. Different contexts may necessitate different GMLP requirements.” Comment from Sanofi, REGULATIONS.GOV (Dec. 13, 2018), https://www.regulations.gov/comment/FDA-2019-N-1185-0047.

20. “Our foremost concern is that the AI/ML framework is predicated on developers/manufactures [sic] adherence to Good Machine Learning Practices (GMLP), and at this time no such standards exist and we believe there remains a significant amount of community work required to define GMLP.” Comment from Microsoft, REGULATIONS.GOV (June 3, 2019), https://www.regulations.gov/comment/FDA-2019-N-1185-0057.

21. “We would welcome clear guidance or templates around critical aspects of the ‘predetermined change control plan’, such as SPS, ACP or mechanisms to support the publication of real-world performance data.” Comment from Digital Surgery Ltd., REGULATIONS.GOV (May 29, 2019), https://www.regulations.gov/comment/FDA-2019-N-1185-0041.

22. “We recommend the proposed framework set clear definitions and expectations for [GMLP], including those pertaining to data management, in order to avoid the uncertainty of shifting Algorithm Change Protocol expectations . . . . These definitions and expectations should be developed with industry input and should build on the existing policies articulated in FDA guidance regarding appropriate data practices.” Comment from Bay Labs, REGULATIONS.GOV (May 31, 2019), https://www.regulations.gov/comment/FDA-2019-N-1185-0049.

23. “We appreciate that terminology and principles from the International Medical Device Regulators Forum (IMDRF) (e.g., risk categorizations framework, clinical evaluation) have been included in the proposed framework. Principles and/or alignment to requirements from the American National Standards Institute (ANSI) Association for the Advancement of Medical Information (AAMI) International Electrotechnical Commission (IEC) 62304:2006/A1:2016 could also be considered for the framework.” Comment from Novartis, supra note 18.