The FDA’s 2026 Clinical Decision Support (CDS) Guidance Update - What’s Changed?
It’s been 4 years since the FDA last updated its CDS guidance. The new 2026 version brings clarification, more examples, and some pragmatic relaxation in how the CDS criteria are interpreted. But are the changes really that substantial?
What is the CDS guidance for?
FDA’s definition of a medical device is broad and covers any software that is “intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man or other animals”. Within this definition, FDA has carved out certain software functions which are either a) very low-risk and used for general wellbeing (described in the General Wellness Guidance), or b) intended to provide supportive information to help clinicians make decisions (described in the Clinical Decision Support Guidance - the topic of this insight piece). To be classed as non-device CDS, a product has to meet all 4 of the following strict criteria:
1. Not intended to acquire, process, or analyse a medical image or a signal from an in vitro diagnostic device or a pattern or signal from a signal acquisition system.
2. Intended for the purpose of displaying, analysing, or printing medical information about a patient or other medical information (such as peer-reviewed clinical studies and clinical practice guidelines).
3. Intended for the purpose of supporting or providing recommendations to a health care professional about prevention, diagnosis, or treatment of a disease or condition.
4. Intended for the purpose of enabling such health care professionals to independently review the basis for such recommendations that such software presents so that it is not the intent that such health care professionals rely primarily on any of such recommendations to make a clinical diagnosis or treatment decision regarding an individual patient.
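Because all 4 criteria must hold for a product to escape active regulation as a device, some teams capture them as a simple checklist in their regulatory assessment tooling. The sketch below is purely illustrative - the field names paraphrase the criteria and are not terms from the guidance - but it shows the basic logic: failing any single criterion leaves the software function a medical device.

```python
from dataclasses import dataclass

@dataclass
class CdsCriteriaAssessment:
    # Hypothetical checklist fields paraphrasing the 4 criteria;
    # not terminology from the FDA guidance itself.
    acquires_image_ivd_or_signal_input: bool         # Criterion 1 fails if True
    displays_or_analyses_medical_information: bool   # Criterion 2
    supports_hcp_recommendations: bool                # Criterion 3
    hcp_can_independently_review_basis: bool          # Criterion 4

def is_non_device_cds(a: CdsCriteriaAssessment) -> bool:
    """All 4 criteria must be met; failing any one keeps the function a medical device."""
    return (
        not a.acquires_image_ivd_or_signal_input
        and a.displays_or_analyses_medical_information
        and a.supports_hcp_recommendations
        and a.hcp_can_independently_review_basis
    )

# Example: a tool that analyses a signal from a signal acquisition system
# fails criterion 1, so it is a device regardless of the other answers.
print(is_non_device_cds(CdsCriteriaAssessment(True, True, True, True)))  # False
```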
Of course, these criteria alone are not enough to help manufacturers understand where the boundaries are - and the CDS guidance is intended to help illustrate how the 4 criteria apply through explanations and examples.
Comparing the 2022 and 2026 versions reveals 3 main changes in how FDA interprets the criteria. In short, the Agency has:
Relaxed the interpretation of Criterion 3 to more explicitly permit single-recommendation outputs
Added more examples, some of which might affect the increasing use of large language models (LLMs) for summarisation
Made subtle changes to time-critical software functions
Criterion 3 is softened, allowing single-recommendation outputs
As interpreted in the 2022 guidance, Criterion 3 effectively required that multiple recommendations be put to healthcare professionals, allowing them to decide which course of action to take.
The biggest change in the 2026 guidance is that FDA has decided to exercise enforcement discretion for single-recommendation outputs under criterion 3, provided that all other criteria are met. Many are interpreting this to mean that any kind of singular output is now permissible, but this is not the case. The Agency specifically placed a condition that enforcement discretion would apply “if only one option was clinically appropriate.”
In other words, if a system provides only one recommendation when other appropriate recommendations exist, it would still fail Criterion 3 and be a medical device. The better interpretation of this change is more subtle: FDA still expects the output to list the clinically appropriate recommendations, but accepts that some lists may be only one item long if no other appropriate options exist. Of course, there is still a grey area - how do manufacturers justify that there is only one clinically appropriate option?
The final point to note is that enforcement discretion doesn’t mean these tools stop being medical devices - FDA is simply choosing not to actively regulate them for now, and still reserves the right to do so based on context, risk and intended purpose.
New examples which may affect LLM use-cases
FDA’s public comment on the release of the new CDS guidance specifically mentioned its pragmatism in the context of AI - but the guidance itself does not. This is important: the CDS guidance does not assess how tools work, only what they are used for. Nevertheless, some of the new examples in the guidance could impact LLM-driven use-cases.
The most pertinent example is provided in the context of the single-recommendation change to criterion 3, and makes summarisation/impression generation of radiology reports permissible: “A software function that analyzes a radiologist’s clinical findings of an image to generate a proposed summary of the clinical findings for a patient’s radiology or pathology report, including a specific diagnostic recommendation based on clinical guidelines that should be reviewed, revised, and finalized by an HCP” would now fall under enforcement discretion rather than be actively regulated as a medical device.
FDA takes the summarisation use-case further by clarifying that the generation of “Patient data reports and summaries (e.g., discharge papers)” would be a non-device CDS function.
What remains unclear is whether broad-use products such as OpenEvidence are medical devices, non-device CDS, or not devices at all. This is because the inputs and outputs of such products are largely unconstrained: nothing stops a user from entering a specific patient-related diagnostic question and receiving a single, specific diagnosis from the LLM - an output that may not meet criterion 3 and would therefore make the product a medical device.
Subtle changes to time-criticality
The FDA states that criterion 3 is met if a software product:
Provides condition-, disease-, and/or patient-specific information and options to an HCP to enhance, inform and/or influence a health care decision;
Does not provide a specific preventive, diagnostic, or treatment output or directive;
Is not intended to replace or direct the HCP’s judgment.
The 2026 version actually removes a fourth point that was present in the 2022 version: “Is not intended to support time-critical decision-making”. Many have taken this to mean that time-critical situations are no longer excluded from being non-device CDS, but this is wrong for two reasons.
First, several examples in the new guidance still explicitly state that use of CDS in time-critical situations would count as a medical device, for example, a “software function [that] predicts risk of a cardiovascular event in the next 24 hours… is a device software function that would remain the focus of FDA’s oversight.”
Second, FDA has simply shifted the time-critical exclusion to sit under criterion 4. The guidance now explains that criterion 4 (allowing clinicians to independently review the basis of any CDS recommendations) is unlikely to be met in time-critical scenarios.
Overall, FDA’s stance on time-criticality remains the same; only where it sits in the document has changed, moving from the interpretation of criterion 3 to that of criterion 4.
Open questions remain
The updated guidance does provide some relaxation of criterion 3 and more helpful examples, but there is still ambiguity about how the criteria apply to more general-purpose AI use-cases. Additionally, some were hoping for explicit guidance on patient-facing CDS (which was present in the 2019 draft of the guidance), but the guidance remains strictly applicable to CDS systems used by healthcare professionals. It remains to be seen whether this new CDS guidance sets the tone for anticipated updates to FDA’s Policy for Device Software Functions and Mobile Medical Applications, which would have more impact on patient-facing products.
If your company is navigating complex scientific or regulatory pathways for a digital health product, Hardian Health can help. Get in touch to discover how our expertise can support your journey from concept to clearance.