Impact of the New FDA Guidance (December 2025)
What FDA’s AI-Enabled Device Guidance Means for Human Factors Engineering
by Katie Curtis
Artificial intelligence (AI) is transforming medical devices – powering everything from diagnostic support tools to automated monitoring systems. But as AI-enabled functions become more common, they also introduce new risks compared to traditional devices. To provide guidance on the lifecycle management of AI-enabled devices and delineate the information that should be included in marketing submissions, FDA released the draft guidance, “Artificial Intelligence–Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations” (January 2025).
While not a human factors (HF) guidance, it has significant implications for how manufacturers must evaluate usability, information interpretation, and safe decision-making for AI-enabled products.
Below is our perspective on what this guidance means for HF engineering and how development teams can prepare.
AI Introduces New Categories of Use-Related Risk
AI-enabled devices require users to complete complex, knowledge-based tasks: users must interpret AI outputs, understand their meaning, and decide how to apply them in the context of clinical decision-making.
FDA calls special attention to risks related to:
Understanding AI-generated outputs
Correctly interpreting AI recommendations
Knowing when to trust – and when to question – an algorithm
Recognizing unexpected or unsuitable results
Avoiding overreliance on the algorithm (automation bias, or “blind trust”)
Accessing information needed to use the system safely
While some AI-enabled devices are diagnostic, others are intended as adjunctive tools that support clinical decision-making. Because clinicians make diagnosis and treatment decisions based on their interpretation of AI outputs, misinterpretation or misuse of that information is a core safety issue for AI-enabled devices. As a result, HF engineering is expected to help identify, manage, and evaluate risks involving information interpretation, cognitive workload, and clinical decision-making.
The User Interface as a Transparency Enabler
User interface (UI) design has always been central to device safety. For AI-enabled devices, however, the UI becomes an even more critical risk control because it shapes how users understand:
The device’s purpose
The meaning of AI outputs and recommendations
Performance characteristics and uncertainty
Limitations and intended use
Operational steps required for safe and effective use
FDA emphasizes that transparent communication about AI is essential – especially when algorithms exhibit a degree of opacity that users cannot independently verify. UI design must give users the right information at the right time in a format suited to their needs, training, and the clinical environment.
Validation Requirements
FDA emphasizes a two-pronged approach to validating AI-enabled devices: performance validation ensures the model works as intended, while HF validation ensures users can operate the device and correctly interpret AI outputs. Both types of data are necessary because even highly accurate models can cause serious harm if users misunderstand outputs or make incorrect decisions based on them.
For traditional devices, HF validation is required only if a device has one or more critical tasks. A critical task is a user task which, if performed incorrectly or not performed at all, would or could cause serious harm to the patient or user, where harm is defined to include compromised medical care.[1] Notably, FDA indicates that usability evaluations are required for all AI-enabled devices – regardless of whether the device has traditional critical tasks – because misinterpretation of outputs may pose safety risks. Such usability evaluations should focus on whether users can find, understand, interpret, and correctly apply AI-related information to support clinical decision-making.
Conclusion
FDA’s draft AI guidance underscores a central truth: the safety and effectiveness of AI-enabled devices depend as much on human understanding as on algorithmic performance. As AI introduces new cognitive and interpretive risks, HF engineering becomes both a regulatory requirement and a strategic necessity.
Manufacturers who invest early in HF are better positioned to design interfaces that promote transparency, support accurate user interpretation of AI outputs, and generate the usability evidence FDA expects for AI-enabled devices.
At Design Science, we partner with development teams to apply HF engineering to the unique challenges of AI – helping manufacturers design intuitive interfaces, evaluate real-world decision-making, and build safer, more effective AI-enabled products.
If your team is developing an AI-enabled medical device, now is the time to build a strong HF strategy. We can help you navigate the FDA AI guidance, plan appropriate usability studies, and design with transparency, trust, and safety in mind.
[1] FDA Final Guidance, Applying Human Factors and Usability Engineering to Medical Devices (2016)