
Software as a Medical Device (SaMD) Explained: What It Is, FDA Requirements, and How to Get Cleared

  • Writer: Beng Ee Lim
  • Apr 30
  • 19 min read

Software as a Medical Device (SaMD) is software intended for one or more medical purposes that performs those purposes without being part of a hardware medical device (IMDRF definition used by FDA). SaMD typically runs on general-purpose platforms like phones, tablets, PCs, or cloud servers. If your software’s intended use is to diagnose, treat, mitigate, cure, prevent disease, or drive clinical decision-making, it may be regulated as a medical device unless it fits an exclusion or falls under FDA’s low-risk general wellness policy.


In practice, SaMD classification and pathway depend on risk + claims. Many SaMD products are cleared through 510(k) when a suitable predicate exists; novel low-to-moderate risk software may fit De Novo, and higher-risk software may require PMA.



What Is Software as a Medical Device (SaMD)?


Software as a Medical Device (SaMD) is medical software where the software itself is the device—it performs a medical purpose without being part of a hardware medical device.



FDA and IMDRF Definition


FDA adopts the IMDRF definition: “Software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device.”


In other words, SaMD is standalone medical-purpose software, typically running on general-purpose computing platforms.


Key characteristics:

  • Has a medical intended use (e.g., diagnosis, monitoring, treatment support)  

  • Runs on general-purpose platforms (phone, tablet, PC, cloud/server)  

  • Is not necessary for a hardware medical device to achieve its intended medical purpose (i.e., it’s not embedded “device software”)  

  • Performs the medical function through software logic/algorithms (with appropriate validation and change control)



How SaMD Differs from Traditional Medical Devices


Traditional medical device:

  • Physical hardware (infusion pump, pacemaker, surgical instrument)

  • Software may be a component, but the regulated product is primarily the hardware system

  • Updates can be slower to deploy because they often require validated service workflows and device fleet coordination


SaMD:

  • Software is the complete medical device

  • Runs on phones, tablets, computers, or cloud servers  

  • Updates can be distributed digitally, but still require controlled validation and (when applicable) regulatory change assessment


Example comparison:

  • Traditional device: Infusion pump (hardware) with embedded control software

  • SaMD-style example: A smartphone app that analyzes ECG captured using an external FDA-cleared ECG accessory (hardware electrodes) paired to the phone, providing rhythm analysis for review.





Real-World SaMD Examples Across Medical Specialties



Radiology (largest category for FDA-authorized AI/ML device software) 


Computer-Aided Detection (CAD) for mammography:

  • Highlights suspicious regions on mammograms for radiologist review

  • Intended to support earlier detection of breast cancer


AI-powered CT analysis:

  • Detects and prioritizes findings such as intracranial hemorrhage on head CT

  • Flags urgent cases so radiologists review them sooner



Cardiology (major SaMD category: ECG + ambulatory monitoring) 


ECG analysis algorithms:

  • Analyze single-lead or multi-lead ECG recordings

  • Detect arrhythmias such as atrial fibrillation


Echocardiography analysis:

  • Automated cardiac function measurement

  • Calculates ejection fraction from ultrasound

  • Quantifies chamber dimensions



Ophthalmology


Diabetic retinopathy screening (autonomous AI):

  • Analyzes retinal images

  • Provides autonomous screening result (no ophthalmologist required to generate the initial output)

  • Example: IDx-DR (DEN180001)  


Macular degeneration detection:

  • Identifies wet AMD from OCT scans

  • Flags urgent referrals



Neurology


Stroke detection / triage algorithms:

  • Analyze CT/CTA images for signs of suspected stroke (e.g., large vessel occlusion)

  • Alert the care team so high-risk cases are reviewed first


Epilepsy monitoring:

  • Analyzes EEG data

  • Detects seizure patterns

  • Patient monitoring via wearable sensors



Mental Health and Digital Therapeutics


Prescription digital therapeutics:

  • Software delivering therapeutic interventions for medical conditions

  • Example: reSET (DEN160018) for substance use disorder  


Cognitive behavioral therapy apps:

  • Treats insomnia (example: Somryst)

  • Manages chronic conditions through behavioral interventions



Primary Care


Clinical decision support:

  • Sepsis prediction algorithms

  • Risk scoring for deterioration

  • Triage recommendations





SaMD vs SiMD: Understanding the Critical Distinction


Manufacturers often confuse Software as a Medical Device (SaMD) with software that’s part of a medical device system. That distinction affects classification, documentation, cybersecurity expectations, and whether software is reviewed as a standalone device or within a hardware submission.


Note: “SiMD” is common industry shorthand, while FDA often frames this area as device software functions (software functions that meet the device definition).


SaMD (Software AS a Medical Device)


Characteristics:

  • Software IS the medical device (standalone device software)  

  • Operates independently on general-purpose platforms (iOS, Android, web browsers, cloud servers)

  • Not embedded in or required to operate a specific hardware medical device  

  • Medical function performed primarily through software logic/algorithms


Examples:

  • Mobile app diagnosing skin lesions from phone camera photos

  • Cloud-based algorithm interpreting radiology images uploaded from PACS

  • Desktop software calculating radiation therapy dose plans

  • Web application providing treatment recommendations based on patient data


Regulatory implications (practical):

  • Independent device classification (Class I, II, or III) based on intended use and risk

  • Often submitted as its own premarket submission when the software itself is the device; sometimes reviewed within a system depending on configuration and claims  

  • Software changes may still require a new 510(k) depending on impact to safety/effectiveness  


Development focus:

  • Platform compatibility (OS versions, browsers, hardware variability)

  • UX and human factors (use error is a real risk driver)

  • Cloud infrastructure + cybersecurity

  • Interoperability with healthcare IT systems  



SiMD (Software IN a Medical Device)


Characteristics:

  • Software embedded in or controlling hardware device functionality

  • Cannot function as intended without the specific hardware

  • Software + hardware form an integrated medical device system  


Examples:

  • Infusion pump control software

  • Pacemaker sensing and pacing algorithms

  • MRI scanner image reconstruction software

  • Surgical robot control systems


Regulatory implications:

  • Typically reviewed as part of the hardware device submission/package (device system context)  

  • Product code usually reflects the hardware device type and intended use

  • Software/firmware changes may require a new 510(k) depending on risk impact  


Development focus:

  • Hardware–software integration

  • Real-time performance requirements

  • EMC/environmental constraints

  • Update mechanisms for deployed hardware (field updates, service workflows)  



Why the Distinction Matters


For manufacturers:

  • Predicate selection: SaMD predicates are often other device software; SiMD predicates are usually device systems with similar hardware + software functions

  • Testing: SaMD leans harder on platform compatibility + algorithm validation; SiMD adds hardware-integration verification

  • Update strategy: SaMD can deploy faster operationally, but regulatory change control still applies and some changes require a new submission  


For regulators:

  • Risk: SaMD failures often affect information/decision support; SiMD can directly drive physical actuation and harm pathways

  • Cybersecurity: both need it, but attack surfaces differ (cloud/network vs device interfaces)  



Example Highlighting the Distinction


Continuous glucose monitoring (CGM) ecosystem:

  • Device system component (often SiMD-style): sensor + transmitter + embedded firmware measuring glucose (hardware-dependent)

  • Software component (can be SaMD-like depending on claims): mobile app that displays trends, generates alerts, or provides decision support


Regulatory reality: these may be reviewed together as a system or separately, depending on how the software is intended to be used and what claims it makes (e.g., dosing recommendations vs display/monitoring).  


⚠️ If you’re unsure whether your software function behaves more like SaMD or “software within a device system,” the fastest way to de-risk is to map: intended use → likely product code → closest predicates → required software documentation expectations.

If the SaMD vs. device-software call is fuzzy, Complizen can surface likely predicates plus the exact FDA software evidence you’ll be held to in minutes.





Is Your Software a Medical Device? The 3-Question Test


Not all healthcare software is regulated as a medical device. Use this test to determine whether your software is likely a regulated device software function and whether it may qualify as SaMD.



Question 1: Is Your Software Intended for a Medical Purpose?


FDA’s core trigger is intended use: is the software intended for diagnosis, cure, mitigation, treatment, or prevention of disease (or otherwise meets the device definition)?  

If NO → Software is likely NOT a medical device


Examples of software that are typically not devices (when they make no medical claims):  

  • General medical reference apps (textbooks, drug databases)

  • Medical education platforms

  • Healthcare administrative tools (billing, scheduling)

  • Telehealth video communication platforms


If YES → Continue to Question 2



Question 2: Does Your Software Interpret Data or Provide Clinical Recommendations?


Key distinction: is the software interpreting data and generating a clinical conclusion/recommendation, or is it mainly storing/displaying/transmitting information?  


Just displaying/storing (generally NOT a medical device):  

  • EHR systems displaying patient data without interpretation

  • Medical image storage/transmission (PACS) without analysis

  • Patient portals showing lab results as reported by the lab


Providing interpretation/recommendations (often IS a medical device):  

  • Algorithm analyzing ECG and detecting arrhythmia

  • Software detecting pneumonia in chest X-rays

  • App calculating insulin dose recommendations based on glucose + meal data

  • System flagging high-risk patients for clinical intervention


If it only stores/displays → Likely NOT a medical device


If it interprets/analyzes/recommends → Continue to Question 3



Question 3: Is It Standalone Software (SaMD) or Software in a Device System?


If your software has a medical intended use:

  • Standalone on general-purpose platforms → it may be SaMD  

  • Embedded in / controlling medical hardware → it’s typically a device software function within a device system (often called “SiMD” informally)  


Independent (SaMD):

  • Mobile app on standard iPhone/Android phone

  • Cloud-based algorithm accessible via web browser

  • Desktop software on Windows/Mac computers

  • Server-side algorithm processing uploaded data


Embedded (not SaMD):

  • Software in pacemaker

  • Infusion pump control algorithms

  • MRI scanner firmware

  • Surgical robot control system



Test Examples


Example 1: Fitness tracker counting steps

Q1: Medical purpose claimed? → NO (assuming wellness-only claims)

Result: NOT a medical device


Example 2: App analyzing mole photos to detect melanoma

Q1: Medical purpose? → YES

Q2: Diagnostic interpretation? → YES

Q3: Standalone on phone? → YES

Result: Likely SaMD (FDA review typically needed)


Example 3: Clinical decision support showing drug–drug interaction warnings

Q1: Medical purpose? → YES

Q2: Supports clinical decisions? → YES

Q3: Standalone? → YES

Result: May be non-device CDS only if it meets the statutory criteria (including enabling the HCP to independently review the basis and not rely primarily on the recommendation).
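The three questions above amount to a short decision tree. Here is a minimal sketch in Python (the function name, argument names, and return labels are our own shorthand, not FDA terminology; real determinations depend on intended use, claims, and guidance review):

```python
def classify_software(medical_purpose: bool,
                      interprets_or_recommends: bool,
                      standalone_platform: bool) -> str:
    """Rough sketch of the 3-question test from this article."""
    if not medical_purpose:
        return "likely not a medical device"          # Q1: no medical intended use
    if not interprets_or_recommends:
        return "likely not a medical device"          # Q2: display/store/transmit only
    if standalone_platform:
        return "may be SaMD"                          # Q3: general-purpose platform
    return "device software function (SiMD-style)"    # Q3: embedded in / controls hardware

# Example 2 above: mole-photo melanoma app on a phone
print(classify_software(True, True, True))  # may be SaMD
```

Treat the output as a starting hypothesis to confirm against FDA’s classification database, not a determination.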


If you’re on the fence, Complizen can help you quickly map intended use → likely product code/predicate set → the FDA software evidence checklist so you don’t guess wrong.




Clinical Decision Support Exemption: When Software Is NOT a Medical Device


The 21st Century Cures Act (Section 3060) created a narrow exemption for certain clinical decision support (CDS) functions. FDA refers to this as the “non-device CDS” criteria in FD&C Act §520(o)(1)(E). To be excluded from FDA’s device definition, your software must meet all four criteria.



The Four Criteria (ALL Must Be Met)



Criterion 1: NOT intended to acquire, process, or analyze medical images or signals


This includes medical images, signals from in vitro diagnostics (IVDs), or patterns/signals from a signal acquisition system (e.g., ECG waveforms). If your software analyzes any of these to drive a medical purpose, it generally does not qualify for the CDS exemption.


Examples that fail Criterion 1:

  • ECG analysis algorithm (analyzes waveform patterns)

  • Retinal image screening for diabetic retinopathy (analyzes images)

  • Dermatology app analyzing skin photos (analyzes images)



Criterion 2: Intended to display, analyze, or print medical information


This can be patient-specific information (e.g., labs, meds, problems) and/or other medical information (e.g., clinical practice guidelines, drug reference information).



Criterion 3: Intended to support or provide recommendations to a health care professional (HCP)


The function must be intended for HCP use (not patients/caregivers) for prevention, diagnosis, or treatment decisions.



Criterion 4: Intended to enable the HCP to independently review the basis for the recommendations


This is the most misunderstood requirement. FDA’s focus is whether the clinician can see the key inputs + rationale (e.g., guideline citations, rule logic, patient factors used) and independently evaluate the recommendation—so they don’t “primarily rely” on the software. Black-box risk scores without a reviewable basis often fail here.


Practical tip: In Complizen, teams typically map each CDS criterion to the exact FDA guidance section while drafting their regulatory rationale—so you can defend “non-device CDS” claims with traceable citations (instead of vibes).  



CDS Exemption Examples



Likely qualifies (NOT regulated): Drug–drug interaction checker

  • Pulls meds from the EHR

  • Flags known interactions from a referenced database

  • Shows the interaction mechanism/source so the clinician can verify

  • HCP makes the final prescribing decision  


Often does NOT qualify (IS regulated): Sepsis prediction / deterioration risk tool

Many sepsis predictors fail because the clinician cannot independently review the basis of the risk score (opaque model features/weights, no transparent rationale), and/or the function crosses into FDA’s “analyze” category depending on inputs and claims.  


Does NOT qualify (IS regulated): AFib detection from watch ECG

An AFib algorithm analyzes an ECG signal and outputs a diagnostic conclusion, so it does not meet the non-device CDS criteria.  



Common CDS Exemption Mistakes


Mistake 1: “We provide decision support, so we’re exempt.”

Reality: The exemption is narrow. Missing one criterion can make the function a regulated device.


Mistake 2: “Physicians review our output, so it doesn’t replace judgment.”

Reality: FDA’s real question is whether the clinician can independently review the basis of the recommendation.


Mistake 3: “We use guidelines, so we’re exempt.”

Reality: If you apply patient-specific analysis in a way that doesn’t allow independent review of the basis, you can still fail Criterion 4.





FDA Classification for SaMD: Class I, II, or III


SaMD is regulated under FDA’s existing medical device classification system (Class I, II, or III). Classification is driven by your software’s intended use and the risk if the software output is wrong, not by the fact that the product is “software.”  


Important nuance: “AI/ML-enabled” and “SaMD” overlap, but they are not the same category. Use AI/ML statistics to describe the AI landscape, not to “prove” SaMD class distribution.



Class I SaMD: Lower Risk


Some medical device software may fall into Class I when the risk of harm from incorrect output is low, and the product fits an exempt classification. Many software products still land in Class II, so you should confirm classification by product code + regulation number in FDA’s Product Classification Database.  



Class II SaMD: Most Common Pathway for Many Software Products


A large share of regulated medical software products are Class II and go through 510(k). For example, in a 2024 analysis of FDA-authorized Class II ML-enabled devices, 94.6% were cleared via 510(k). This does not define all SaMD, but it shows how often FDA uses 510(k) for moderate-risk ML-enabled products.  



Class III SaMD: Highest Risk


Class III is reserved for software where failure could reasonably cause death or serious injury, and where general and special controls are insufficient. If you think your software may be Class III, confirm early using FDA classification records and consider a Pre-Sub or 513(g) classification request (depending on the situation).



How to Confirm Your SaMD Class (fastest practical workflow)

  1. Identify your likely product code and 21 CFR regulation number in FDA’s Product Classification Database.

  2. Cross-check recent predicates in the 510(k) database for similar intended use and function.

  3. If it’s ambiguous, get FDA feedback (Pre-Sub) or a formal classification determination (513(g)).


If you want to reduce “classification guesswork,” a tool like Complizen can speed up steps (1) and (2) by mapping your description to likely product codes and pulling the FDA source records, but you should still confirm the final classification against FDA’s database before you ship anything.





IMDRF SaMD Risk Categories: Understanding I, II, III, IV Framework


The International Medical Device Regulators Forum (IMDRF) created a risk categorization framework specifically for SaMD. This framework is not the same as FDA’s Class I, II, III system, but it helps teams estimate how much evidence regulators will expect.  



Two Risk Factors


Factor 1: Significance of Information Provided

  • Inform: Provides information to inform clinical management

  • Drive: Information drives clinical management decisions (aids decision-making)

  • Diagnose/Treat: Software diagnoses or treats, or directly drives treatment decisions


Factor 2: State of Healthcare Situation or Condition

  • Non-serious condition: Typically temporary or reversible

  • Serious condition: Long-term consequences, not immediately life-threatening

  • Critical condition: Life-threatening or risk of permanent impairment



IMDRF Category Examples (rule of thumb)


  • Category I (low impact): Informing non-serious or serious conditions, or driving non-serious decisions (for example, trend displays that support clinician review).

  • Category II (medium impact): Informing critical situations, or diagnosing/treating non-serious conditions (for example, tools that calculate a known score or recommendation for a non-serious condition, depending on claims).  

  • Category III (high impact): Driving decisions for critical conditions, or diagnosing/treating serious conditions (for example, stroke triage alerts or diagnostic support where the condition is serious).  

  • Category IV (highest impact): Diagnosing or treating critical conditions (for example, autonomous diagnosis or software that directly drives life-critical therapy decisions).  
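The two factors combine into a 3×3 matrix in IMDRF’s framework. A minimal lookup sketch (the matrix values mirror the IMDRF risk categorization framework; the Python names are our own):

```python
# IMDRF SaMD risk category lookup: (condition state, significance) -> category.
IMDRF_MATRIX = {
    ("critical",    "treat/diagnose"): "IV",
    ("critical",    "drive"):          "III",
    ("critical",    "inform"):         "II",
    ("serious",     "treat/diagnose"): "III",
    ("serious",     "drive"):          "II",
    ("serious",     "inform"):         "I",
    ("non-serious", "treat/diagnose"): "II",
    ("non-serious", "drive"):          "I",
    ("non-serious", "inform"):         "I",
}

def imdrf_category(condition: str, significance: str) -> str:
    return IMDRF_MATRIX[(condition, significance)]

# Stroke triage alert: critical condition, drives clinical management
print(imdrf_category("critical", "drive"))  # III
```

Remember this estimates evidence rigor only; the US pathway still comes from the FDA class and product code.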



IMDRF vs FDA Classification Relationship


Critical: IMDRF categories are not equivalent to FDA Classes.


  • IMDRF category helps estimate evidence rigor (how strong your analytical and clinical validation must be).  

  • FDA class and product code determine the US regulatory pathway (510(k), De Novo, PMA).  


Example (kept conservative): An AF notification or detection feature is often regulated as Class II software functionality in the FDA classification system, but the exact product code and requirements depend on the intended use and claims.





What FDA Requires for SaMD 510(k) Clearance


Many SaMD products, especially Class II software functions, are reviewed through the 510(k) pathway, and the fastest way to avoid delays is to ship a submission package that is complete, consistent, and evidence-linked.



Device Description Documentation


Intended use statement:

  • Medical purpose of the software

  • Target patient population

  • Clinical condition addressed

  • Example: “To detect atrial fibrillation in adults using single-lead ECG data.”


Indications for use:

  • Specific clinical scenarios where the software is used

  • User population (clinicians, trained staff, or patients)

  • Key limitations and contraindications


Operating environment:

  • Computing platforms (iOS/Android versions, browsers, OS)

  • Minimum hardware specs

  • Network connectivity assumptions (especially for cloud processing)


System architecture diagram:

  • Data flow from input → processing → output

  • Major components (mobile, cloud, APIs)

  • External integrations (EHR, PACS, devices)


Practical tip: tools like Complizen can help teams keep this “device description to evidence” chain tight by organizing product code context, relevant FDA guidance, and supporting artifacts in one workspace, so your submission stays consistent section-to-section.



Software Lifecycle Documentation and V&V (commonly aligned to IEC 62304)


FDA recognizes IEC 62304 as a consensus standard for medical device software lifecycle processes, and many teams align their software lifecycle evidence to it.


Software Requirements Specification (SRS):

  • Functional requirements

  • Performance requirements (accuracy, latency where relevant)

  • Interface requirements (inputs, outputs, UI)

  • Data requirements (formats, validation rules)


Software Design Specification (SDS):

  • Architecture (modules/components)

  • Algorithm description (including AI/ML model behavior where applicable)

  • Interfaces and data handling


Verification and Validation (V&V):

  • Unit, integration, and system testing

  • Validation tied to intended use and real-world workflow


Traceability matrix:

  • Requirements → design → implementation → tests → results

  • Make it easy for FDA to see coverage and closure.  
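A traceability matrix is ultimately a mapping you can machine-check. A toy sketch of a coverage check (requirement and test IDs are hypothetical):

```python
# Toy traceability check: every requirement must trace to at least one test.
trace = {
    "REQ-001": ["TEST-010", "TEST-011"],   # hypothetical IDs
    "REQ-002": ["TEST-020"],
    "REQ-003": [],                         # gap: no verifying test
}

gaps = [req for req, tests in trace.items() if not tests]
print("uncovered requirements:", gaps)  # ['REQ-003']
```

Running a check like this before submission is far cheaper than having FDA find the coverage gap for you.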


Configuration management and version control:

  • Versioning scheme

  • Change control

  • Release notes tied to risk and validation scope



Clinical and Analytical Validation


Performance metrics:

  • Sensitivity, specificity, PPV/NPV (as applicable)

  • Subgroup performance where clinically relevant
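For reference, these metrics reduce to simple confusion-matrix arithmetic; the real statistical work is in thresholds, confidence intervals, and sample-size justification. A minimal sketch (the counts below are invented for illustration):

```python
def performance_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV, NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# Hypothetical validation run: 90 TP, 20 FP, 10 FN, 380 TN
m = performance_metrics(tp=90, fp=20, fn=10, tn=380)
print(f"sensitivity={m['sensitivity']:.2f}, specificity={m['specificity']:.2f}")
# sensitivity=0.90, specificity=0.95
```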


Validation dataset:

  • Independent from training data (for AI/ML)

  • Clinically representative cases

  • Statistically justified sample size (avoid “one-size-fits-all” minimums)


Ground truth:

  • Clear reference standard

  • Expert adjudication process if subjective labeling is involved


Analytical vs clinical evidence:

  • Many Class II SaMD rely heavily on analytical validation, but FDA may still expect clinical evidence depending on claims, risk, and predicates.  



Cybersecurity (FDA guidance, and what reviewers actually look for)


FDA’s cybersecurity expectations include a security risk management story that is auditable and complete, and for applicable products, an SBOM as part of the submission content.  


Threat modeling:

  • Attack surfaces (network, APIs, cloud, UI)

  • Plausible threat actors and abuse cases


Security risk assessment:

  • Likelihood, impact, mitigations, residual risk rationale


SBOM:

  • Machine-readable and industry-accepted; common formats include SPDX and CycloneDX  
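As a concrete illustration, a minimal CycloneDX-style SBOM entry looks like the following (top-level field names follow the CycloneDX JSON schema; the component listed is a hypothetical example dependency):

```python
import json

# Minimal CycloneDX 1.5-style SBOM skeleton with one hypothetical component.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "numpy",                      # example third-party dependency
            "version": "1.26.4",
            "purl": "pkg:pypi/numpy@1.26.4",      # package URL identifier
        }
    ],
}
print(json.dumps(sbom, indent=2))
```

A real SBOM is generated from your build tooling, not written by hand, and lists every third-party component you ship.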


Security controls:

  • AuthN/AuthZ, encryption in transit/at rest

  • Audit logging, secure update mechanisms

  • Vulnerability intake and patching process



Usability and Human Factors


Use-related risk analysis:

  • Foreseeable use errors and harm severity

  • Mitigations via UI, labeling, workflow constraints


Usability validation:

  • Often includes ~15 representative users per distinct user group, but the right number depends on your user groups and critical tasks.  



Interoperability Testing


Data standards (as applicable):

  • HL7 FHIR (EHR), DICOM (imaging), relevant IEEE signal standards


Integration and edge cases:

  • Missing, malformed, out-of-range data

  • Network failures and degraded modes

  • Safe handling of incorrect inputs
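Safe handling of incorrect inputs usually means explicit validation at every ingestion boundary, so bad data fails loudly instead of flowing into clinical logic. A minimal sketch (the signal and plausibility limits are invented for illustration):

```python
def validate_heart_rate(value) -> int:
    """Reject missing, malformed, or out-of-range input instead of
    silently passing it downstream (limits are illustrative only)."""
    if value is None:
        raise ValueError("missing heart-rate sample")
    try:
        bpm = int(value)
    except (TypeError, ValueError):
        raise ValueError(f"malformed heart-rate sample: {value!r}")
    if not 20 <= bpm <= 300:   # physiologically implausible values
        raise ValueError(f"out-of-range heart rate: {bpm}")
    return bpm

print(validate_heart_rate("72"))  # 72
```

The same pattern applies to malformed HL7/DICOM payloads: validate structure and ranges at the boundary, and define a documented degraded mode for failures.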





SaMD Testing Requirements: What You Must Demonstrate


Testing requirements vary by SaMD classification and intended use, but most FDA-cleared SaMD must demonstrate clinical performance, software verification and validation, cybersecurity, platform compatibility, and usable, safe operation.



Performance Testing


Algorithm performance and clinical relevance

  • Test on a clinically representative dataset that matches your intended use, indications, and use environment.

  • Sample size should be statistically justified based on prevalence, endpoints, and the risk of false positives and false negatives, often requiring hundreds or more cases for imaging AI, depending on the claim.  

  • Document performance failure modes, including false positives, false negatives, and edge cases.


Demographic and subgroup performance

  • Evaluate performance across clinically relevant subgroups, and document limitations. This is increasingly expected under good ML practice principles and broader regulatory expectations for representative data and robust evaluation.  


Comparison to predicate (510(k) context)

  • When feasible, support substantial equivalence using comparative testing, benchmarks, or clinically justified reference performance targets, and explain your statistical approach.  


Practical tip

  • Tools like Complizen help teams stay organized by mapping the testing evidence you have to the specific FDA expectations and guidance you’ll be judged against, so you do not discover “missing proof” only after an Additional Information (AI) request.




Cybersecurity Testing


For SaMD or connected devices, FDA expects security to be addressed through security risk management and security testing evidence appropriate to your threat model.


Typical evidence includes:

  • vulnerability scanning and penetration testing (often performed by qualified third parties),

  • verification of encryption and access controls,

  • SBOM documentation and vulnerability management procedures.  



Platform Compatibility Testing


FDA does not mandate specific OS versions or phone models, but it does expect you to:

  • define your supported operating environment (OS versions, browsers, hardware constraints) in your documentation and labeling,

  • validate performance in that supported environment,

  • maintain a change-control and update strategy when platforms evolve.  



Usability Testing


Usability evidence should show intended users can use the SaMD safely and effectively in representative scenarios.

  • Summative validation commonly targets around 15 participants per distinct user group, adjusted based on risk, user diversity, and critical tasks.  

  • Focus scenarios on critical tasks, error recovery, and foreseeable misuse.



Regression Testing


Because SaMD changes frequently, you need a defensible approach to showing updates do not break safety or performance.

  • Use risk-based regression testing tied to changes in clinical logic, risk controls, cybersecurity, and interfaces.

  • Trigger broader re-validation when changes could alter clinical performance or safety-critical behavior. 
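Risk-based regression selection is often implemented by tagging tests with the risk areas they cover and selecting by change impact. A toy sketch (test names and tags are invented):

```python
# Map each regression test to the risk areas it covers (illustrative tags).
TEST_TAGS = {
    "test_afib_threshold": {"clinical-logic"},
    "test_alert_routing":  {"risk-controls", "interfaces"},
    "test_tls_pinning":    {"cybersecurity"},
    "test_export_format":  {"interfaces"},
}

def select_regression_tests(changed_areas: set) -> list:
    """Pick every test whose tags intersect the areas a change touches."""
    return sorted(t for t, tags in TEST_TAGS.items() if tags & changed_areas)

# A change to alert escalation logic touches risk controls:
print(select_regression_tests({"risk-controls"}))  # ['test_alert_routing']
```

The tagging scheme itself should trace back to your risk analysis, so the selection rule is defensible in an audit.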





SaMD Regulatory Pathways and Timelines



510(k) Premarket Notification (Most Common)


When 510(k) applies:

  • Class II SaMD (most software)

  • Some Class I SaMD (if not exempt)

  • Substantially equivalent predicates exist


Process:


Month 1-3: Development and internal testing

  • Build software to production quality

  • Start IEC 62304 documentation

  • Draft intended use and indications early (you will refine later)


Month 4-5: Validation testing

  • Clinical or analytical validation (depends on risk and claims)

  • Cybersecurity testing

  • Usability testing

  • Platform compatibility testing


Month 6: 510(k) preparation

  • Device description and architecture diagrams

  • Substantial equivalence comparison to predicates

  • Test reports and traceability matrix

  • Labeling (IFU, warnings, limitations)

  • Compile in eSTAR format (required for 510(k) submissions)  


Month 7: Submission

  • Submit via eSTAR

  • Pay fee: $26,067 standard, $6,517 small business (FY2026)  


Month 7-12: FDA review

  • Day 1-15: Acceptance review

  • Day 15-90: Substantive review

  • If Additional Information requested, timeline extends based on your response cycle

  • Typical total time in practice often exceeds the statutory goal due to interactive review cycles


Month 12: Clearance

  • Receive clearance letter with K-number

  • Register establishment ($11,423 annual fee, FY2026)  

  • List device in FURLS

  • Begin commercial distribution


Total timeline: 6-12 months

Total cost: $75,000-$200,000+ (depends heavily on validation scope and cybersecurity)


Teams use Complizen to find predicates, confirm product codes, and map required standards fast, so your 510(k) plan is grounded before you burn weeks on the wrong path.



De Novo Classification (Novel Low-Moderate Risk SaMD)


When De Novo applies:

  • No substantially equivalent predicate exists

  • Software is low-moderate risk (would be Class I or II if classified)

  • Novel intended use or novel technology


Process:

  • Similar evidence package to a strong 510(k), but FDA is also defining the classification and controls

  • FDA’s MDUFA performance goal is a decision within 150 FDA days for 70% of De Novo requests  


Timeline: commonly ~9-18 months end-to-end (build + evidence + FDA cycle), varies widely by device and questions

Cost: typically higher than a 510(k) because you are creating the “first-of-type” playbook



Premarket Approval (PMA) for High-Risk SaMD


When PMA is required:

  • Class III SaMD (life-critical software)

  • No viable 510(k) pathway


Process:

  • Often requires clinical evidence demonstrating safety and effectiveness

  • Deep manufacturing and quality review expectations still apply, even for “software-only” companies

  • Longer FDA cycle with higher evidence burden


Timeline: typically multi-year

Cost: typically $1M+ all-in (often much higher), driven by clinical and validation scope





Common SaMD Regulatory Mistakes



Mistake 1: “It’s Just an App, Not a Medical Device”


Problem: Developers underestimate regulatory scope because the product is “software.”


Reality: Platform does not matter.


Intended use determines whether it is a medical device.


Example:

  • Wellness: “Track your heart rate during exercise.”

  • Medical device: “Detect atrial fibrillation.” → Diagnostic claim, regulated SaMD.


Fix: Evaluate intended use, not the platform. If the software diagnoses, treats, prevents, monitors disease, or informs clinical decisions, assume it is regulated until proven otherwise.



Mistake 2: “We’re Clinical Decision Support, So We’re Exempt”


Problem: Teams treat the CDS exemption like a blanket carve-out.


Reality: The CDS exemption is narrow, and most products fail at least one condition.


Common failure modes:

  • Analyzes ECG or imaging → fails the exemption criteria

  • Patient-facing → fails the exemption criteria

  • Interprets patient data to output a novel conclusion → fails the exemption criteria


Fix: Check the CDS exemption requirements carefully, and document your rationale. If you are applying patient-specific logic to produce a clinical conclusion, you are almost always regulated.



Mistake 3: “Wellness Apps Aren’t Regulated”


Problem: Teams assume “health” language is always safe.


Reality: The dividing line is the claims you make, not the “health” framing.


Not regulated (typically):

  • Steps, sleep, meditation, general fitness trends


Regulated (typically):

  • Disease claims (AFib, sepsis, pneumonia, diabetes dosing)

  • Diagnosis, treatment recommendations, screening, clinical risk stratification


Fix: If you want to stay wellness, avoid disease claims and clinical decision language entirely.



Mistake 4: “We Can Update Software Freely After Clearance”


Problem: Treating post-clearance updates like normal SaaS shipping.


Reality: Some updates can trigger a new submission if they impact intended use, safety, or performance. FDA’s “when to submit” framework is the anchor here.


Examples that often trigger regulatory action:

  • New disease/indication

  • Major algorithm change (especially ML model replacement)

  • Expanded patient population (adult to pediatric)

  • New platform inputs that change performance risk


Examples that usually do not:

  • Bug fixes

  • UI polish

  • Security patches that do not change clinical behavior


Fix: Maintain a disciplined change-control process and document impact assessments per FDA expectations.



Mistake 5: “EU CE Mark Equals FDA Clearance”


Problem: Assuming EU authorization transfers to the US.


Reality: CE Mark and FDA clearance are separate regulatory systems with different evidentiary expectations.


Fix: Use EU work as leverage (clinical data, risk files, QMS maturity), but plan for an FDA-native submission strategy.





AI/ML SaMD: Special Considerations


FDA maintains a public list of AI-enabled devices authorized for marketing in the US.  



Predetermined Change Control Plan (PCCP)


What it is: A structured way to pre-specify certain ML changes so you are not forced into a brand-new submission for every update. FDA has issued PCCP guidance.


PCCP typically includes:

  1. Description of modifications (what you will change)

  2. Modification protocol (how you will validate changes)

  3. Impact assessment (risk controls, monitoring, rollback)


Practical tip: If you are going to ship ML updates regularly, plan PCCP early, not after you have scaled.


Tools like Complizen help you map your SaMD to the right FDA expectations, including the specific guidance, standards, and submission artifacts that tend to get flagged in review, so your team is not reverse-engineering requirements from scratch.



Algorithm transparency and validation discipline


Even when you have predicates, FDA expects clear documentation of:

  • Intended use boundaries

  • Validation dataset characteristics

  • Performance metrics and limitations

  • Subgroup performance where relevant


This is where most teams underestimate effort, especially if they are used to “model cards” that are not submission-grade.



Post-market performance monitoring


For ML systems, you should be ready to show:

  • Drift monitoring plan

  • Trigger thresholds for retraining or rollback

  • Complaint and signal triage workflow

  • Security and vulnerability management (especially for networked systems)  
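The “drift monitoring plan” bullet above is where teams most often have nothing concrete to show. A minimal sketch of one common drift signal is the Population Stability Index (PSI) between validation-era model scores and recent production scores. The 10-bin layout and the 0.2 alert threshold below are common industry conventions, not FDA-mandated values, and all names are illustrative:

```python
# Hedged sketch: compare the score distribution your model produced during
# validation against recent production scores using the Population Stability
# Index (PSI). Crossing the threshold would feed the retraining/rollback
# triggers described in your monitoring plan.
import numpy as np

def psi(reference: np.ndarray, production: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two score samples."""
    edges = np.quantile(reference, np.linspace(0.0, 1.0, bins + 1))
    # Clip both samples into the reference range so every score lands in a bin.
    ref_counts, _ = np.histogram(np.clip(reference, edges[0], edges[-1]), edges)
    prod_counts, _ = np.histogram(np.clip(production, edges[0], edges[-1]), edges)
    ref_pct = np.clip(ref_counts / len(reference), 1e-6, None)    # avoid log(0)
    prod_pct = np.clip(prod_counts / len(production), 1e-6, None)
    return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.5, 0.1, 5000)   # scores captured during validation
shifted = rng.normal(0.6, 0.1, 5000)    # drifted production scores

if psi(baseline, shifted) > 0.2:        # conventional "significant shift" cutoff
    print("drift alert: trigger the review / rollback workflow")
```

In a real monitoring plan, the threshold, window size, and response (alert, retrain, roll back) would be pre-specified and documented, ideally inside your PCCP if you have one.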






Frequently Asked Questions


Is my mobile health app a medical device?

If it diagnoses, treats, or guides clinical decisions, yes. If it is wellness only (steps, fitness, general health) with no disease claims, usually no.


What’s the difference between SaMD and SiMD?

SaMD is standalone software (phone, web, cloud). SiMD is software embedded in hardware (a pump or pacemaker). SaMD is cleared as its own device; SiMD is reviewed as part of the hardware device.


How much does SaMD clearance cost?

Typical Class II 510(k): $75k–$200k all-in. Class III PMA: $1M–$5M+ (mostly because of clinical trials).


How long does SaMD 510(k) clearance take?

Usually 6–12 months. Additional Information (AI) requests from FDA can add a few months if your submission is incomplete.


Can I update my SaMD after clearance?

Small fixes, yes. Big changes (new indications, major algorithm changes, new patient population) often need a new 510(k). PCCPs can help for AI/ML updates if you planned them up front.


Do I need clinical trials for SaMD?

Often no for Class II. Many SaMD products clear with analytical validation on retrospective datasets. Trials are more common for high-risk or novel software.


What is the clinical decision support exemption?

A narrow carve-out. To qualify, the software must be HCP-facing, must not acquire or analyze medical images or signals, and must let the clinician independently review the basis for its recommendations rather than replacing clinical judgment.


IMDRF risk category vs FDA class?

IMDRF categories guide evidence rigor globally. FDA Class I/II/III determines the US pathway. They are not 1-to-1.


What testing is usually required for SaMD 510(k)?

IEC 62304 V&V + traceability, clinical/analytical validation, cybersecurity (threat model + SBOM), usability (15+ users), and platform compatibility.


Can I sell internationally with FDA clearance?

No. FDA clearance covers the US only. The EU, Canada, Australia, and Japan each require their own approvals; FDA clearance can help but does not replace them.
