10 Reasons FDA Submissions Fail — Even When You Think You're Ready
- Beng Ee Lim
FDA 510(k) submissions most often stumble on preventable issues: incomplete or non-compliant admin packets at Refuse to Accept (RTA) review, weak or inconsistent device descriptions and indications for use, poor predicate selection, missing or insufficient testing data, and failure to follow device-specific FDA guidance. Historically, RTA refusal rates have climbed as high as ~60% of new 510(k)s, and even in recent years roughly one-third of submissions hit an RTA hold at least once. On top of that, around two-thirds of 510(k)s receive an Additional Information (AI) request during substantive review. Many of these delays are avoidable with disciplined use of the FDA RTA checklists, eSTAR, and well-scoped Pre-Sub (Q-Sub) meetings with FDA reviewers.

You spent 12 months preparing your FDA submission. Hired consultants. Triple-checked the documentation. Then, within 15 days, you receive an email from FDA: your 510(k) never even made it to substantive review. It has been placed on RTA hold because of basic administrative and content gaps.
Here is the reality: historically, up to ~60% of newly filed 510(k)s have been refused at the RTA stage in some years, and more recent FDA data still show around 30% of submissions placed on RTA hold. Even after passing RTA, about two-thirds of 510(k)s receive an AI request, which pauses the review clock and forces another response cycle. Each of these touchpoints can add months of delay. FDA gives you up to 180 days to respond to an RTA or AI letter, and even “fast” teams often lose 30–60 days per cycle between analysis, new testing, and rewriting. That easily translates into tens of thousands of dollars in additional testing, consulting, and opportunity cost for every major delay.
This guide breaks down the 10 most common ways FDA 510(k) submissions go off the rails, and, more importantly, how to avoid them so you can get to “Substantially Equivalent” faster and with fewer surprises.
1. Administrative Incompleteness (Up to 60% of Submissions Have Failed Here)
The Problem:
The FDA’s Refuse to Accept (RTA) review is not a simple paperwork check. It is a structured, criteria-driven evaluation using detailed RTA checklists that contain dozens of acceptance criteria across device description, indications for use, labeling, performance testing, biocompatibility, cybersecurity, sterilization, and more. Missing or inadequately addressing any required item can trigger an RTA hold.
Historically, when the RTA program was first introduced, up to ~60% of new 510(k)s were refused at the RTA stage. Today, more recent FDA data show that approximately one-third of submissions still receive an RTA hold at least once. In short, RTA remains one of the most common and preventable causes of delay.
Real-World Examples of RTA Triggers (Industry Cases):
These are representative industry examples frequently cited by seasoned regulatory consultants:
Biocompatibility gap: A condom manufacturer submitted biocompatibility data but failed to specify whether patient contact duration exceeded 24 hours, leaving reviewers unable to determine the correct ISO 10993 endpoints. Result: RTA Hold.
Pagination error: A submission included two copies of page 75 instead of pages 75 and 76. Because the missing page contained required content, reviewers issued an RTA Hold.
Incorrect device name: A company used the product code name instead of the official classification regulation name, creating ambiguity about the legally marketed predicate. Another RTA Hold.
These issues are small but fatal at the RTA stage.
How to Prevent Administrative Failure:
Download the correct FDA RTA checklist for your 510(k) type (Traditional, Special, or Abbreviated).
Conduct a line-by-line gap analysis, noting exactly where each required item appears in your submission (a simple tracker like the sketch after this list can help).
Use FDA eSTAR (now required for most 510(k) submissions), which replaces the traditional RTA process and prevents many structural mistakes automatically.
Provide justification for all “Not Applicable” items rather than leaving them blank. FDA expects rationale, not omission.
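If your team tracks the RTA checklist in a spreadsheet, a small script can flag items that have neither a documented location in the submission nor a written Not Applicable rationale before you file. This is a minimal sketch, assuming a CSV export with hypothetical column names (item, location, status, rationale); adapt it to however your checklist is actually recorded.

```python
import csv

def find_gaps(checklist_csv: str) -> list[dict]:
    """Flag RTA checklist items with no submission location and no N/A rationale.

    Assumes hypothetical CSV columns: item, location, status, rationale.
    """
    gaps = []
    with open(checklist_csv, newline="") as f:
        for row in csv.DictReader(f):
            location = (row.get("location") or "").strip()
            status = (row.get("status") or "").strip().lower()
            rationale = (row.get("rationale") or "").strip()
            # A gap is an item with no location in the submission that is not
            # marked N/A with a written justification.
            if not location and not (status == "n/a" and rationale):
                gaps.append(row)
    return gaps

if __name__ == "__main__":
    for item in find_gaps("rta_checklist.csv"):
        print(f"GAP: {item.get('item')}")
```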
Cost of Failure:
An RTA hold stops your submission before the FDA review clock even begins. Sponsors typically need 30–60+ days to reassemble documents, clarify missing elements, generate rationale, or fix formatting inconsistencies. For many companies, this delay means tens of thousands of dollars in additional consulting time, testing, or lost time-to-market.
2. Inadequate Device Description
The Problem:
If the FDA reviewer cannot clearly understand what your device is, what it does, and how it works, they cannot determine whether your testing strategy appropriately supports substantial equivalence. Device Description deficiencies are one of the most frequently cited issues in 510(k) Additional Information (AI) requests.
A strong description is foundational. Almost every downstream decision—predicate comparison, risk assessment, biocompatibility endpoints, performance testing—depends on it.
What FDA Expects to See:
A complete 510(k) Device Description typically includes:
What the device is: physical description, materials, dimensions, components
What it does: intended function and mechanism of action
How it works: operational principles, energy sources, workflow steps
Who uses it and where: patient vs. clinician, home vs. clinical setting
These elements appear explicitly in the FDA 510(k) Acceptance Checklist.
Why Submissions Fail Here
Most device descriptions fail for one of two reasons:
Too vague: e.g., “a surgical instrument for orthopedic procedures”
Too technical: dense engineering specs without context, making it unclear how the device functions or what testing is appropriate
Reviewers must understand the device well enough to judge its technological characteristics and testing strategy. Anything unclear generates an AI request.
Prevention
Write for a scientifically literate reviewer outside your specialty, not a product engineer.
Use diagrams, labeled photos, or system architecture illustrations to anchor explanations.
Ensure the description aligns exactly with your Indications for Use—reviewers check for consistency.
Include operational steps or workflow summaries when relevant.
A clear, structured description reduces ambiguity and prevents one of the most common FDA objections.
Complizen automatically retrieves and writes device descriptions from cleared predicate devices, helping you see how FDA expects similar devices in your category to be described. This provides a real-world benchmark for tone, structure, and level of detail.
3. Inconsistent Indications for Use
The Problem:
Your Indications for Use (IFU) must be internally consistent everywhere they appear in your submission. FDA reviewers compare the wording across FDA Form 3881, the 510(k) Summary, labeling, and any narrative descriptions. Even small wording differences—synonyms, reordered clauses, missing qualifiers—create uncertainty about intended use and trigger questions.
Real-World Example (Industry Case):
A sponsor described its device as:
“for monitoring heart rate during physical activity” in one section, and
“for tracking cardiac activity during exercise” in another.
Although the phrases were clinically equivalent, FDA issued an AI request because the wording was not identical, raising concerns about whether the intended use had shifted.
Why This Happens:
Small teams often rephrase sentences while editing different sections, not realizing the FDA treats the Indications for Use as a tightly controlled, legally meaningful statement. Any deviation signals possible changes in:
Target population
Use environment
Duration of use
Intended user (patient vs. provider)
Physiological parameter measured
Prescription vs. OTC status
If two versions differ even slightly, FDA assumes the device’s intended use may not be consistently represented.
Prevention:
Create a master Indications for Use statement as a controlled text block.
Copy/paste this exact wording into FDA Form 3881, the 510(k) Summary, labeling, and all narrative sections.
Use a structured comparison framework when evaluating predicates, such as: OTC vs. prescription, intended user, patient population, condition/illness, duration of use, environment of use, and anatomical site.
Perform a full-document text search (PDF search) to verify the IFU appears identically everywhere (a short script like the one sketched after this list can automate the check).
Ensure the IFU wording aligns cleanly with your Device Description and Technological Characteristics.
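One way to automate that verbatim check is a short script that extracts the text of every PDF in your submission package and reports which files do not contain the master IFU statement word for word. This is a minimal sketch, assuming the pypdf library, a folder of per-section PDFs, and a placeholder IFU string; whitespace is normalized because line breaks differ between documents.

```python
import re
from pathlib import Path

from pypdf import PdfReader  # pip install pypdf

# Hypothetical IFU text; paste your controlled master statement here.
MASTER_IFU = (
    "The XYZ Monitor is intended for monitoring heart rate "
    "during physical activity in adults."
)

def normalize(text: str) -> str:
    """Collapse whitespace so line breaks and spacing do not hide a match."""
    return re.sub(r"\s+", " ", text).strip().lower()

def check_ifu(submission_dir: str) -> None:
    """Report whether each PDF contains the master IFU statement verbatim."""
    target = normalize(MASTER_IFU)
    for pdf_path in sorted(Path(submission_dir).glob("*.pdf")):
        pages = PdfReader(str(pdf_path)).pages
        full_text = normalize(" ".join(page.extract_text() or "" for page in pages))
        status = "OK" if target in full_text else "IFU NOT FOUND VERBATIM"
        print(f"{pdf_path.name}: {status}")

if __name__ == "__main__":
    check_ifu("submission_package")
```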
Cost of Failure:
An IFU inconsistency almost always results in an AI request, which pauses the FDA review clock. Sponsors typically spend 30–60+ days revising documents, updating labeling, and re-harmonizing the submission. The FDA allows up to 180 days for a complete response.
4. Wrong Predicate Selection
The Problem:
Choosing the wrong FDA 510(k) predicate device is one of the most expensive mistakes you can make in a submission. If your predicate has a different intended use or different technological characteristics that raise new questions of safety or effectiveness, the likely outcome is a Not Substantially Equivalent (NSE) decision.
An NSE forces you to pivot to a new predicate, redesign the testing strategy, or move into the De Novo or PMA pathway. In practice, that can easily consume 6–18 months and tens to hundreds of thousands of dollars in additional testing, consulting, and internal rework.
FDA’s own guidance notes that while NSE decisions are relatively rare overall, about 10 percent of them are due to a new intended use, which automatically makes the device ineligible for 510(k) clearance.
What Makes a Predicate “Wrong”
A predicate device is likely inappropriate if:
It is not legally marketed. For example, it was never properly cleared, has been withdrawn for safety reasons, or cannot be confirmed as a legally marketed device in FDA’s database.
It does not have the same intended use. FDA requires the same intended use as the predicate for a substantial equivalence finding. “Similar” is not enough if your indications create a new intended use.
Its technological characteristics raise new safety or effectiveness questions. If design differences create new questions that are not present for the predicate, FDA may issue an NSE, even if the indications for use look similar.
Real-World Style Examples:
These are representative scenarios based on how FDA applies the substantial equivalence framework:
🩹 Different intended use masked as similar language: A wound care device selected a predicate indicated for the treatment of chronic wounds, but the new device was positioned for prevention of pressure ulcers in at-risk patients. Prevention in a prophylactic population is a different intended use from treatment of existing chronic wounds, so FDA would likely consider this new intended use and issue an NSE.
📉 Predicate not clearly legally marketed: A sponsor chose a predicate from a company that was no longer active, with no current registration or listing evidence and unclear marketing status. Without confirmation that the device was properly cleared and legally marketed, FDA could determine there is no valid predicate, resulting in an NSE and forcing the sponsor toward De Novo.
These scenarios are exactly the kind of issues reviewers look for when evaluating predicate suitability.
Prevention:
Pull FDA Form 3881 for each potential predicate and perform a line-by-line comparison of indications for use against your device (the sketch after this list shows one way to pull candidate clearances programmatically).
Compare intended use at a high level, not just word choice. Make sure your device treats the same conditions, in the same population, in the same environment, for the same general purpose.
Evaluate technological characteristics explicitly. List each major design difference and document why it does not raise different questions of safety or effectiveness.
Use a Pre-Sub (Q-Submission) to get FDA feedback on your predicate rationale and comparison plan before you commit to expensive testing. FDA will not “pre-approve” your predicate, but they will signal concerns early.
If your intended use is clearly broader or different than any existing device, seriously consider a De Novo classification request instead of forcing a weak predicate fit.
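As a starting point for that comparison work, FDA's public openFDA API exposes 510(k) clearance records (K-number, device name, decision date) searchable by product code. The sketch below is one way to pull recent clearances for review; it assumes the requests library and the openFDA device/510k endpoint, and the product code shown is only an example. Always confirm candidates against the official 510(k) database and the cleared Form 3881 before relying on them.

```python
import requests  # pip install requests

def find_predicate_candidates(product_code: str, limit: int = 10) -> list[dict]:
    """Fetch recent 510(k) clearance records for a product code from openFDA."""
    resp = requests.get(
        "https://api.fda.gov/device/510k.json",
        params={
            "search": f"product_code:{product_code}",
            "sort": "decision_date:desc",
            "limit": limit,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

if __name__ == "__main__":
    # "DQA" is used here only as an example product code.
    for rec in find_predicate_candidates("DQA"):
        print(rec.get("k_number"), rec.get("decision_date"), rec.get("device_name"))
```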
Complizen’s Predicate Intelligence module automatically retrieves relevant 510(k) clearances and their indications for use so you can quickly see how similar devices are positioned, what their predicates were, and how FDA framed intended use. This helps you identify valid, legally marketed predicates with matching intended use and similar technological characteristics before you invest in testing.
5. Incomplete Performance Testing
The Problem:
For most medical devices submitted via the 510(k) pathway, the FDA expects performance data that demonstrates your device performs as well as (or comparably to) the legally marketed predicate in all relevant aspects. Without complete testing data, the FDA cannot evaluate substantial equivalence (SE) — which often triggers an Additional Information (AI) request.
Deficient performance documentation — incomplete protocols, missing raw data, partial summaries, absent statistical analysis — is among the most common reasons for AI requests.
What FDA Needs:
Your submission should include:
Complete test protocols: methodology, acceptance criteria, sample size, statistical plan.
Full test reports, including raw data or datasets (not just high-level summaries).
Testing that maps to every performance claim made in your labeling or intended use.
Evidence of compliance with FDA-recognized standards, or a clear, justified rationale for deviations.
This applies to non-clinical bench performance testing, which may cover mechanical, electrical, chemical, thermal, biocompatibility, and other safety/performance aspects.
Why Submissions Fail Here
Common failures include:
Submitting only summary tables or high-level results without raw data or detailed analysis.
Omitting statistical analysis or sufficient sample size justification.
Testing only some but not all intended claims.
Submitting “promises” to provide missing data later — which FDA rarely accepts in a 510(k).
Ignoring device-specific guidance or recognized standards.
Without full data upfront, the FDA cannot confidently conclude your device is substantially equivalent and safe — leading almost inevitably to an AI request.
Typical (Industry) Scenario
An inexperienced sponsor submitted a 510(k) for a microneedling device. They included a clinical summary showing average healing time improvement — but provided no per-patient data, no statistical analysis, and no raw safety or efficacy endpoints. Because the results were aggregated and lacked detail, FDA could not assess sample size adequacy, variance, or safety signals. Outcome: AI request to provide full datasets or new testing.
This kind of incomplete submission is a common “textbook example” of what not to do — treat it as a cautionary tale rather than an official FDA-denied case.
Prevention Checklist
Design a complete testing matrix before any test starts (bench, electrical, safety, biocompatibility, as needed); see the sketch after this checklist for one way to keep it auditable.
Use the matrix in a Pre-Submission (Q-Sub) to get FDA feedback on test scope and protocol.
Conduct all tests per recognized standards, or document justification if deviating.
Include full documentation in submission: detailed protocols, raw data, statistical analyses, complete test reports — not just summaries or charts.
Ensure every performance claim made in your labeling or intended use is backed by data.
Complizen helps you align your test plans with FDA-recognized standards by cross-referencing standards used by cleared predicates. This gives you a proven baseline — what FDA has already accepted — and helps you design test protocols that match regulatory expectations before you start testing.
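A lightweight way to enforce the rule that every claim is backed by data is to keep the testing matrix as structured data and run a coverage check before submission. This is a minimal sketch with hypothetical claims, tests, and standards; the point is the traceability check, not the specific entries.

```python
# Hypothetical testing matrix: each labeling or intended-use claim maps to the
# tests (and recognized standards) that substantiate it.
TEST_MATRIX = {
    "Maintains sterile barrier for shelf life": [
        ("Package seal-strength testing", "ASTM F88"),
    ],
    "Electrically safe for home use": [
        ("Electrical safety testing", "IEC 60601-1"),
        ("EMC testing", "IEC 60601-1-2"),
    ],
    "Skin-contact materials are biocompatible": [],  # not yet covered
}

def uncovered_claims(matrix: dict[str, list]) -> list[str]:
    """Return claims that have no supporting test mapped to them."""
    return [claim for claim, tests in matrix.items() if not tests]

if __name__ == "__main__":
    for claim in uncovered_claims(TEST_MATRIX):
        print(f"NO SUPPORTING TEST: {claim}")
```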
Cost of Failure (Delay & Expense)
If FDA issues an AI request for additional testing or data, you may see 60–180+ days of delay while you run tests, reanalyze data, and resubmit. Depending on test type and complexity, the extra cost can run into tens or even hundreds of thousands of dollars, especially for devices requiring extensive bench or clinical performance studies.
6. Missing Required Clinical Data
The Problem
Most 510(k) submissions can demonstrate substantial equivalence using bench, analytical, or non-clinical testing. However, certain devices require clinical data to show that technological differences do not introduce new risks or change real-world performance. When clinical evidence is required but not provided, FDA cannot complete its evaluation and will issue an Additional Information (AI) request or, in some cases, a Not Substantially Equivalent (NSE) decision.
Many companies mistakenly assume bench data is sufficient—even when device-specific FDA guidance clearly states that clinical validation is necessary.
When Clinical Data Is Required
FDA may require clinical evidence when:
Device-specific guidance documents call for it (common for diagnostic, imaging, implantable, or novel technologies).
Bench or analytical testing alone cannot confirm substantial equivalence.
Technological characteristics differ from the predicate in a way that raises potential clinical performance or safety questions.
Pre-Sub feedback indicates that clinical validation is needed before clearance.
These criteria are outlined in FDA’s substantial equivalence framework.
Why Submissions Fail Here
Common issues include:
Overlooking product-code-specific guidance documents that explicitly require clinical studies.
Submitting only clinical summaries without full datasets, statistical analyses, or subject-level data.
Designing studies without appropriate controls, endpoints, or sample sizes.
Relying on bench testing alone when technological differences clearly warrant clinical evidence.
FDA’s AI letters frequently request full patient-level datasets, detailed statistical plans, and unredacted reports when clinical validation is required.
Industry Example
A company submitted a diagnostic imaging device with only bench testing and a narrative clinical summary. However, FDA’s device-specific guidance required clinical validation studies demonstrating diagnostic accuracy in real-world conditions. The sponsor ultimately had to design and execute a full clinical study—delaying clearance by 12–18 months.
This example reflects a common industry pattern rather than a specific publicly documented FDA case.
Prevention
Review every device-specific guidance applicable to your product code before finalizing regulatory strategy.
Use a Pre-Submission (Q-Sub) to ask FDA directly whether clinical data will be required and what endpoints, sample sizes, and comparators they expect.
For study design discussions, request a Q-Sub meeting focused on clinical protocol feedback (the modern equivalent of the legacy “Pre-IDE” process).
Engage a biostatistician early to ensure proper sample size, power, and analytic methods (the sketch below shows why even a rough power calculation is worth running early).
A proactive approach can prevent costly rework and eliminate uncertainty before testing begins. Complizen automatically aggregates device-specific guidance, predicate testing requirements, and FDA-recognized standards so you can determine early whether clinical data will be required. This helps teams design evidence plans aligned with FDA expectations before investing in bench or clinical testing.
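To make the sample-size point concrete, even a rough power calculation early in planning shows whether the study you are budgeting can support the comparison you intend to make. This is a minimal sketch using the statsmodels library for a two-arm comparison of means; the effect size, significance level, and power are placeholder assumptions a biostatistician would replace with values justified for your endpoint.

```python
from statsmodels.stats.power import TTestIndPower  # pip install statsmodels

# Placeholder assumptions: detect a moderate effect (Cohen's d = 0.5)
# at 5% two-sided significance with 80% power, two equal arms.
analysis = TTestIndPower()
n_per_arm = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)

print(f"Required sample size: about {n_per_arm:.0f} subjects per arm")
# Roughly 64 per arm under these assumptions; halving the detectable effect
# size roughly quadruples the required enrollment.
```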
Cost of Missing Clinical Data
If FDA determines that clinical data is required after you submit:
⏱️ Delay: 12–24+ months (study design, IRB approval, execution, data analysis).
💸 Cost: Typically $200K to over $2M, depending on sample size, endpoints, monitoring requirements, and multi-site complexity.
These delays often dwarf the cost of preparing the correct evidence package from the start.
7. Non-Compliant Labeling
The Problem
Labeling deficiencies are one of the most common reasons the FDA issues Additional Information (AI) requests during 510(k) review. Errors range from missing required elements, to unsupported performance or therapeutic claims, to inconsistent indications for use, to violations of 21 CFR Part 801, which defines the core rules for medical device labeling in the United States.
When labeling is incomplete or inaccurate, FDA cannot assess whether the device can be used safely and effectively.
What FDA Needs
FDA expects labeling that:
Includes all applicable elements required under 21 CFR 801, such as device description, indications for use, contraindications (if applicable), warnings, precautions, instructions for safe use, and installation or maintenance steps when relevant.
Uses an Indications for Use statement that is identical and consistent with FDA Form 3881 across all submission materials.
Reflects warnings or precautions used by the predicate, unless there is data-based justification to remove them.
Supports every performance or clinical claim with appropriate testing or evidence.
Aligns with any device-specific FDA guidance, many of which specify mandatory labeling statements or instructional content.
Why Submissions Fail Here
Common issues include:
Marketing teams introducing unsupported performance claims that lack clinical or bench data.
Omitting warnings, precautions, or contraindications that appear in the predicate’s labeling without justification.
Instructions for use that do not describe the full procedure, setup, calibration, or maintenance steps required for safe operation.
Labeling statements that contradict or expand the Indications for Use—a red flag for FDA reviewers.
Industry Examples
Industry-reported scenarios that frequently lead to AI requests include:
A surgical stapler submission that failed to include required calibration and maintenance instructions specified in device-specific guidance.
An orthopedic device that claimed “accelerated healing” without clinical or biomechanical evidence—triggering an FDA request for supporting data.
These reflect common patterns rather than official public FDA cases.
Prevention
Retrieve and review the predicate’s labeling directly from the FDA database.
Build a comparison table mapping your labeling against your predicate’s warnings, precautions, indications, and instructions (a simple version of this check is sketched below).
Ensure every claim—performance, safety, usability, therapeutic—is supported by objective data.
Confirm your Indications for Use statement matches Form 3881 exactly across all documents.
Have regulatory professionals conduct a final labeling review before submission.
A structured, evidence-backed labeling package eliminates one of the most avoidable causes of review delays.
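The predicate comparison can double as an automated check if each labeling element is captured as structured data: anything the predicate labels that your draft omits gets flagged for justification. This is a minimal sketch with hypothetical warnings; omissions should only survive the check when a documented, data-based rationale exists.

```python
# Hypothetical warnings transcribed from the predicate's cleared labeling
# and from your draft labeling.
PREDICATE_WARNINGS = {
    "Do not use on broken skin",
    "Not for use in patients with implanted electronic devices",
    "Single use only",
}
DRAFT_WARNINGS = {
    "Do not use on broken skin",
    "Single use only",
}

def missing_vs_predicate(predicate: set[str], draft: set[str]) -> set[str]:
    """Warnings present in the predicate's labeling but absent from the draft."""
    return predicate - draft

if __name__ == "__main__":
    for warning in sorted(missing_vs_predicate(PREDICATE_WARNINGS, DRAFT_WARNINGS)):
        print(f"MISSING (add or justify removal): {warning}")
```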
Cost of Failure
Labeling-related AI requests often result in 30–90 days of additional work to revise text, update instructions, generate missing data, or justify removed warnings. FDA allows up to 180 days for a complete response, and delays here often have real commercial impact.
8. Software & Cybersecurity Deficiencies
The Problem
Beginning in 2023, the FDA dramatically increased scrutiny of cybersecurity in medical device submissions following the introduction of Section 524B of the FD&C Act. Under the new authority, FDA can refuse to accept or clear a device if cybersecurity documentation is incomplete.
During early enforcement, FDA reported a significant rise in cybersecurity-related deficiencies, both in quantity and severity—especially for software-enabled and connected medical devices.
What FDA Requires for “Cyber Devices”
Under Section 524B and the 2023 Cybersecurity Guidance, manufacturers of applicable cyber devices must provide documentation that includes:
Cybersecurity risk management plan informed by formal threat modeling
Software Bill of Materials (SBOM) detailing all third-party, open-source, and proprietary components (a basic completeness check is sketched after these lists)
Security architecture documentation describing controls, interfaces, encryption, authentication, and data flows
Evidence of security testing, including penetration testing or equivalent validation
Postmarket monitoring and vulnerability management plan
Patch and update strategy, including coordinated vulnerability disclosure procedures
These requirements apply if the device meets FDA’s definition of a cyber device:
contains software;
has the ability to connect to the internet (directly or indirectly); and
has technological characteristics that could be vulnerable to cybersecurity threats.
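Because incomplete SBOMs are among the most frequently flagged items (see the deficiencies below), it is worth sanity-checking the file before it goes into the submission. This is a minimal sketch that reads a CycloneDX-style JSON SBOM and flags components missing a name, version, or supplier; the field names follow the CycloneDX convention, so adjust if you export SPDX or another format.

```python
import json

def flag_incomplete_components(sbom_path: str) -> list[str]:
    """Flag SBOM components missing a name, version, or supplier.

    Assumes a CycloneDX-style JSON document with a top-level "components" list.
    """
    with open(sbom_path) as f:
        sbom = json.load(f)

    findings = []
    for comp in sbom.get("components", []):
        missing = [key for key in ("name", "version", "supplier") if not comp.get(key)]
        if missing:
            findings.append(
                f"{comp.get('name', '<unnamed component>')}: missing {', '.join(missing)}"
            )
    return findings

if __name__ == "__main__":
    for finding in flag_incomplete_components("sbom.cdx.json"):
        print(finding)
```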
Common Deficiencies FDA Is Flagging
FDA has highlighted recurring issues during reviews:
Missing or incomplete threat modeling
SBOMs missing third-party libraries, dependencies, or OSS components
Lack of a comprehensive postmarket cybersecurity plan
Insufficient security testing evidence—no penetration test results, weak validation methods
Architectural documentation not detailed enough for review
These findings are now among the top reasons FDA issues RTA holds or AI requests for software-enabled devices.
Prevention
To avoid cybersecurity-related delays:
Determine early whether your product meets the “cyber device” definition under 524B
Incorporate secure-by-design principles and formal cybersecurity risk management from the earliest development stages
Perform structured threat modeling with qualified security experts
Conduct penetration testing and vulnerability scanning before submission
Prepare and document a robust postmarket cybersecurity plan, including vulnerability disclosure and patch deployment workflows
Because cybersecurity is now tightly connected to device safety, deficiencies often require design modifications—not just paperwork fixes.
Cost of Failure
Cybersecurity-related deficiencies frequently trigger substantial rework, including redesigning architecture, updating development processes, re-testing, and expanding documentation. This typically results in:
⏱️ 60–180 days of delay
💸 $30K–$150K+ in added engineering, testing, and remediation costs (varies widely based on device complexity)
9. Ignoring Device-Specific FDA Guidance
The Problem
FDA guidance documents describe the Agency’s current thinking on how specific device types should be designed, tested, and documented in premarket submissions. They are not legally binding, but in practice they function as a clear roadmap for what reviewers expect to see in your 510(k).
When your submission deviates from guidance without a clear, written rationale, you are almost guaranteed to receive an Additional Information (AI) request asking you either to follow the guidance or justify your alternative approach.
A common misconception is that “draft” guidance can be safely ignored. In reality, both draft and final guidance documents reflect FDA’s current thinking. Reviewers frequently refer to draft guidance during review, especially when no final document exists yet. Ignoring a relevant draft guidance without explanation is a high-risk strategy.
Why Submissions Fail Here
Typical failure patterns include:
Teams only discover device-specific or testing-related guidance after they have already designed and executed their test plans.
The submission’s testing strategy or endpoints do not align with FDA’s recommended approach in guidance documents.
Sponsors assume that because guidance is “nonbinding,” they can omit entire sections (e.g., usability, human factors, or labeling requirements) without justification.
The result is predictable: FDA responds with AI requests asking the sponsor either to follow the relevant guidance or to provide a robust scientific justification for doing something different.
Prevention
To avoid guidance-related surprises:
🔎 Search FDA’s guidance database at the very start of your regulatory strategy to identify all relevant documents by product code, technology, and pathway (510(k), De Novo, PMA, etc.).
📚 Review three categories of guidance:
Device-specific guidance (for your product code / device type)
Testing-related guidance (bench, biocompatibility, human factors, cybersecurity, etc.)
Pathway-specific guidance (510(k) content, De Novo, clinical evidence, etc.)
🧠 If you plan to deviate, explain your rationale clearly in a Pre-Submission (Q-Sub) and invite FDA feedback before you execute testing.
📝 In your submission, document where you followed guidance and provide explicit justification for any deviations (e.g., alternate standards, updated methods, or risk-based rationale).
Complizen automatically surfaces and flags applicable FDA guidance documents based on your device category and product code. During submission preparation, the platform highlights device-specific, testing-related, and pathway guidance so your team can design strategies and test plans that align with the Agency’s stated expectations from day one.
Cost of Ignoring Guidance
If FDA concludes that you did not follow or reasonably justify deviation from relevant guidance, you may be forced to repeat or expand testing, redo documentation, or rework labeling. In practice, this often means:
⏱️ 90–180+ days of additional delay, and
💸 Tens to hundreds of thousands of dollars ($50K–$200K+) in extra testing, consulting, and internal effort, depending on the scope of changes.
10. Inadequate Risk Management Documentation
The Problem
The FDA expects risk management documentation that aligns with ISO 14971, demonstrating that you have systematically identified hazards, evaluated associated risks, applied appropriate controls, and verified that those controls work. Risk management is not a formality; it is central to the FDA’s assessment of safety and substantial equivalence.
Submissions commonly fail when risk files are incomplete, outdated, or poorly connected to testing and labeling.
What FDA Needs to See
Your risk management file should include:
Hazard identification and risk analysis (covering normal use, foreseeable misuse, and fault conditions)
Risk evaluation against defined acceptance criteria
Documented risk control measures and evidence that the controls have been implemented
Residual risk evaluation demonstrating acceptability after controls
Full traceability: hazards → risk controls → verification/validation tests → labeling and user instructions (one way to represent and check this is sketched below)
Traceability is essential. Without it, FDA cannot confirm that risks have been adequately mitigated.
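One simple way to keep that traceability auditable is to store each hazard as a record linking it to its controls, verification evidence, and labeling, then flag broken links automatically. This is a minimal sketch with hypothetical entries; real risk files live in dedicated tools, but the completeness check is the same idea.

```python
from dataclasses import dataclass, field

@dataclass
class HazardTrace:
    """One traceability row: hazard -> controls -> verification -> labeling."""
    hazard: str
    risk_controls: list[str] = field(default_factory=list)
    verification: list[str] = field(default_factory=list)   # test report IDs
    labeling_refs: list[str] = field(default_factory=list)  # IFU sections, warnings

# Hypothetical entries for illustration only.
MATRIX = [
    HazardTrace(
        hazard="Thermal injury from overheating",
        risk_controls=["Thermal cutoff at 43 C"],
        verification=["TR-014 thermal bench test"],
        labeling_refs=["IFU section 4.2 warning"],
    ),
    HazardTrace(
        hazard="Incorrect reading due to motion artifact",
        risk_controls=["Motion-artifact filtering"],
        verification=[],  # broken link: no verification evidence yet
    ),
]

def broken_links(matrix: list[HazardTrace]) -> list[str]:
    """Hazards lacking a risk control or lacking verification evidence."""
    return [h.hazard for h in matrix if not h.risk_controls or not h.verification]

if __name__ == "__main__":
    for hazard in broken_links(MATRIX):
        print(f"INCOMPLETE TRACE: {hazard}")
```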
Why Submissions Fail Here
Typical breakdowns include:
Treating risk management as a late-stage checkbox, not an ongoing part of design controls
Providing a risk table with no clear linkage to verification testing or labeling
Omitting structured failure mode analysis (e.g., FMEA) when appropriate for device complexity
Failing to integrate software risks, cybersecurity risks, or human factors into the overall risk model
Ignoring signals from predicate recalls, MDRs, or FDA safety communications, leaving known hazards unaddressed
These weaknesses almost always result in AI requests requiring expanded documentation or additional testing.
Prevention
Implement ISO 14971 risk management at the concept phase, not immediately before submission
Build a risk management matrix showing end-to-end traceability from hazard to control to verification to labeling
Include software, cybersecurity, usability, and clinical risks in a unified risk model
Review predicate recalls, MAUDE adverse events, and FDA safety communications to ensure all known hazards are addressed
Confirm that all risk controls are validated through bench testing, usability testing, or software verification, depending on the hazard type
This approach demonstrates a mature, proactive risk process—something FDA reviewers pay close attention to.
Cost of Failure
A weak or incomplete risk file often requires:
⏱️ 60–180+ days to revise analyses, implement missing controls, or perform additional verification testing
💸 Significant engineering and regulatory rework, especially when hazards require new testing or labeling modifications
Risk management issues often cascade into multiple AI cycles, making them one of the most expensive types of deficiencies to fix.
FAQs
What percentage of 510(k)s fail on the first review?
Historically, up to ~60% of new 510(k)s were placed on Refuse to Accept (RTA) hold when the RTA program began. More recent FDA data suggest around 30% now receive an RTA hold at least once.
Even after passing RTA, about two-thirds (~65%) of 510(k)s receive an Additional Information (AI) request during substantive review. Most submissions experience at least one deficiency cycle.
How much does a failed submission cost?
RTA Holds: Typically add 30–60+ days of rework. Costs vary, but many teams spend tens of thousands of dollars in internal time and consulting fees.
AI Requests: Can add 60–180 days depending on testing requirements. Costs often exceed $20K–$100K+, especially if new bench or usability testing is required.
NSE (Not Substantially Equivalent): The submission ends. You must file a new 510(k) or transition to a De Novo request, which can significantly extend timelines.
Should I hire a consultant?
For first-time submitters, an experienced regulatory consultant can significantly reduce the risk of RTA or AI delays. Typical cost ranges from $15K–$50K, depending on device complexity and submission scope—usually far less than the cost of fixing a failed strategy after filing.
Can you resubmit after an RTA hold?
Yes. You have up to 180 days to correct deficiencies and resubmit. Your updated submission undergoes another 15-day RTA screening. If you miss the 180-day window, the 510(k) is considered withdrawn and you must start a new submission.
Are Pre-Submission (Pre-Sub) meetings worth it?
Absolutely. Pre-Subs allow you to confirm:
Predicate suitability
Testing strategy and standards
Whether clinical data is needed
Whether De Novo might be more appropriate
Preparation effort varies (typically 20–40 hours), but the benefits—avoiding costly mistakes—are substantial.
How long do we have to respond to an AI request?
The FDA gives sponsors 180 calendar days to submit a complete response. The FDA review clock pauses during this period and restarts once your response is submitted. If no response is provided within 180 days, the 510(k) is considered withdrawn.
