
Common Pitfalls in SEND Submissions and How to Avoid Them

While the U.S. FDA continues to mandate SEND for nonclinical study data, other regulatory agencies are also taking notice. The European Medicines Agency (EMA) launched a proof-of-concept pilot in January 2024 to evaluate SEND as a standardized format for receiving individual nonclinical data. The goal? To enhance data review efficiency and improve regulatory decision-making. If your team needs guidance on implementing SEND standards efficiently, check out our CRO SEND services to streamline your submissions.

As SEND adoption expands globally, the expectations for quality, consistency, and traceability are only getting higher. Yet even experienced teams still encounter challenges when converting study data into SEND format: issues that can delay submissions, trigger questions from reviewers, or even lead to rejection.

The good news? Most SEND issues are predictable and preventable.

In this post, we’ll explore the most common pitfalls in SEND submissions and practical ways to avoid them.

1. Treating SEND as a Formatting Task

One of the biggest misconceptions is that SEND is “just a data conversion.”

In reality, SEND is about data structure, traceability, and scientific meaning — not simply generating XPT files.

When SEND is left until the end of the study, teams often face:

· Mismatched trial design domains (TE, TA, TV, TS)
· Incomplete or inconsistent subject identifiers
· Variables that don’t map correctly to the protocol
· Reviewer’s Guides that lack context or explanation

Avoid it:

Plan for SEND from the start. Align sponsors, CROs, and SEND specialists early on study design, data sources, and terminology. For additional guidance, see our SEND Planning Sponsor-CRO Guide.
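As one illustration, a small programmatic cross-check can catch mismatched trial design domains long before submission. The sketch below is a minimal Python example, assuming the domains are available as SAS transport (XPT) files; the file names and paths are hypothetical:

```python
# Minimal sketch: cross-check trial design domains for internal consistency.
# Assumes XPT files named ta.xpt / te.xpt / tv.xpt in the working directory
# (hypothetical paths); pandas reads SAS transport files natively.
import pandas as pd

ta = pd.read_sas("ta.xpt", format="xport", encoding="utf-8")
te = pd.read_sas("te.xpt", format="xport", encoding="utf-8")
tv = pd.read_sas("tv.xpt", format="xport", encoding="utf-8")

# Every element code referenced by the trial arms (TA) should be
# defined in the trial elements table (TE).
missing_elements = set(ta["ETCD"]) - set(te["ETCD"])
if missing_elements:
    print(f"TA references undefined elements: {sorted(missing_elements)}")

# Trial visits (TV) should reference arms that actually exist in TA.
# (TV.ARMCD is permissible and may be blank for visits common to all arms.)
if "ARMCD" in tv.columns:
    unknown_arms = set(tv["ARMCD"].dropna()) - set(ta["ARMCD"]) - {""}
    if unknown_arms:
        print(f"TV references unknown arms: {sorted(unknown_arms)}")
```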

2. Missing or Inconsistent Controlled Terminology

CDISC Controlled Terminology (CT) ensures uniform interpretation across datasets. But inconsistencies are common:

· Mixing CT versions across domains
· Using outdated or internal terms
· Applying inconsistent CT values between contributors

These issues can lead to validation errors or confusion during regulatory review.

Avoid it:

Before data collection begins, agree on which SENDIG and CT version will be used. Document these in a SEND data specification and ensure every contributor follows the same reference.
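One way to enforce this is a small check against the agreed CT release. The sketch below assumes the relevant codelist has been exported to a CSV file; ct_sex.csv and its submission_value column are hypothetical names you would adapt to your own CT distribution:

```python
# Minimal sketch: check a domain's coded values against the agreed CDISC
# CT version. Assumes the SEX codelist from the agreed CT release has
# been exported to CSV (hypothetical file and column names).
import pandas as pd

dm = pd.read_sas("dm.xpt", format="xport", encoding="utf-8")

# Submission values for the SEX codelist, taken from the agreed CT release.
sex_ct = set(pd.read_csv("ct_sex.csv")["submission_value"])

bad = dm.loc[~dm["SEX"].isin(sex_ct), ["USUBJID", "SEX"]]
if not bad.empty:
    print("Non-CT SEX values found:")
    print(bad.to_string(index=False))
```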

3. Incomplete or Misaligned Metadata

The define.xml is the backbone of the SEND package — it tells reviewers what’s in your data and how to interpret it.

Common issues include:

· Variables missing or mislabelled in the define file
· Value-level metadata not clearly defined
· Links between datasets and define.xml not matching

Avoid it:

Cross-check the define.xml with both your datasets and nSDRG. Ensure all variables are defined, traceable, and consistent across domains.
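Part of this cross-check can be scripted. The sketch below assumes a Define-XML 2.x file, which is ODM-based (the namespace shown is the standard ODM v1.3 namespace); it resolves each dataset's declared variables and flags any dataset columns the define file never mentions. File names are hypothetical:

```python
# Minimal sketch: verify that every variable in a dataset is declared
# in define.xml. Define-XML 2.x is ODM-based; file names are hypothetical.
import xml.etree.ElementTree as ET
import pandas as pd

ODM = "{http://www.cdisc.org/ns/odm/v1.3}"
root = ET.parse("define.xml").getroot()

# Map ItemDef OIDs to variable names, then resolve each dataset's ItemRefs.
item_names = {i.get("OID"): i.get("Name") for i in root.iter(f"{ODM}ItemDef")}
defined = {}
for group in root.iter(f"{ODM}ItemGroupDef"):
    refs = group.iter(f"{ODM}ItemRef")
    defined[group.get("Name")] = {item_names.get(r.get("ItemOID")) for r in refs}

lb = pd.read_sas("lb.xpt", format="xport", encoding="utf-8")
undeclared = set(lb.columns) - defined.get("LB", set())
if undeclared:
    print(f"Variables in lb.xpt missing from define.xml: {sorted(undeclared)}")
```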

4. Gaps between SEND and the Study Report

Even if your SEND datasets pass validation, inconsistencies with the final nonclinical study report can cause regulatory pushback.

Frequent problems include:

· Subject counts not matching the report
· Missing unscheduled or unplanned events
· Results summarized differently in SEND and the report

Avoid it:

Conduct a SEND-to-Report consistency review before submission. SEND should faithfully represent the study results, not just pass a validation check.
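Even a simple subject-count comparison catches many of these gaps. In this hedged sketch, the report-side counts are transcribed by hand from the final report tables; the arm codes and counts shown are hypothetical:

```python
# Minimal sketch: compare per-group subject counts in the DM domain
# against the counts published in the study report (hypothetical values).
import pandas as pd

dm = pd.read_sas("dm.xpt", format="xport", encoding="utf-8")
send_counts = dm.groupby("ARMCD")["USUBJID"].nunique()

# Counts transcribed from the final study report tables.
report_counts = {"1": 10, "2": 10, "3": 10, "4": 10}

for arm, n_report in report_counts.items():
    n_send = int(send_counts.get(arm, 0))
    if n_send != n_report:
        print(f"Arm {arm}: SEND has {n_send} subjects, report says {n_report}")
```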

5. Lack of Clarity in the Reviewer’s Guide (nSDRG)

The Nonclinical Study Data Reviewer’s Guide (nSDRG) is your narrative bridge between the study and the data. Yet, too many submissions include generic or incomplete guides.

Avoid it:

Use the PHUSE nSDRG template as a foundation, but make it study-specific. Clearly document the data derivations, study nuances, and any deviations from standard SEND conventions.

For additional insights on SEND strategy and best practices, you may also explore this blog on mastering CDISC SEND submissions.

6. Version Mismatch across Contributors

In multi-partner setups, version control can easily break down. Mismatched SENDIG, CT, or Define-XML versions across contributors can lead to structural and validation errors.

Avoid it:

Before data work begins, confirm alignment on the following (a quick programmatic check is sketched after the list):

· SENDIG version (e.g., 3.1 or 3.1.1)
· Controlled Terminology version
· Define-XML version
· Variable naming conventions (USUBJID, STUDYID, etc.)
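One lightweight guard is to read each contributor's define.xml and compare the declared standard against the agreed versions. The sketch below assumes Define-XML 2.0, where the MetaDataVersion element carries def:StandardName and def:StandardVersion attributes (Define-XML 2.1 moves this into a def:Standards section); the paths and agreed values are hypothetical:

```python
# Minimal sketch: confirm each contributor's define.xml declares the
# agreed standard and version (Define-XML 2.0 attribute layout assumed).
import xml.etree.ElementTree as ET

ODM = "{http://www.cdisc.org/ns/odm/v1.3}"
DEF = "{http://www.cdisc.org/ns/def/v2.0}"

AGREED = ("SEND-IG", "3.1.1")  # the versions the team signed off on

for path in ["cro_a/define.xml", "cro_b/define.xml"]:  # hypothetical paths
    for m in ET.parse(path).getroot().iter(f"{ODM}MetaDataVersion"):
        declared = (m.get(f"{DEF}StandardName"), m.get(f"{DEF}StandardVersion"))
        if declared != AGREED:
            print(f"{path}: declares {declared}, expected {AGREED}")
```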


7. Overlooking Final Validation

Validation is not just about getting a “PASS.” It’s about ensuring the data are technically correct and scientifically accurate.

Common pitfalls include over-reliance on a single validation tool or ignoring warnings that could impact interpretability.

Avoid it:

Use multiple validation tools, including FDA Validator rule sets, and always combine technical validation with expert review.
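Because no single tool catches everything, it can help to merge the tools' findings into one review list so warnings get read, not just errors. The sketch below assumes each validator can export its report to CSV with severity and message columns; the file and column names are hypothetical and would need adapting to your tools' actual export formats:

```python
# Minimal sketch: merge findings from multiple validators so warnings are
# reviewed alongside errors. Assumes CSV exports with a 'severity' column
# (hypothetical file and column names).
import pandas as pd

reports = {
    "tool_a": pd.read_csv("tool_a_report.csv"),
    "tool_b": pd.read_csv("tool_b_report.csv"),
}

combined = pd.concat(reports, names=["tool"]).reset_index(level="tool")

# Surface everything at warning level or above for expert review:
# a clean "PASS" from one tool does not make these ignorable.
flagged = combined[combined["severity"].str.upper().isin(["ERROR", "WARNING"])]
print(flagged.groupby(["tool", "severity"]).size())
```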

The Bottom Line

SEND success depends less on tools and more on process. The most common pitfalls aren’t coding errors; they’re coordination issues.

When teams plan early, align on standards, and perform meaningful quality checks, SEND becomes more than a regulatory requirement; it becomes a trusted part of the scientific story.

Have a SEND challenge or quality question? We’d be happy to help you solve it.

FAQs on Common SEND Submission Pitfalls


1. What is SEND and why is it more than just formatting?

SEND (Standard for Exchange of Nonclinical Data) structures nonclinical study data for regulatory submissions. It’s not just about generating XPT files; it ensures traceability, scientific meaning, and consistency across datasets. Learn more about planning for SEND in this SEND Planning Sponsor-CRO Guide.

2. How can inconsistent controlled terminology affect SEND submissions?

Using outdated, internal, or mismatched CDISC Controlled Terminology can trigger validation errors and reviewer confusion. Always agree on the SENDIG and CT version before starting your data collection.

3. What are the risks of incomplete metadata in define.xml?

Define.xml links datasets and explains their structure. Missing or mislabelled variables, unclear value-level metadata, or broken links can lead to regulatory pushback. Cross-check all datasets with define.xml and nSDRG for consistency.

4. Why should SEND datasets align with the study report?

Even if datasets pass validation, inconsistencies with the final study report can cause delays or questions from reviewers. Conduct a SEND-to-Report consistency review to ensure accurate representation of results.

5. How important is a clear Reviewer’s Guide (nSDRG)?

A generic or incomplete nSDRG can confuse reviewers. Use the PHUSE nSDRG template and make it study-specific, documenting derivations, nuances, and deviations from standard SEND practices.

6. What problems arise from version mismatches across contributors?

Mismatched SENDIG, CT, or define.xml versions can create structural errors. Align all partners on versions, naming conventions, and standards before starting data work.

7. Why is final validation critical in SEND submissions?

Over-reliance on one validation tool or ignoring warnings can compromise accuracy. Use multiple tools, including FDA Validator rule sets, and combine technical validation with expert review.

8. How can early planning prevent SEND issues?

Starting SEND planning at the study design stage ensures alignment between sponsor, CRO, and SEND specialists, reducing the risk of mismatches, inconsistencies, and regulatory delays.
