Summary: Verification and validation are two of the most frequently confused terms in diagnostic assay development, and two of the most consequential. Using them interchangeably, or running them in the wrong sequence, creates documentation gaps that surface during regulatory review, manufacturing transfer, or field deployment. This guide explains what each means, why the distinction matters specifically for lateral flow programs, and what a technically sound approach looks like in practice.
The Core Distinction in Assay Development
In the context of lateral flow immunoassay development, verification vs. validation comes down to two different questions — each aimed at a different phase of the design process.
Verification asks: Did we build the assay right? It confirms that the assay meets the technical specifications defined during concept design, including the detection limit, precision, specificity, and other measurable performance parameters documented in the design inputs.
Validation asks: Did we build the right assay? It confirms that the assay is fit for its intended use. That means it performs reliably for the intended user, in the intended environment, with the intended sample type.
Both are required. Neither replaces the other. A lateral flow test that meets its analytical specifications in the lab (verification) can still fail in a clinical setting if it was never confirmed to work with real patient samples under realistic use conditions (validation). The reverse is also possible: an assay that performs well in a clinical pilot may have never been documented against the original design specifications, creating a gap in the design history file that matters later.
Where Verification Fits in the LFA Development Process
Verification ensures that the design meets the specification before the program advances. Once the assay design has been locked across reagent concentrations, buffer formulations, membrane and pad selections, conjugate preparation, and strip architecture, the verification stage confirms that the design performs to its stated specifications under controlled conditions.
Typical verification studies for a lateral flow assay include:
- Limit of detection (LoD) determination, often following CLSI EP17 or an equivalent framework, to confirm that the assay reliably detects the analyte at or below the target concentration
- Precision studies assessing repeatability (same operator, same run) and reproducibility across a range of operators, days, and lots
- Analytical specificity, including cross-reactivity panels that challenge the assay with structurally similar analytes or potentially interfering substances
- Linearity or dose-response characterization for assays with quantitative or semi-quantitative output
- Interference testing for endogenous sample components (hemoglobin, lipids, bilirubin) and common exogenous substances likely to be present in the intended population — an important check on assay robustness across real-world sample variation
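As a concrete illustration of the LoD study listed above, here is a minimal sketch of the classical parametric LoB/LoD calculation. This is a simplification of the CLSI EP17 procedures (it assumes roughly Gaussian replicate noise and uses a single low-concentration sample); the signal values are hypothetical, not from any real assay.

```python
import statistics

def estimate_lod(blank_values, low_conc_values, z=1.645):
    """Simplified parametric LoB/LoD estimate (one-sided 95% bound).

    LoB = mean(blank) + z * SD(blank)
    LoD = LoB + z * SD(low-concentration sample)
    """
    lob = statistics.mean(blank_values) + z * statistics.stdev(blank_values)
    lod = lob + z * statistics.stdev(low_conc_values)
    return lob, lod

# Hypothetical reader signal values (arbitrary units)
blanks = [0.8, 1.1, 0.9, 1.0, 1.2, 0.7, 1.0, 0.9]
low_samples = [2.1, 2.6, 1.9, 2.4, 2.2, 2.8, 2.0, 2.3]

lob, lod = estimate_lod(blanks, low_samples)
print(f"LoB = {lob:.2f}, LoD = {lod:.2f}")
```

In a real EP17-style study, the blank and low-level measurements span multiple reagent lots, days, and instruments, and non-parametric percentile methods are used when the blank distribution is not Gaussian.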
The output of verification is documented evidence — typically a verification report, or an executed protocol with recorded results — confirming that the assay performs as designed. This documentation is a required input for the design history file (DHF) and must be in place before the assay proceeds to manufacture.
A critical point for lateral flow programs is that verification is performed under controlled laboratory conditions. That means characterized reagents, controlled temperature and humidity, trained analysts following written protocols, and well-defined sample preparations. If verification is run informally, without a protocol, without defined acceptance criteria, or with contrived samples that do not represent the real matrix, the results will not support regulatory submissions and may not hold up during manufacturing.
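To make the precision studies above concrete, the sketch below computes a repeatability CV (replicates from one operator and run) and a pooled across-conditions CV. This is a simplification: a formal precision study per CLSI EP05 separates variance components (run, day, lot, operator) via ANOVA rather than pooling. All values are hypothetical.

```python
import statistics

def cv_percent(values):
    """Coefficient of variation (%) for a set of replicate measurements."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate signals (arbitrary units):
# same operator, same run (repeatability) vs. pooled across
# operators, days, and lots (a rough reproducibility proxy)
within_run = [10.2, 9.8, 10.1, 10.0, 9.9, 10.3]
across_conditions = [10.2, 9.1, 10.8, 9.5, 11.0, 9.0, 10.6, 9.4]

repeat_cv = cv_percent(within_run)
repro_cv = cv_percent(across_conditions)
print(f"repeatability CV  = {repeat_cv:.1f}%")
print(f"reproducibility CV = {repro_cv:.1f}%")
```

The expected pattern — reproducibility CV exceeding repeatability CV — reflects the added lot-to-lot and operator-to-operator variation that verification is meant to quantify before design transfer.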
Where Validation Fits in the LFA Development Process
Validation comes after verification. Its purpose is to confirm that the assay is fit for use by the intended users, with the intended sample type, in the intended environment. Design validation addresses user needs and use conditions that controlled lab testing cannot fully replicate — it is not enough to show the assay is analytically sound under ideal conditions.
For lateral flow assays, this distinction has practical weight. A point-of-care lateral flow device will be used by individuals who are not laboratory scientists in uncontrolled settings. The validation program has to reflect that reality, including the complexity of real-world scenarios that sit outside the boundaries of bench testing. Common validation components include:
- Clinical performance studies using real patient specimens, not contrived samples, to establish sensitivity and specificity in the intended population
- Usability and human factors evaluation, which assesses whether intended users can operate the test correctly and interpret the result without systematic error — a direct measure of whether the lateral flow test meets user requirements
- Environmental robustness testing, including exposure to temperature extremes, humidity variation, and accelerated aging, to confirm the assay performs across the range of conditions it will encounter in the field
- Matrix equivalence, particularly when the assay has been developed using one sample type and the intended population introduces variability in sample quality, collection method, or constituent profile
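The clinical performance component above ultimately reduces to diagnostic accuracy against a reference standard. The sketch below shows the standard 2x2 calculation; the study counts are hypothetical, and a real study report would also include confidence intervals and predefined acceptance criteria.

```python
def clinical_performance(tp, fp, fn, tn):
    """Diagnostic accuracy from a 2x2 table of index-test results
    versus the clinical reference standard."""
    sensitivity = tp / (tp + fn)  # agreement among reference-positives
    specificity = tn / (tn + fp)  # agreement among reference-negatives
    return sensitivity, specificity

# Hypothetical study: 200 reference-positive, 300 reference-negative specimens
sens, spec = clinical_performance(tp=188, fp=9, fn=12, tn=291)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```

Because these estimates drive labeling claims, validation protocols typically pre-specify the lower bound of the confidence interval that must be met, not just the point estimate.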
Validation scope is shaped by the intended regulatory pathway:
- A test seeking a CLIA waiver must demonstrate that lay users can perform it correctly and that use errors do not produce clinically significant misclassification.
- A test subject to FDA 510(k) clearance requires documented analytical and clinical performance data demonstrating substantial equivalence to a predicate device.
- EU IVDR performance evaluation requires evidence across three categories: scientific validity, analytical performance, and clinical performance.
The regulatory pathway should be identified during concept design, not at the validation stage. The choice between a CLIA-waived point-of-care test and a laboratory-use-only test, or between a 510(k) and a De Novo submission, affects study design, required documentation, and the evidence standard. By the time a team reaches validation, those decisions should already be embedded in the development plan.
Why the Sequence Matters
Running validation before verification is a common mistake in early-stage programs. It usually happens because a team has clinical samples available and wants to generate performance data quickly. The problem is that validating an assay that has not been verified means the specifications being tested against are not yet established.
If the clinical results look acceptable, there is no documented baseline to anchor them to. If they do not, the team cannot distinguish between an assay design problem and a use-environment problem — and investigating that question without a verified baseline is costly.
The correct sequence is: design input definition → feasibility studies → optimization → design freeze → verification → validation → design transfer. Each phase produces documented evidence that the next phase depends on.
Conflating verification and validation is another common error. Some programs treat them as a single activity, running analytical performance studies on clinical samples and labeling the whole package as “validation.” The result is an evidence package that satisfies neither standard fully. Verification requires controlled conditions and defined acceptance criteria. Validation requires realistic use conditions and a representative study population. They can sometimes be coordinated efficiently, but they cannot be merged without sacrificing the rigor each requires.
Documentation Requirements
Under ISO 13485, both verification and validation require written protocols approved before the studies begin, documented results, and a formal review and sign-off comparing results to pre-specified acceptance criteria. The protocol defines what success looks like before any data is collected. The results document what actually happened. The review confirms whether the evidence is sufficient to proceed — it is the formal confirmation that the design meets its stated aims before the program advances.
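The review step described above — comparing results to pre-specified acceptance criteria — can be sketched as a simple table check. The parameter names, limits, and results below are illustrative only, not criteria from any actual protocol.

```python
# Hypothetical pre-specified acceptance criteria from a verification
# protocol, alongside the study results recorded against them.
criteria = {
    "LoD (ng/mL)":         {"limit": 2.0,  "result": 1.7},
    "Repeatability CV (%)": {"limit": 10.0, "result": 8.2},
    "Cross-reactivity (%)": {"limit": 1.0,  "result": 0.4},
}

def review(criteria):
    """Return overall verdict and per-parameter PASS/FAIL against limits
    (all criteria here are upper limits: result must not exceed limit)."""
    verdicts = {
        name: ("PASS" if c["result"] <= c["limit"] else "FAIL")
        for name, c in criteria.items()
    }
    return all(v == "PASS" for v in verdicts.values()), verdicts

overall, verdicts = review(criteria)
print("Overall:", "PASS" if overall else "FAIL")
```

The point of the structure is that `criteria` is fixed and approved before any `result` exists; retrofitting limits to match observed data is exactly what the protocol-first requirement prevents.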
This documentation structure is not a compliance formality. It is what makes the evidence package auditable and transferable, and it is the backbone of quality control across the development program. A development program that generates strong assay performance data without this structure will face significant remediation work before it can support a regulatory submission or a manufacturing transfer to a partner facility.
FDA’s Quality Management System Regulation (QMSR), which aligns U.S. device quality requirements with ISO 13485:2016 and is effective February 2026, makes this explicit. Verification and validation records are inputs to the design history file, which is a required element of the device master record. A gap in verification or validation within the DHF creates problems during design transfer and in downstream regulatory interactions.
The practical implication for early-stage programs: if a CDMO partner cannot show you what a verification protocol looks like and how verification results feed into the DHF, that is a signal worth taking seriously.
What This Means When Selecting a Development Partner
When evaluating a CDMO for lateral flow assay development, the verification and validation stage is where process discipline is most visible. The questions below surface the difference between a partner that generates data and one that generates a defensible evidence package.
Ask specific questions:
- Do they write verification protocols before the study begins, with pre-defined acceptance criteria, or do they generate data and review it retrospectively?
- Can they show examples of verification reports that have supported regulatory submissions or manufacturing transfer packages?
- How do they handle a verification result that fails to meet the acceptance criteria? What is the investigation and re-verification process?
- For validation: have they run usability studies before? Have they worked with the regulatory pathway your product requires?
- Are verification and validation performed under the same quality management system as development and manufacturing, or does the evidence package need to be transferred across organizations?
When development and manufacturing are separated, the verification and validation data package must transfer cleanly to the manufacturing site. Documentation gaps, undocumented process variables, or evidence generated outside a compatible quality system can make the transfer expensive or require that studies be repeated.
A partner that operates development, verification, validation, and manufacturing under a single ISO 13485-aligned system removes that risk at the source.
FAQ
Can verification and validation be run concurrently?
Not for the core sequence. Verification requires a locked design. If the assay is still being adjusted, data collected before that lock cannot be used to verify the final design. Validation follows because it assumes the design being tested is final. That said, some studies can be structured to contribute evidence to both phases when the protocol is designed with acceptance criteria for both stages in mind. Precision and interference work are common examples. The conceptual sequence is fixed; the scheduling can be managed efficiently by a team that plans both stages together from the start.
What is the difference between analytical validation and clinical validation?
Analytical validation characterizes the test’s performance against external reference standards or comparator methods, establishing sensitivity, specificity, precision, and interference profiles under conditions representative of the intended use. Clinical validation confirms performance in the intended patient population using real specimens, with diagnostic accuracy assessed against a clinical reference standard. Both are distinct from verification, which evaluates performance against the internal design specifications established during development rather than against an external comparator.
Does a CLIA waiver require clinical validation?
CLIA waiver requires evidence that the test is simple to use and poses no unreasonable risk of erroneous results in the hands of a lay user. That means usability evaluation is a central component, and analytical performance must be demonstrated under conditions representative of intended use. The specific evidence package depends on the FDA regulatory pathway through which the test is submitted.
How long do verification and validation stages take in a typical LFA program?
Together, these two stages typically add four to ten months to a lateral flow development timeline, depending on the regulatory pathway, the number of study sites, and whether clinical specimens are readily available. Programs with well-defined performance targets and access to banked clinical samples tend to move faster. Programs where the intended use or regulatory pathway is still being defined at the verification stage will take longer — a limitation worth accounting for early in planning.
What happens if verification fails?
A failing verification result triggers a documented investigation. The team identifies the root cause — whether in the assay design, the reagent lot, the protocol execution, or the acceptance criterion itself — and determines the appropriate corrective action. If the assay design is modified to address the failure, affected verification studies are repeated. The investigation and resolution are documented in the design history file. Skipping the investigation or proceeding without documented re-verification is a quality system non-conformance.
About Palladium Diagnostics
Palladium Diagnostics is a CDMO specializing in lateral flow assay development and contract manufacturing for the in vitro diagnostics industry. Verification and validation are performed under an ISO 13485-aligned quality management system, with documentation designed to support design history files, regulatory submissions, and manufacturing transfer. If you are evaluating a development program or reviewing an existing project, the right starting point is a 30-minute feasibility conversation.
For background on where verification and validation fit within the full development process, see Lateral Flow Assay Development: Process, Stages, and What to Expect.



