Worst‑Case Device: Definition, Use in Testing, and Impact on Re‑Validation 

What is a Worst‑Case Device? 

 

A worst‑case device (also referred to as a representative sample, representative configuration, or worst‑case configuration) is the version of a device expected to challenge the design most severely during verification and validation. It represents the configuration most likely to fail if the design were inadequate. 

Regulators require worst‑case justification to ensure that testing covers the full device family without testing every variant. 

 

Worst‑Case Device Selection During Testing 

 

Testing all models in a device family is usually impractical. Laboratories therefore select a worst‑case configuration based on design parameters that influence safety and performance. Selection may be based on the manufacturer’s device family or on a third‑party worst‑case model, depending on the test type. 

Factors commonly used to determine worst‑case: 

  • Lumens, cannulations, and internal pathways 

  • Hinges, joints, and mated surfaces 

  • Small or fragile internal components 

  • Surface roughness, coatings, and material treatments 

  • Packaging configuration and thermal mass 

 

Typical worst‑case selections by test type: 

  • Largest or smallest size for mechanical stress and fatigue 

  • Highest energy output for electrical safety and EMC 

  • Longest patient contact duration for biocompatibility 

  • Highest drug concentration for combination products 

  • Most complex software configuration for software validation 

  • Most challenging sterilization load for sterility assurance 

  • Most demanding packaging configuration for transport simulation 

A clear rationale must link each selected configuration to the risk it represents. 
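
As an illustration only, the selection logic above can be sketched as a simple scoring matrix. The variant names, parameters, and weights below are hypothetical examples, not prescribed values; in practice the scoring rationale must come from the manufacturer's risk analysis:

```python
# Illustrative worst-case selection matrix (hypothetical variants and weights).
# Each variant is scored on risk-relevant design parameters; the highest-scoring
# variant becomes the worst-case candidate for that particular test.

VARIANTS = {
    "Model-A": {"lumen_length_mm": 250, "lumen_diameter_mm": 1.2, "hinges": 0},
    "Model-B": {"lumen_length_mm": 400, "lumen_diameter_mm": 0.9, "hinges": 2},
    "Model-C": {"lumen_length_mm": 180, "lumen_diameter_mm": 2.0, "hinges": 1},
}

def cleaning_challenge_score(params: dict) -> float:
    """Assumed rationale: longer, narrower lumens and more hinged
    joints make a device harder to clean and inspect."""
    return (params["lumen_length_mm"] / params["lumen_diameter_mm"]
            + 50 * params["hinges"])

def select_worst_case(variants: dict, score) -> str:
    """Return the variant with the highest challenge score."""
    return max(variants, key=lambda name: score(variants[name]))

worst = select_worst_case(VARIANTS, cleaning_challenge_score)
print(worst)  # Model-B: longest, narrowest lumen plus two hinges
```

A separate score function would be defined per test type (e.g. fatigue, EMC, transport), since the worst‑case variant for cleaning validation is not necessarily the worst case for mechanical testing.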

 

Evaluation Based on Worst‑Case Testing 

 

After testing, manufacturers typically prepare an evaluation report or summary of testing. This document explains how the worst‑case results apply to the entire product family. 

The evaluation should: 

  • Summarise test results 

  • Demonstrate equivalence across all variants 

  • Justify why the selected configuration represents the worst‑case 

  • Link worst‑case logic to risk management and design inputs 

 

This approach mirrors the sampling concept used by Notified Bodies when reviewing technical documentation in the EU: a representative sample stands in for the full family. 

 

A dedicated chapter should describe the worst‑case analysis and identify design parameters that will remain relevant throughout the device’s lifecycle, including future change assessments. 

 

Impact of Design Changes 

 

Manufacturers must update testing and evaluation when changes occur in: 

  • Product design 

  • Manufacturing processes 

  • Raw materials (including packaging) 

  • Usage scenarios 

  • Sterilization or packaging methods 

  • Transport or storage conditions 

  • Applicable standards or regulatory requirements 

 

When Is Re‑Validation Required?  

 

Factors that typically trigger re‑validation: 

  • Changes affecting critical physical or chemical parameters 

  • Changes to critical components 

  • Changes in raw materials 

  • Modifications to intended use or use scenarios 

  • Changes in sterilization method or packaging system 

  • Significant process changes in manufacturing 

 

Re‑validation is required when the existing evidence no longer adequately supports safety and performance. Depending on the scope of the change, it may take one of the following forms: 

 

Delta testing 

For minor or moderate changes, delta testing can demonstrate that the modified device remains equivalent to the previously validated configuration. This approach is common in markets like the US, where demonstrating equivalence through targeted testing is often acceptable. 

Full re‑validation 

Full re‑validation is needed when: 

  • Multiple significant changes accumulate 

  • The worst‑case configuration shifts 

  • Intended use changes 

  • Sterilization or biocompatibility profiles change 

  • Standards or regulatory requirements are revised 
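
The delta-versus-full decision above can be sketched as a simple rule set. The trigger names and the accumulation threshold are hypothetical examples for illustration, not regulatory rules; the actual decision always rests on a documented change assessment:

```python
# Illustrative re-validation decision sketch (hypothetical triggers and threshold).

# Change flags that individually force full re-validation (assumed set,
# based on the triggers listed above).
FULL_REVALIDATION_TRIGGERS = {
    "worst_case_shifted",
    "intended_use_changed",
    "sterilization_changed",
    "biocompatibility_changed",
    "standards_revised",
}

def revalidation_scope(changes: set, minor_change_limit: int = 2) -> str:
    """Classify a set of change flags as 'none', 'delta', or 'full'."""
    if not changes:
        return "none"
    if changes & FULL_REVALIDATION_TRIGGERS:
        return "full"
    # Several accumulated minor changes can also tip into full re-validation.
    if len(changes) > minor_change_limit:
        return "full"
    return "delta"

print(revalidation_scope({"raw_material_changed"}))  # delta
print(revalidation_scope({"worst_case_shifted"}))    # full
```

Note how a shift in the worst‑case configuration is itself a full‑re‑validation trigger: if a design change makes a different variant the most challenging one, prior worst‑case evidence no longer covers the family.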

 

Passive triggers 

  • Country‑specific regulatory requirements 

  • Revisions of harmonised standards or guidance documents 

 

Best practice 

Critical testing should be re‑validated every 5–10 years, even without major changes, to maintain alignment with evolving standards and regulatory expectations. 
