Method Development and Validation - Are We Doing It Right?
April 21, 2015
Last year, a group of academic chromatographers in Spain and France published a broad survey of the way analysts are doing method validation [1]. The protocols have been established since the 1970s, with the most recent guidelines codified in the early ’90s. The authors (M.J. Ruiz-Angel et al.) note:
“Method validation… implies not only the definition and evaluation of the classical validation or performance parameters (or characteristics): accuracy/recovery, precision (repeatability, intermediate precision and reproducibility), linearity and application range, limit of detection (LOD)/limit of quantitation (LOQ), selectivity/specificity, robustness, ruggedness, uncertainty, trueness, stability and system suitability studies, but also a detailed and extensive protocol on how to operate and transfer analytical methods and the involved procedures”.
(Quoted from the paper; italics added by Jeff Kiplinger, PhD.)
It’s a good paper, well written and well worth reading. It gives us a picture of how things have changed over the years and suggests that we may need to take a little more care if we want our analytical methods to be truly useful. The authors point out that although some method performance parameters such as precision and linearity are almost always part of validation, robustness testing is done less than 20% of the time, and ruggedness and system suitability less than 10% of the time. These are the parameters that are most important if you want your method to be transferable – to different labs, analysts, or instrument systems.
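The two parameters that are almost always evaluated, linearity and precision, come down to simple statistics: a least-squares calibration line with its r² and the percent RSD of replicate injections. Here is a minimal sketch of both calculations; the concentrations and peak areas are invented example data, not measurements from the paper.

```python
# Sketch of the two most commonly evaluated validation parameters:
# linearity (calibration regression) and precision (%RSD of replicates).
# All numbers below are hypothetical illustration data.
from statistics import mean, stdev

def linearity(conc, area):
    """Least-squares slope, intercept, and r^2 for a calibration line."""
    mx, my = mean(conc), mean(area)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, area))
    sxx = sum((x - mx) ** 2 for x in conc)
    syy = sum((y - my) ** 2 for y in area)
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = sxy ** 2 / (sxx * syy)
    return slope, intercept, r2

def percent_rsd(values):
    """Relative standard deviation of replicate injections, in percent."""
    return 100 * stdev(values) / mean(values)

# Five-point calibration: concentration (ug/mL) vs. peak area (made up)
slope, intercept, r2 = linearity([1, 2, 5, 10, 20],
                                 [102, 198, 505, 1001, 1995])

# Six replicate injections at one level (made up)
rsd = percent_rsd([498, 503, 500, 505, 497, 502])
print(f"r^2 = {r2:.4f}, %RSD = {rsd:.2f}")
```

Acceptance criteria (e.g., r² ≥ 0.999, %RSD ≤ 2%) vary by method and regulatory context; the point is only that these checks are cheap to run, which may be why they are done so reliably compared with robustness and ruggedness.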
Why don’t we evaluate ruggedness, robustness, and system suitability? We’re all working in a distributed R&D environment today, where passing methods back and forth among CROs and sponsors is a daily task. What’s the cost (in dollars and lost time) if a validated method has to be re-developed (and re-validated?) every time it’s transferred? Pretty high.

The failure to test these key parameters is actually a failure at the method development stage, not at the validation stage. The lab is stopping short of determining the effect of external variables like the analyst and the instrument system (ruggedness), internal variables like temperature, buffer, and solvent (robustness), and the system’s “working order” (system suitability) as part of final method development and qualification.
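A robustness check of the internal variables doesn’t have to be elaborate: deliberately perturb each parameter around its nominal setting in a small factorial design and confirm that a critical response stays within limits. The sketch below assumes three hypothetical factors and a made-up retention-time model standing in for real injections.

```python
# Minimal robustness screen: run all corner conditions of a small
# full-factorial design and flag any that push a critical response
# outside acceptance limits. Factors and the response model are
# hypothetical stand-ins for a real method.
from itertools import product

# Nominal setting and +/- perturbation for each factor (hypothetical)
factors = {
    "temp_C":    (30.0, 2.0),   # column temperature +/- 2 C
    "pH":        (3.0, 0.1),    # mobile-phase pH +/- 0.1
    "organic_%": (40.0, 2.0),   # organic modifier +/- 2%
}

def retention_time(temp_C, pH, organic_pct):
    """Stand-in model; in practice this is a measured injection."""
    return (6.0 - 0.05 * (temp_C - 30.0)
                + 0.8 * (pH - 3.0)
                - 0.04 * (organic_pct - 40.0))

def robustness_screen(limit_min, limit_max):
    """Test all 2^3 corner conditions; return those outside limits."""
    failures = []
    for signs in product((-1, +1), repeat=len(factors)):
        setting = {name: nominal + s * delta
                   for s, (name, (nominal, delta))
                   in zip(signs, factors.items())}
        rt = retention_time(setting["temp_C"], setting["pH"],
                            setting["organic_%"])
        if not (limit_min <= rt <= limit_max):
            failures.append((setting, rt))
    return failures

fails = robustness_screen(5.5, 6.5)
print(f"{len(fails)} of 8 conditions outside limits")
```

Eight injections per analyte is a modest price for knowing a method will survive the small day-to-day variations it will meet in another lab; a ruggedness study extends the same idea to analysts and instruments.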
Validation is a final step, one that verifies and documents that the method is suitable for its purpose. Has the method development and validation procedure changed over the last 40 years? Well, not in principle. Yes, to some extent, due to instrumentation changes. But most fundamentally, the procedure has changed because we’re leaving parts out, doing the minimum work necessary to reach the near-term goal. Maybe it’s cost cutting; maybe we’re not training analysts properly. Maybe program managers and project teams change too often in modern R&D, and legacy methods and processes are too sticky to re-evaluate.

Whatever the reasons, with the continuing externalization of R&D efforts it’s a good time to re-evaluate the way we think about method validation. A working HPLC method is the foundation of good decision making during program development.

[1] M.J. Ruiz-Angel, M.C. García-Alvarez-Coque, A. Berthod, S. Carda-Broch, “Are analysts doing method validation in liquid chromatography?”, J. Chromatogr. A, 1353 (2014) 2–9.