two-level fractional-factorial designs, including Plackett-Burman designs (Box et al. 2005; Montgomery 2013).
A test method is said to be rugged if none of the variables studied has a significant effect. When significant effects are found, a common fix is to rewrite the SOP to restrict each such variable to a range over which it does not greatly affect the performance of the test method.
Process Variation Studies. When process variation is perceived to be too high, it is not uncommon to assume that the measurement is the root cause. Sometimes this is the case, but often it is not. In such situations there are typically three sources of variation that may contribute to the problem: the manufacturing process, the sampling process, and the test method (Snee 1983).
In two instances that I am aware of, the sampling method was the issue. In one case the variation was too high because the sampling procedure was not being followed; when the correct procedure was used, the sampling variance dropped by 30%. In another case each batch was sampled three times, yet when the process variation study was run, sampling contributed only 6% of the total variance. The standard operating procedures were changed immediately to reduce sampling to two samples per batch, thereby cutting sampling and testing costs by one-third. A study was also initiated to see if one sample per batch would suffice.
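The kind of variance breakdown described above can be sketched with a nested variance-components analysis: batches, samples within batches, and repeat tests within samples. The following is a minimal method-of-moments illustration, not the author's actual analysis; all data values, and the choice of four batches with three samples and duplicate tests, are invented for the example.

```python
# Hypothetical nested study: batches -> samples within batch ->
# repeat tests within sample. Method-of-moments variance components
# separate process, sampling, and test-method variation.
# All numbers are invented for illustration.
from statistics import mean

# data[batch][sample] = repeat test results
data = [
    [[49.8, 50.0], [50.3, 50.1], [49.7, 49.9]],
    [[51.1, 51.3], [50.9, 51.0], [51.4, 51.2]],
    [[48.9, 49.1], [49.3, 49.2], [48.8, 49.0]],
    [[50.4, 50.6], [50.8, 50.7], [50.2, 50.3]],
]

k = len(data)          # number of batches
s = len(data[0])       # samples per batch
r = len(data[0][0])    # repeat tests per sample
grand = mean(x for b in data for smp in b for x in smp)

batch_means = [mean(x for smp in b for x in smp) for b in data]
ms_batch = s * r * sum((bm - grand) ** 2 for bm in batch_means) / (k - 1)
ms_sample = r * sum(
    (mean(smp) - bm) ** 2 for b, bm in zip(data, batch_means) for smp in b
) / (k * (s - 1))
ms_test = sum(
    (x - mean(smp)) ** 2 for b in data for smp in b for x in smp
) / (k * s * (r - 1))

# Expected mean squares give the component estimates; negative
# estimates are truncated to zero, a common convention.
var_test = ms_test
var_sampling = max(0.0, (ms_sample - ms_test) / r)
var_process = max(0.0, (ms_batch - ms_sample) / (s * r))
total = var_test + var_sampling + var_process

for name, v in [("process", var_process), ("sampling", var_sampling),
                ("test method", var_test)]:
    print(f"{name}: {100 * v / total:.1f}% of total variance")
```

With data like these, the batch-to-batch (process) component dominates, which is the pattern behind the 6%-sampling anecdote above: once the components are separated, it becomes clear where reducing sampling effort is safe.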
Method Continued Verification. The FDA Process Validation Guidance calls for continued process verification, which includes the test methods. An effective way to assess the long-term stability of a test method is to periodically submit "blind control" samples (also referred to as reference samples) from a common source for analysis along with routine production samples, in such a way that the analyst cannot distinguish the control samples from the production samples. Nunnally and McConnell (2007) conclude that "…there is no better way to understand the true variability of the analytical method."
The control samples are typically tested two to three times (depending on the test method) at a given point in time. The sample averages are plotted on a control chart to evaluate the stability (reproducibility) of the method. The standard deviations of the repeat tests are plotted on a second control chart to assess the stability of the method's repeatability.
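As a rough sketch of the two charts just described, Shewhart Xbar and s limits can be computed from the repeat tests on the blind controls. The measurements below are invented; the constants A3, B3, and B4 are the standard Shewhart values for subgroups of size 3.

```python
# Sketch of Xbar / s control limits for blind control samples,
# assuming triplicate tests per time point (n = 3 here).
# The measurements are made up for illustration.
from statistics import mean, stdev

# Each row: repeat tests of the blind control sample at one time point.
runs = [
    [98.2, 98.5, 98.3],
    [98.6, 98.4, 98.7],
    [98.1, 98.3, 98.2],
    [98.5, 98.6, 98.4],
    [98.3, 98.2, 98.5],
]

A3, B3, B4 = 1.954, 0.0, 2.568   # Shewhart constants for subgroup size 3

xbars = [mean(r) for r in runs]   # plotted on the Xbar chart
sds = [stdev(r) for r in runs]    # plotted on the s chart
xbarbar, sbar = mean(xbars), mean(sds)

# Xbar chart monitors reproducibility (level stability over time);
# s chart monitors repeatability (within-run spread over time).
xbar_ucl, xbar_lcl = xbarbar + A3 * sbar, xbarbar - A3 * sbar
s_ucl, s_lcl = B4 * sbar, B3 * sbar

print(f"Xbar chart: LCL={xbar_lcl:.3f}, CL={xbarbar:.3f}, UCL={xbar_ucl:.3f}")
print(f"s chart:    LCL={s_lcl:.3f}, CL={sbar:.3f}, UCL={s_ucl:.3f}")
```

A point outside the Xbar limits would flag a shift in the method's level; a point above the s limit would flag degraded repeatability.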
Another useful analysis is an analysis of variance of the control-sample data to compute the percent long-term variation, which measures the stability of the test method over time. Long-term variance components below 30% of the total are generally considered good; larger values suggest the method may have reproducibility issues (Snee and Hoerl 2012).
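The percent long-term variation can be illustrated with a one-way random-effects analysis of variance, treating time points as groups and repeat tests as replicates. This is a hypothetical sketch with invented numbers, not the authors' computation.

```python
# Hedged sketch: % long-term variation from blind-control data via a
# one-way random-effects ANOVA with time points as groups.
# Values below are invented for illustration.
from statistics import mean

by_time = [
    [98.2, 98.5],   # repeat tests at time point 1
    [98.6, 98.4],
    [98.1, 98.3],
    [98.9, 98.8],
    [98.3, 98.2],
]

n = 2                       # repeat tests per time point
k = len(by_time)            # number of time points
grand = mean(x for grp in by_time for x in grp)

ms_time = n * sum((mean(g) - grand) ** 2 for g in by_time) / (k - 1)
ms_rep = sum((x - mean(g)) ** 2 for g in by_time for x in g) / (k * (n - 1))

var_rep = ms_rep                                  # repeatability component
var_long = max(0.0, (ms_time - ms_rep) / n)       # long-term (time) component
pct_long = 100 * var_long / (var_long + var_rep)

# Rule of thumb from the text: pct_long < 30 suggests a stable method.
print(f"% long-term variation: {pct_long:.1f}")
```

For these invented data the time-to-time drift dominates the repeat-test noise, so the percent long-term variation exceeds the 30% guideline and would prompt a look at the method's reproducibility.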
It is concluded that using QbD concepts, methods, and tools improves test method performance and reduces the risk of poor manufacturing process performance and of defective pharmaceuticals reaching patients. Risk is reduced as accuracy, repeatability, and reproducibility increase. Reduced variation is a critical characteristic of good data quality, because reduced variation results in reduced risk.
Screening experiments followed by optimization studies are an effective way to design effective test methods. Measurement processes can be controlled using control samples, control charts, and analysis of variance techniques. Measurement quality can be improved using Gage Repeatability and Reproducibility studies. Robust measurement systems can be created using statistical design of experiments. Process variation studies that separate sampling and process variation from test method variation are an effective way to determine the root cause of process variation problems. ■
REFERENCES:
Borman, P., et al. (2007), "Application of Quality by Design to Analytical Methods", Pharmaceutical Technology, October 2007, 142-152.
Box, G. E. P., J. S. Hunter and W. G. Hunter (2005), Statistics for Experimenters, 2nd Edition, John Wiley and Sons, New York, NY, 345-353.
Montgomery, D. C. (2013), Design and Analysis of Experiments, 8th Edition, John Wiley and Sons, New York, NY, Chapter 13.
Nunnally, B. K. and J. S. McConnell (2007), Six Sigma in the Pharmaceutical Industry: Understanding, Reducing, and Controlling Variation in Pharmaceuticals and Biologics, CRC Press, Boca Raton, FL.
Schweitzer, M., et al. (2010), "Implications and Opportunities of Applying QbD Principles to Analytical Measurements", Pharmaceutical Technology, February 2010, 52-59.
Snee, R. D. (1983), "Graphical Analysis of Process Variation Studies", Journal of Quality Technology, 15, 76-88.
Snee, R. D. (2005), "Are We Making Decisions in a Fog? The Measurement Process Must Be Continually Measured, Monitored and Improved", Quality Progress, December 2005.
Snee, R. D. (2010), "Crucial Considerations in Monitoring Process Performance and Product Quality", Pharmaceutical Technology, October 2010, 38-40.
Snee, R. D. and R. W. Hoerl (2012), "Going on Feel: Monitor and Improve Process Stability to Make Customers Happy", Quality Progress, May 2012, 39-41.
Youden, W. J. (1961), "Systematic Errors in Physical Constants", Physics Today, 14, No. 9, 32-42.
ABOUT THE AUTHOR:
Ronald D. Snee, PhD, is founder and president of Snee Associates, a firm dedicated to the successful implementation of process and organizational improvement initiatives. He can be reached at Ron@SneeAssociates.com.
PHARMACEUTICAL PROCESSING | MAY 2014 29 ■