Practicalities for QbD Implementation: Technology Enables Automated Batch Genealogy and Meets Process Development Requirements for Data Analysis
Wed, 01/28/2009 - 9:53am
More and more pharmaceutical and biotech companies are getting into the practicalities of implementing Quality by Design (QbD). In today's turbulent economic times, it is more important than ever to make a concrete and compelling business case to justify the allocation of resources to QbD. Some manufacturers have built their cases around the benefits of lowering risk in process development and speeding the pace of process development and tech transfer into manufacturing operations (captive or remote).
This is especially true of new drugs from bioprocesses. Their inherently greater variability means that the sooner the sources of process variability are understood, controlled and accounted for in the Design Space, the more certain the new drug is to be available on an ongoing basis for clinical trials, the higher the quality (and approvability) of the Chemistry, Manufacturing and Controls (CMC) submission, and the faster the new drug can get onto the market. One calculation puts the value at $1.4 million to the bottom line for each day an average new drug reaches the market sooner rather than remaining in development.
This article provides examples of two very important practical aspects of QbD implementation: 1) availability of on-demand data access that automatically accounts for batch genealogy in upstream-downstream correlations and 2) the specific requirements for flexible data access in process development for Design Space work. Both are supported by technology available today that can help forward-thinking companies better realize the business benefits they seek from QbD.
On-demand Data Access
Automatic accounting for batch genealogy is a requirement for process understanding, and it also exemplifies the importance of flexible on-demand data access to QbD. Lot traceability is usually the first thought that's triggered by the phrase "Batch Genealogy," but this is only one aspect of the required capability and it refers more to knowing which product lots to recall when there is a defect in an upstream material or process condition.
A less obvious but very important aspect of batch genealogy arises when an upstream process condition is run within the approved range, but at several different points within that range. In these situations, it is often important to understand the influence of these upstream process conditions on the downstream process outcomes so that we can identify whether or not they are a source of unacceptable variability in a downstream critical quality attribute.
Looking at Figure 1, we see what happens when we have multiple batches all run within range for each process step, with four different conditions applied in the first step. There is mixing in Process Step 2. In Step 3 there is no mixing, but in the final step the mixing becomes much more complex. Making a correlation between the final process outcomes and the process conditions in Step 1 is very difficult, if not totally impractical, using spreadsheets, and it usually results in errors. Since splitting and pooling of the process stream happens often in the way most processes are operated these days, we need an automated way to account for it, using all the data gathered along the way about the parentage and split ratios at each step, so that we can easily make meaningful upstream-downstream correlations.
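To make the idea concrete, here is a minimal sketch of how such an automated accounting might work. The lot IDs, split ratios, and condition values below are illustrative, not from the article's figure: each downstream lot records which parent lots fed it and in what proportion, and a Step-1 condition is propagated down the genealogy as a mass-weighted average so it can be correlated with a final-lot outcome.

```python
# Hypothetical genealogy: each lot maps to the (parent lot, fraction of
# this lot contributed by that parent) pairs that fed it. Lots A1/A2 are
# Step-1 batches; F1 is a final pooled lot. All values are illustrative.
genealogy = {
    "F1": [("C1", 0.6), ("C2", 0.4)],
    "C1": [("B1", 1.0)],
    "C2": [("B1", 0.5), ("B2", 0.5)],
    "B1": [("A1", 1.0)],
    "B2": [("A2", 1.0)],
}

# Step-1 process condition (e.g. a set-point run at different points
# within the approved range) recorded for each root lot.
step1_condition = {"A1": 37.0, "A2": 35.5}

def lineage_weighted_condition(lot):
    """Mass-weighted average of the Step-1 condition over a lot's ancestry."""
    if lot in step1_condition:          # reached a Step-1 root batch
        return step1_condition[lot]
    return sum(frac * lineage_weighted_condition(parent)
               for parent, frac in genealogy[lot])

# The effective Step-1 condition "seen" by final lot F1, ready to be
# paired with F1's critical quality attribute in a correlation study.
print(lineage_weighted_condition("F1"))
```

With this value computed per final lot, the upstream-downstream correlation reduces to an ordinary scatter plot or regression of outcome versus lineage-weighted condition, which is exactly the step that becomes error-prone when attempted by hand in spreadsheets.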
Process Development Data Access Requirements
While QbD can be applied in manufacturing, it's best to begin in process development so that quality is "built in" to the process from the start. The goal of manufacturing is to run the same process the same way so that it produces the same outcome each time for as long as the product has a profitable market. The goal of process development, on the other hand, is to identify the best sequence of process operations, optimize the processing conditions, and then transfer and support the resulting process so that manufacturing can accomplish its goal. In effect, then, process development must have the kind of on-demand data access flexibility that enables easy evaluation of multiple sequences of processing operations, while manufacturing needs to access and review the data for only one sequence of process operations all the time. Flexible, on-demand access to on-line and off-line data in multiple databases and on paper records also allows the process development and manufacturing teams to collaborate effectively to achieve QbD. In this example, we see some of the unique aspects of the resulting data access needs in process development versus manufacturing.
This is just one specific use case or process sequence in which process development is evaluating the best sequence of unit operations, optimizing the individual unit operations and optimizing the whole sequence together. It is also a standard technique in cell culture process development to keep reseeding roller bottles so that a source of inoculum is constantly available for use as a seed at any given moment. Other sequences of operations must also be evaluated and optimized to find the best set of conditions for manufacturing.
With this information, we can easily prepare a traditional sawtooth graph (Figure 3) of time vs. viable cells, but it is very difficult to see from it whether the cells are growing at an uninhibited rate throughout each time period and in each branch of the process. It is more useful to prepare a line graph in which we can see that exponential growth is maintained over time in each branch, and this is easily done using the data access and analysis system outlined above. We can also view the branches in parallel (Figure 4) or superimposed for easier growth rate comparisons (Figure 5).
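The growth-rate check behind those graphs can be sketched as follows. This is not the article's system, just a minimal illustration with made-up time points and cell densities: for each branch, fit ln(viable cells) vs. time by least squares; if the culture is in uninhibited exponential growth, the points fall on a straight line whose slope is the specific growth rate.

```python
import math

def specific_growth_rate(times_h, viable_cells):
    """Least-squares slope of ln(cells) vs. time, i.e. mu in x = x0*exp(mu*t)."""
    n = len(times_h)
    logs = [math.log(x) for x in viable_cells]
    t_mean = sum(times_h) / n
    y_mean = sum(logs) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in zip(times_h, logs))
    den = sum((t - t_mean) ** 2 for t in times_h)
    return num / den

# Illustrative branch: viable cell counts doubling roughly every 24 hours.
branch = {"t": [0, 24, 48, 72], "cells": [1.0e5, 2.0e5, 4.1e5, 8.0e5]}
mu = specific_growth_rate(branch["t"], branch["cells"])
print(f"mu = {mu:.4f} per hour, doubling time = {math.log(2) / mu:.1f} h")
```

Comparing mu across branches, or watching it drop within a branch, is the quantitative version of the superimposed growth curves: a falling slope flags inhibited growth long before it is obvious on a sawtooth plot.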