
Practicalities for QbD Implementation: Technology Enables Automated Batch Genealogy and Meets Process Development Requirements for Data Analysis

Wed, 01/28/2009 - 9:53am


More and more pharmaceutical and biotech companies have been getting into the practicalities of implementing Quality by Design (QbD). In today's turbulent economic times, it is more important than ever to make a concrete and compelling business case to justify the allocation of resources to QbD. Some manufacturers have developed compelling business cases around the benefits of lowering risks in process development and speeding up the pace of process development and tech transfer into manufacturing operations (captive or remote).


(Figure 1.)
One of the difficulties with this business case is the common belief that it's the "clinical area" that's the real critical path in getting a newly identified drug to market and not process development. However, experience shows that the rate of progress through clinical trials is determined not just by clinical trial design, patient recruitment and interpretation of results, but also by the availability of "test article" (i.e., the new drug itself) for those clinical trials. This, in turn, is heavily influenced by the level of innovation and the quality of science that goes on in process development.



This is especially true of new drugs from bioprocesses. Their inherently greater variability means that the sooner the sources of process variability are understood, controlled and accounted for in the Design Space, the more certain the new drug is to be available on an ongoing basis for clinical trials, the higher the quality (and approvability) of the Chemistry, Manufacturing and Controls (CMC) submission, and the faster the new drug can get onto the market. One calculation shows that this can be worth $1.4 million in bottom-line value for each day that an average new drug is on the market rather than still in development.

This article provides examples of two very important practical aspects of QbD implementation: 1) availability of on-demand data access that automatically accounts for batch genealogy in upstream-downstream correlations and 2) the specific requirements for flexible data access in process development for Design Space work. Both are enabled by technology available today that helps forward-thinking companies realize the business benefits of QbD.
On-demand Data Access

Automatic accounting for batch genealogy is a requirement for process understanding, and it also exemplifies the importance of flexible, on-demand data access to QbD. Lot traceability is usually the first thought triggered by the phrase "batch genealogy," but it is only one aspect of the required capability; it refers more to knowing which product lots to recall when there is a defect in an upstream material or process condition.

A less obvious but equally important aspect of batch genealogy arises when an upstream process condition is run within the approved range, but at several different points within that range. In these situations, it is often important to understand the influence of these upstream process conditions on downstream process outcomes, so that we can identify whether or not this is a source of unacceptable variability in a downstream critical quality attribute.

Looking at Figure 1, we see what happens when we have multiple batches all run within range for each process step, with four different conditions applied in the first step. There is mixing in Process Step 2. In Step 3 there is no mixing, but in the final step the mixing becomes much more complex. Making a correlation between the final process outcomes and the process conditions in Step 1 is very difficult, if not totally impractical, using spreadsheets, and it usually results in errors. Since splitting and pooling of the process stream happen often in the way most processes are operated these days, we need an automated way to account for them, using all the data gathered along the way about the cardinality and split ratios at each step, so that we can easily make meaningful upstream-downstream correlations.
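The split-and-pool accounting described above can be sketched in a few lines of code. The following is an illustrative model only (lot names, fractions and the edge structure are hypothetical, not any particular vendor's system): each edge records what fraction of a child lot came from each parent lot, and walking the genealogy backwards yields the weighted contribution of each Step-1 condition to any final batch.

```python
# Minimal sketch of automated batch genealogy under a simple
# weighted-edge model. All lot names and fractions are hypothetical.
from collections import defaultdict

# parent lot -> list of (child lot, fraction of child derived from parent)
edges = {
    "S1-A": [("S2-X", 0.5)],
    "S1-B": [("S2-X", 0.5)],          # pooling: S2-X is a 50/50 blend
    "S1-C": [("S2-Y", 1.0)],
    "S2-X": [("S3-P", 1.0)],
    "S2-Y": [("S3-Q", 1.0)],
    "S3-P": [("S4-F", 0.6)],
    "S3-Q": [("S4-F", 0.4)],          # pooling again at the final step
}

def upstream_fractions(final_lot):
    """Return {origin lot: fraction of final_lot traceable to it}."""
    # Invert the edge list: child -> [(parent, fraction)]
    parents = defaultdict(list)
    for p, kids in edges.items():
        for child, frac in kids:
            parents[child].append((p, frac))

    def walk(lot, weight):
        if lot not in parents:                 # reached a Step-1 origin
            yield lot, weight
            return
        for p, frac in parents[lot]:
            yield from walk(p, weight * frac)

    out = defaultdict(float)
    for origin, w in walk(final_lot, 1.0):
        out[origin] += w
    return dict(out)

print(upstream_fractions("S4-F"))
```

With the contribution fractions computed automatically for every final batch, correlating a Step-1 condition against a downstream critical quality attribute becomes a straightforward weighted regression rather than a spreadsheet exercise.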
Process Development Data Access Requirements

While QbD can be applied in manufacturing, it's best to begin in process development so that quality is "built in" to the process from the start. The goal of manufacturing is to run the same process the same way so that it produces the same outcome each time for as long as the product has a profitable market. The goal of process development, on the other hand, is to identify the best sequence of process operations, optimize the processing conditions, and then transfer and support the resulting process so that manufacturing can accomplish its goal. In effect, then, process development must have the kind of on-demand data access flexibility that enables easy evaluation of multiple sequences of processing operations, while manufacturing needs to access and review data for only one sequence of process operations. Flexible, on-demand access to on-line and off-line data in multiple databases and on paper records also allows the process development and manufacturing teams to collaborate effectively to achieve QbD. In this example, we see some of the unique data access needs of process development versus manufacturing.


(Figure 2.)
Let's look at a cell culture example, because cell culture processes are usually more complex than microbial processes. In this example, we start with a frozen vial that is sub-cultured into a t-flask to expand the number of cells and bring them into an uninhibited growth phase. We might put a portion through a roller bottle stage to seed a spinner which would then seed a fermentor and go through purification steps to end up with a filtered bulk.

This is just one specific use case or process sequence in which process development is evaluating the best sequence of unit operations, optimizing the individual unit operations and optimizing the whole sequence of unit operations together. It is also a standard technique in cell culture process development to keep reseeding roller bottles so that a source of inoculum is constantly available for use as a seed at any given moment. Other sequences of operations must also be evaluated and optimized at any time to find the best set of conditions for manufacturing.


(Figure 3.)
This ultimately means that process development must be able to "walk through" a matrix of unit operations in any rational sequence at any time, as shown in Figure 2. We want to be able to compare different sequences of unit operations and see them in detail to determine if the process parameters are optimized. This means that we need a system that captures off-line and on-line process and quality data (even from paper records) and makes it available on-demand to the people doing data analysis, so they can compare operations and/or sequences of unit operations. As a result, a process hierarchy for on-demand data access needs to be available that accommodates optional processing steps and knows the sequence of operations for a particular batch – this is very different from the needs for data access and analysis in manufacturing.
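One simple way to picture the matrix of unit operations is as an allowed-transitions graph. The sketch below is purely illustrative (the operation names follow the cell culture example, but the transition rules are assumed, not taken from any real process): any rational sequence, including optional steps and roller bottle reseeding, can be validated on demand against the graph.

```python
# Hypothetical unit-operation matrix modeled as an allowed-transitions
# graph. Operation names follow the article's cell culture example;
# the specific transition rules are illustrative assumptions.
ALLOWED = {
    "vial":          {"t-flask"},
    "t-flask":       {"roller bottle", "spinner"},   # roller bottle optional
    "roller bottle": {"roller bottle", "spinner"},   # reseeding allowed
    "spinner":       {"fermentor"},
    "fermentor":     {"purification"},
    "purification":  {"filtered bulk"},
}

def is_valid_sequence(steps):
    """True if every consecutive pair of steps is an allowed transition."""
    return all(b in ALLOWED.get(a, set()) for a, b in zip(steps, steps[1:]))

print(is_valid_sequence(
    ["vial", "t-flask", "roller bottle", "spinner", "fermentor"]))  # True
print(is_valid_sequence(["vial", "spinner"]))                       # False
```

A data access system built on such a graph can offer every valid sequence for comparison while rejecting sequences that were never run, which is exactly the flexibility process development needs and manufacturing does not.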


(Figure 4.)
To examine in more detail how this is accomplished, we must have a batch naming convention that embodies the allowed sequence of operations and branch points. This allows the data access system to automatically trace the sequence of operations used for any batch, knowing both the order of operations and where the branch points occurred. Now we have fulfilled all the requirements for a flexible, hierarchical view in which we can easily select specific batch sequences and compare them using the point-and-click system shown in Figure 2. This allows us, for example, to see in several different ways the periods of time in which the cells were growing at an uninhibited rate, using the recorded start and stop times for every process step.

With this information, we can easily prepare a traditional sawtooth graph (Figure 3) of time vs. viable cells, but on that graph it is very difficult to see whether the cells are growing at an uninhibited rate throughout each time period and in each branch of the process. It is more useful to prepare a line graph on which exponential growth appears as a straight line over time in each branch of the process, and this can be done easily using the data access and analysis system outlined above. We can also see the branches in parallel (Figure 4) or superimposed for easier growth rate comparisons (Figure 5).
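The check behind these graphs can be sketched numerically. Assuming each branch is a list of (hours, viable cells) samples (the data below is invented for illustration), a least-squares fit of ln(count) vs. time gives the specific growth rate, and uninhibited exponential growth shows up as a straight line on the log scale.

```python
# Sketch of the growth-rate comparison behind Figures 3-5.
# Sample data is illustrative, not from any real culture.
import math

def growth_rate(samples):
    """Least-squares slope of ln(viable cells) vs time (per hour)."""
    xs = [t for t, _ in samples]
    ys = [math.log(n) for _, n in samples]
    n = len(samples)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Branch A doubles every 24 h; branch B grows 1.5x every 24 h.
branch_a = [(0, 2.0e5), (24, 4.0e5), (48, 8.0e5), (72, 1.6e6)]
branch_b = [(0, 2.0e5), (24, 3.0e5), (48, 4.5e5), (72, 6.75e5)]

print(f"branch A: {growth_rate(branch_a):.4f}/h")
print(f"branch B: {growth_rate(branch_b):.4f}/h")
```

Computing one rate per branch and per time period makes the superimposed comparison of Figure 5 quantitative: branches whose fitted rates diverge, or whose residuals drift from the straight line, are the ones where growth is no longer uninhibited.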


(Figure 5.)
In summary, these examples demonstrate critically useful tools for on-demand data access and analysis in QbD implementations. They help to identify the sources of variability during process development so that those sources can be controlled, and risks reduced, as early as possible. With today's technology providing flexible, on-demand access to the right data, QbD can begin in process development, so that the best Design Space is transferred into manufacturing at process start-up rather than developed later, after unacceptable variability has been discovered. This approach also makes collaboration between the process development and manufacturing teams much easier – reducing tech transfer risks, shortening time to market and lowering costs, and thereby establishing a compelling business case to justify investments in enabling technologies for QbD.