Chromatographic processes can be optimized in various ways. However, the two most prominent approaches are based either on statistical data analysis or on experimentally validated simulation models. Both approaches rely heavily on experimental data, the generation of which usually imposes a significant bottleneck on rational process design. Hence, a closed-loop optimization strategy is proposed here, in which an automated high throughput liquid handling platform is combined with a genetic algorithm. This setup enables process optimization on the mini-scale and thus saves both time and material costs. The practicability and robustness of the proposed high throughput method are demonstrated with two exemplary optimization tasks: first, optimization of the buffer composition in the capture step for a binary protein mixture (lysozyme and cytochrome), and second, optimization of multilinear gradient elution for the separation of a ternary mixture (ribonuclease, cytochrome, and lysozyme).
Introduction

Chromatography is widely used as a separation technique in the biotechnological industry. High selectivity and gentle conditions have made it an essential step in current purification processes for biological macromolecules such as proteins. However, due to complex and dynamic interactions between protein molecules and adsorbent materials, the design of optimal separation processes is very difficult and time consuming. Heuristic design methods that are based on previous experience with similar separation problems require a great amount of expert knowledge and usually do not lead to the global process optimum. Furthermore, process optimization is often restricted by time-to-market requirements and must, hence, be performed as fast as possible.

Most methods for process optimization found in the literature today fall into two classes: model-based optimization and direct process optimization. In model-based optimization, mathematical models are utilized to mimic the studied processes. Optimization is performed in silico and thus has the clear advantage of not being restricted by lab schedules. Limiting factors are only the computational effort and the reliability or validity of the applied simulation models. The development of mechanistic models requires good process understanding, initial experiments for parameter estimation, and independent experiments for model validation. The latter is also true for black box models (for example [1,2]).
The determination of mechanistic model parameters, such as effective mass transfer coefficients and isotherm coefficients, is generally very complex and requires large amounts of material and time, especially when the interactions of realistic multi-component mixtures are considered without significant model simplifications.An alternative to the model-based approach is to directly identify process optima based on the results of experiments that are iteratively planned by an optimization algorithm, such as repeated design of experiments (DoE) or an evolutionary strat...
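The closed-loop idea described above — an optimization algorithm iteratively proposing experiments whose measured outcomes drive the next round of proposals — can be illustrated with a minimal sketch. The following is not the authors' implementation: the response surface in `run_experiment` is a synthetic stand-in for the robotic mini-scale chromatography runs, and the parameter names (salt concentration, pH), bounds, and GA settings are illustrative assumptions only.

```python
import random

# Hypothetical stand-in for the robotic platform: in the real closed loop,
# this would dispatch a batch of mini-scale experiments and return a
# measured objective (e.g. purity or yield) for each candidate setting.
def run_experiment(params):
    salt_mM, pH = params
    # Synthetic response surface, for illustration only (optimum at 150 mM, pH 5.5).
    return -((salt_mM - 150.0) / 100.0) ** 2 - (pH - 5.5) ** 2

BOUNDS = [(0.0, 500.0), (3.0, 9.0)]  # assumed ranges: salt [mM], pH

def clip(x, lo, hi):
    return max(lo, min(hi, x))

def genetic_step(population, rng, elite=2, sigma=(20.0, 0.3)):
    """One generation of a minimal elitist GA: rank candidates by the
    measured objective, keep the best few, and refill the population
    with Gaussian mutations of the survivors."""
    scored = sorted(population, key=run_experiment, reverse=True)
    parents = scored[:elite]
    children = []
    while len(parents) + len(children) < len(population):
        p = rng.choice(parents)
        child = tuple(clip(v + rng.gauss(0.0, s), lo, hi)
                      for v, s, (lo, hi) in zip(p, sigma, BOUNDS))
        children.append(child)
    return parents + children

rng = random.Random(42)
pop = [tuple(rng.uniform(lo, hi) for lo, hi in BOUNDS) for _ in range(8)]
for _ in range(20):  # each generation = one batch of parallel experiments
    pop = genetic_step(pop, rng)
best = max(pop, key=run_experiment)
```

Because each generation maps naturally onto one parallel batch of wells on a liquid handling deck, population-based methods of this kind fit high throughput experimentation better than strictly sequential optimizers.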