A non-target analysis method for unexpected contaminants in food is described. Many current methods referred to as “non-target” can detect hundreds or even thousands of contaminants, yet any contaminant outside their scope is still missed. Instead, a metabolomics approach can be used to achieve “true non-target” analysis. In the present work, such a method was optimized for improved detection capability at low concentrations. The method was evaluated using 19 chemically diverse model compounds spiked into milk samples to mimic unknown contamination; other milk samples served as reference samples. All samples were analyzed with ultra-high-performance liquid chromatography time-of-flight mass spectrometry (UHPLC-TOF-MS), using reversed-phase chromatography and electrospray ionization in positive mode. Data evaluation was performed with the software TracMass 2. No target lists of specific compounds were used to search for the contaminants; instead, the software was used to sort out all features occurring only in the spiked-sample data, i.e., the workflow resembled a metabolomics approach. Procedures for chemical identification of peaks were outside the scope of the study. The method, study design, and software settings were optimized to minimize manual evaluation and faulty or irrelevant hits and to maximize the hit rate for the spiked compounds. A practical detection limit was established at 25 μg/kg. At this concentration, most compounds (17 out of 19) were detected as intact precursor ions, as fragments, or as adducts. Only 2 irrelevant hits, probably natural compounds, were obtained. Limitations and possible practical use of the approach are discussed.

Electronic supplementary material: The online version of this article (10.1007/s00216-018-1028-4) contains supplementary material, which is available to authorized users.
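The core filtering step above (keeping only features present in spiked samples and absent from reference samples) amounts to a set difference over (m/z, retention time) pairs. A minimal sketch follows; the function name, tolerance values, and feature lists are illustrative assumptions, not taken from TracMass 2:

```python
# Hedged sketch of "sort out all features only occurring in the spiked sample
# data": a tolerance-based set difference on (m/z, retention time) pairs.

def unique_features(spiked, reference, mz_tol=0.005, rt_tol=0.1):
    """Return spiked-sample features with no (m/z, RT) match in the reference set."""
    hits = []
    for mz, rt in spiked:
        matched = any(abs(mz - rmz) <= mz_tol and abs(rt - rrt) <= rt_tol
                      for rmz, rrt in reference)
        if not matched:
            hits.append((mz, rt))
    return hits

# Invented example features: one spiked feature (180.0655, 2.1) matches the
# reference within tolerance and is discarded; the other two survive as hits.
spiked_features = [(301.1410, 5.2), (180.0655, 2.1), (449.1078, 7.8)]
reference_features = [(180.0650, 2.1), (255.0899, 4.0)]
print(unique_features(spiked_features, reference_features))
```

In a real workflow the tolerance windows would be set from instrument mass accuracy and retention-time reproducibility, and replicate reference samples would be pooled before the comparison.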
Maximizing the value of each available data point in bioprocess development is essential to reduce time-to-market, lower the number of expensive wet-lab experiments, and maximize process understanding. Advanced in silico methods are increasingly being investigated to accomplish these goals. Within this contribution, we propose a novel integrated process model procedure to maximize the use of development data to optimize the Stage 1 process validation workflow. We generate an integrated process model from available data and apply two innovative linearization techniques, based on Monte Carlo simulation and parameter sensitivity analysis, to automate two quality-by-design activities: determining risk-assessment severity rankings and establishing preliminary control strategies for critical process parameters. These procedures are assessed in a proof-of-concept case study on a candidate monoclonal antibody bioprocess after process development, but prior to process characterization. The evaluation successfully returned results that were used to support Stage 1 process validation milestones and demonstrated the potential to reduce the number of investigated parameters by up to 24% in process characterization, while simultaneously setting up a strategy for iterative updates of risk assessments and process controls throughout the process life cycle to ensure a robust and efficient drug supply.
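A Monte Carlo sensitivity analysis with a linearization step, as named above, can be sketched in a few lines: sample process parameters, propagate them through a process model, and rank parameters by standardized regression coefficients. The parameter ranges and the toy response surface below are invented for illustration and do not come from the case study:

```python
# Hedged sketch: Monte Carlo parameter sensitivity analysis, linearized via a
# least-squares fit on standardized inputs. The resulting effect sizes can seed
# risk-assessment severity rankings for critical process parameters.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Assumed parameter ranges (low, high); purely illustrative.
ranges = {"temperature": (35.0, 37.5), "pH": (6.8, 7.2), "feed_rate": (0.8, 1.2)}
samples = {p: rng.uniform(lo, hi, n) for p, (lo, hi) in ranges.items()}

def toy_cqa(temperature, pH, feed_rate):
    # Placeholder response surface standing in for the integrated process model.
    return 2.0 * temperature - 5.0 * pH + 0.5 * feed_rate

y = toy_cqa(**samples)

# Linearization: standardized regression coefficients approximate each
# parameter's contribution to CQA variability.
X = np.column_stack([(v - v.mean()) / v.std() for v in samples.values()])
coefs, *_ = np.linalg.lstsq(X, y - y.mean(), rcond=None)
for name, c in sorted(zip(ranges, np.abs(coefs)), key=lambda t: -t[1]):
    print(f"{name}: standardized effect ~ {c:.3f}")
```

Here temperature dominates despite pH having the largest raw coefficient, because the standardized effect weights each coefficient by the parameter's sampled spread.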
Biopharmaceutical manufacturing processes can be affected by variability in cell culture media, e.g., caused by raw material impurities. Although efforts have been made in industry and academia to characterize cell culture media and raw materials with advanced analytics, the process of industrial cell culture media preparation itself has not been reported so far. Within this publication, we first compare mid-infrared and two-dimensional fluorescence spectroscopy with respect to their suitability as online monitoring tools during cell culture media preparation, followed by a thorough assessment of the impact of preparation parameters on media quality. Through the application of spectroscopic methods, we show that media variability and its corresponding root cause can be detected online during the preparation process. This methodology is a powerful tool for avoiding batch failure and a valuable technology for media troubleshooting activities. Moreover, in a design-of-experiments approach, including additional liquid chromatography-mass spectrometry analytics, it is shown that variable preparation parameters such as temperature, power input, and preparation time can have a strong impact on the physico-chemical composition of the media. The effect on cell culture process performance and product quality in subsequent fed-batch processes was also investigated. The presented results reveal the need for online spectroscopic methods during the preparation process and show that media variability can already be introduced by variation in media preparation parameters, with a potential impact on scale-up to a commercial manufacturing process.
Intermediate acceptance criteria are the foundation for developing control strategies in process validation Stage 1 in the pharmaceutical industry. At the drug substance or drug product level, such acceptance criteria for quality are available and referred to as specification limits. However, it often remains a challenge to define acceptance criteria for intermediate process steps. Available guidelines underpin the importance of intermediate acceptance criteria, because they are an integral part of setting up a control strategy for the manufacturing process. The guidelines recommend basing the definition of acceptance criteria on the entirety of process knowledge; nevertheless, they remain unclear on how to derive such limits. Within this contribution we present a sound data-science methodology for the definition of intermediate acceptance criteria by putting the guidelines' recommendations into practice (ICH Q6B, 1999). Using an integrated process model approach, we leverage manufacturing data and experimental data from small scale to derive intermediate acceptance criteria. The novelty of this approach is that the acceptance criteria are based on pre-defined out-of-specification probabilities, while also considering manufacturing variability in process parameters. In a case study we compare this methodology to a conventional +/- 3 standard deviations (3SD) approach and demonstrate that the presented methodology is superior to conventional approaches and provides a solid line of reasoning for justifying the derived criteria in audits and regulatory submissions.
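The contrast between +/- 3SD limits and limits tied to a pre-defined out-of-specification (OOS) probability can be illustrated with a small simulation. The step model, specification limits, and noise level below are assumptions made for the sketch, not values from the contribution:

```python
# Hedged sketch: intermediate acceptance criteria from an OOS-probability
# target vs. a conventional mean +/- 3SD rule.
import numpy as np

rng = np.random.default_rng(1)
intermediate = rng.normal(50.0, 2.0, 500)   # observed intermediate-step data

# Conventional approach: mean +/- 3 standard deviations of the observed data.
lo_3sd = intermediate.mean() - 3 * intermediate.std()
hi_3sd = intermediate.mean() + 3 * intermediate.std()

# OOS-probability approach: propagate candidate intermediate values through an
# assumed step model to the specification-bearing attribute, then keep the
# range where predicted P(OOS) stays below 1%.
SPEC_LO, SPEC_HI = 88.0, 112.0              # assumed drug-substance spec limits

def p_oos(x, n=20_000):
    quality = 2.0 * x + rng.normal(0.0, 2.0, n)   # toy step model + residual noise
    return np.mean((quality < SPEC_LO) | (quality > SPEC_HI))

grid = np.linspace(40.0, 60.0, 201)
ok = [x for x in grid if p_oos(x) <= 0.01]
lo_oos, hi_oos = min(ok), max(ok)
print(f"3SD limits: [{lo_3sd:.1f}, {hi_3sd:.1f}]  OOS-based: [{lo_oos:.1f}, {hi_oos:.1f}]")
```

In this toy setup the 3SD limits only describe historical scatter of the intermediate, whereas the OOS-based limits are anchored to the downstream specification, which is the line of reasoning the methodology provides for audits.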
Objective: Random effects are often neglected when defining the control strategy for a biopharmaceutical process. In this article, we present a case study that highlights the importance of considering the variance introduced by random effects in the calculation of proven acceptable ranges (PARs), which form the basis of the control strategy. Methods: Linear mixed models were used to model relations between process parameters and critical quality attributes (CQAs) in a set of unit operations that comprises a typical biopharmaceutical manufacturing process. Fitting such models yields estimates of fixed and random effect sizes as well as random and residual variance components. To form PARs, tolerance intervals specific to mixed models were applied that incorporate the random effect contribution to variance. Results: We compared standardized fixed and random effect sizes for each unit operation and CQA. The results show that the investigated random effect is not only significant but, in some unit operations, even larger than the average fixed effect. A comparison between ordinary least squares and mixed-model tolerance intervals shows that neglecting the contribution of the random effect can result in PARs that are too optimistic. Conclusions: Uncontrollable effects such as week-to-week variability play a major role in process variability and can be modelled as a random effect. Following a workflow such as the one suggested in this article, random effects can be incorporated into a statistically sound control strategy, leading to fewer out-of-specification results and reduced patient risk.
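Why a week-to-week random effect matters for interval width can be sketched with a one-way variance-component decomposition; this is a simplified stand-in for a full mixed-model fit, and all data, effect sizes, and tolerance factors below are invented for illustration:

```python
# Hedged sketch: estimating between-week (random) and within-week (residual)
# variance components from simulated data, then comparing interval widths.
import numpy as np

rng = np.random.default_rng(2)
weeks, per_week = 8, 6
week_effect = rng.normal(0.0, 1.5, weeks)   # assumed week random effect (sd 1.5)
data = np.array([[10.0 + w + rng.normal(0.0, 0.5) for _ in range(per_week)]
                 for w in week_effect])     # residual sd 0.5

within = data.var(axis=1, ddof=1).mean()    # residual variance component
between = max(data.mean(axis=1).var(ddof=1) - within / per_week, 0.0)
total_sd = np.sqrt(between + within)
mean = data.mean()

# A naive OLS-style view treats all 48 points as independent; the mixed-model
# view recognizes only 8 week means, so the same coverage requires a larger
# tolerance factor. The factors here are placeholders, not exact computations.
k_naive, k_mixed = 2.0, 2.6
print(f"between-week var ~ {between:.2f}, within-week var ~ {within:.2f}")
print(f"naive interval: {mean - k_naive*total_sd:.2f} .. {mean + k_naive*total_sd:.2f}")
print(f"mixed interval: {mean - k_mixed*total_sd:.2f} .. {mean + k_mixed*total_sd:.2f}")
```

With these assumed effect sizes the between-week component dominates the residual one, mirroring the article's finding that the random effect can exceed the average fixed effect; ignoring it makes the interval look too optimistic.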