Modern machine learning methods, and their use alongside established
paradigms such as Quality by Design, have the potential to fundamentally
change the way bioprocesses are developed. In particular, horizontal
knowledge transfer methods, which seek to exploit data from historical
processes to facilitate process development for a new product, provide
an opportunity to rethink process development workflows. In this work,
we first assess the potential of two knowledge transfer approaches,
meta-learning and one-hot encoding, in combination with Gaussian process
(GP) models. We compare their performance to that of GPs trained only on
data from the new process. Using simulated mammalian cell cultivation
data, we observe that both knowledge transfer approaches outperform the
individual-product approach. In the second part, we address the question of
whether experiments for a new product could be designed more effectively
by exploiting existing knowledge. In particular, we suggest designing a
small number of dedicated runs for the novel product specifically to
calibrate knowledge transfer models, a task that we term calibration
design. To identify such a set of calibration design runs, we propose a
novel, customised metric that exploits differences in the process
evolution of historical products. In two simulated case studies, we
observe that training with calibration designs yields test set errors
similar to those of common Design of Experiments approaches while
requiring far fewer experiments, suggesting an interesting
alternative for future bioprocess development. Overall, the results
suggest that process development could be significantly streamlined by
systematically carrying knowledge from one product to the next.