Enzymes offer a more environmentally friendly, low-impact alternative to conventional chemistry, but they often require additional engineering for industrial settings, an endeavor that is challenging and laborious. To address this issue, machine learning can be harnessed to build predictive models that facilitate the in silico study and engineering of novel enzymatic properties. However, the conversion from the biological domain to the computational realm requires special attention to ensure that accurate and precise models are trained. In this review, we examine the critical step of encoding protein information as numeric representations suitable for machine learning. We selected the most important approaches for encoding the three distinct biological representations of a protein (primary sequence, 3D structure, and dynamics) and explore their requirements for use and their inherent biases. Combined representations of proteins and substrates are also introduced as emergent tools in biocatalysis. We propose a division into fixed representations, a collection of rule-based encoding strategies, and learned representations, extracted from the latent spaces of large neural networks. To select the most suitable protein representation, we identify two main factors governing this choice. The first is the model setup, which is influenced by the size of the training dataset and the choice of architecture. The second is the model objectives, which concern the assayed property, the difference between wild-type models and mutant predictors, and the requirements for explainability. This review is intended to serve as a source of information and guidance for properly representing enzymes in future machine learning models for biocatalysis.
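
As a minimal illustration of the distinction drawn above, the sketch below shows one of the simplest fixed, rule-based encodings of a primary sequence: a one-hot matrix over the 20 canonical amino acids. The alphabet ordering, function name, and example peptide are illustrative choices made here, not definitions taken from the review; learned representations would instead be extracted from the hidden layers of a pretrained protein language model.

```python
import numpy as np

# 20 canonical amino acids; the ordering here is an arbitrary choice for this sketch.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot_encode(sequence: str) -> np.ndarray:
    """Encode a primary sequence as an (L x 20) one-hot matrix.

    This is a fixed representation: each residue maps to the same binary
    vector regardless of its structural or evolutionary context.
    """
    matrix = np.zeros((len(sequence), len(AMINO_ACIDS)), dtype=np.float32)
    for position, residue in enumerate(sequence):
        matrix[position, AA_INDEX[residue]] = 1.0
    return matrix

# Example: a short peptide encoded for downstream use in a machine learning model.
features = one_hot_encode("MKTAYIAK")
print(features.shape)  # (8, 20)
```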