Unannotated or orphan enzymes vastly outnumber those for which the chemical structures of their substrates are known. While a number of enzyme function prediction algorithms exist, these often predict Enzyme Commission (EC) numbers or enzyme families, which limits their ability to generate experimentally testable hypotheses. Here, we harness protein language models, cheminformatics, and machine learning classification techniques to accelerate the annotation of orphan enzymes by predicting the chemical structural class of their substrates. We use the orphan enzymes of Mycobacterium tuberculosis as a case study, focusing on two protein families that are highly abundant in its proteome: the short-chain dehydrogenase/reductases (SDRs) and the S-adenosylmethionine (SAM)-dependent methyltransferases. Training machine learning classification models that take as input protein sequence embeddings obtained from a pre-trained, self-supervised protein language model results in excellent accuracy for a wide variety of prediction tasks. These include redox cofactor preference for SDRs; small-molecule vs. polymer (i.e., protein, DNA, or RNA) substrate preference for SAM-dependent methyltransferases; as well as more detailed chemical structural predictions for the preferred substrates of both enzyme families. We then use these trained classifiers to generate predictions for the full set of unannotated SDRs and SAM-dependent methyltransferases in the proteomes of M. tuberculosis and other mycobacteria, yielding a set of biochemically testable hypotheses. Our approach can be extended and generalized to other enzyme families and organisms, and we envision that it will help accelerate the annotation of a large number of orphan enzymes.
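To make the workflow concrete, the sketch below illustrates the general idea of training a classifier on protein language model embeddings; it is a minimal illustration, not the authors' exact pipeline. It assumes the fair-esm package (ESM-2, 650M-parameter model) and scikit-learn, and the sequences, labels, and the binary task (e.g., NADH vs. NADPH cofactor preference for SDRs) are hypothetical placeholders.

```python
# Minimal sketch: mean-pooled ESM-2 embeddings fed to a scikit-learn classifier.
# All sequences and labels below are hypothetical placeholders, not real data.
import torch
import esm  # fair-esm package
from sklearn.linear_model import LogisticRegression

# Hypothetical labelled examples: (identifier, amino-acid sequence) and binary labels
# encoding, e.g., cofactor preference (0 = NADH, 1 = NADPH) for annotated SDRs.
train_data = [
    ("sdr_example_1", "MKLVVTGGAGFIGSNFVRYLLNK"),
    ("sdr_example_2", "MSNRLQGKVALVTGAGSGIGLEA"),
]
train_labels = [0, 1]

# Hypothetical orphan enzyme for which a prediction is desired.
orphan_data = [("orphan_sdr", "MTDLKGKVAIVTGGASGIGRAVA")]


def embed(data, model, alphabet):
    """Return one fixed-length embedding per sequence by mean-pooling
    the final-layer per-residue representations (excluding BOS/EOS tokens)."""
    batch_converter = alphabet.get_batch_converter()
    _, _, tokens = batch_converter(data)
    with torch.no_grad():
        out = model(tokens, repr_layers=[33])
    reps = out["representations"][33]
    return [
        reps[i, 1 : len(seq) + 1].mean(dim=0).numpy()
        for i, (_, seq) in enumerate(data)
    ]


# Load a pre-trained, self-supervised protein language model.
model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
model.eval()

# Train a simple classifier on embeddings of annotated enzymes,
# then predict the substrate-related class of the orphan enzyme.
clf = LogisticRegression(max_iter=1000)
clf.fit(embed(train_data, model, alphabet), train_labels)
prediction = clf.predict(embed(orphan_data, model, alphabet))
print("Predicted class for orphan enzyme:", prediction[0])
```

In practice, any fixed-length embedding and any standard classifier (logistic regression, random forests, gradient boosting, etc.) can be slotted into this pattern; the key design choice is that the language model supplies the sequence representation, so no hand-crafted features or multiple sequence alignments are required.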