Driver drowsiness is a critical factor in road safety, and accurate detection models are essential. Transfer learning has proven effective for driver drowsiness detection because it enables models to leverage large, pre-existing datasets. However, tuning the hyperparameters of transfer learning models is challenging because the search space is large. This research presents an approach to hyperparameter tuning in transfer learning for driver drowsiness detection based on Bayesian optimization and random search. We evaluate the approach on a publicly available dataset using transfer learning models built on the MobileNetV2, Xception, and VGG19 architectures, and we examine the impact of hyperparameters such as the dropout rate, activation function, number of dense units, optimizer, and learning rate on the models' overall performance. Our experiments show that the approach improves the performance of all three models, achieving state-of-the-art results on the dataset. We also compare the two search algorithms and find that Bayesian optimization locates optimal hyperparameters more efficiently than random search. These results provide insights into the importance of hyperparameter tuning for transfer learning-based driver drowsiness detection across different architectures and can guide the selection of hyperparameters and models in future studies. The proposed approach is also applicable to other transfer learning tasks, making it a valuable contribution to the field of machine learning.
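To make the search concrete, the following is a minimal sketch of random search (one of the two methods compared) over the hyperparameters named in the abstract. The specific candidate values and the `objective` function are illustrative assumptions, not the paper's actual grid or evaluation; in a real run the objective would be the validation accuracy of a fine-tuned MobileNetV2, Xception, or VGG19 model on the drowsiness dataset.

```python
import random

# Search space over the hyperparameters studied: dropout rate, activation
# function, number of dense units, optimizer, and learning rate.
# Candidate values below are illustrative, not the paper's exact grid.
SPACE = {
    "dropout": [0.1, 0.2, 0.3, 0.4, 0.5],
    "activation": ["relu", "tanh", "elu"],
    "units": [64, 128, 256, 512],
    "optimizer": ["adam", "sgd", "rmsprop"],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def sample_config(rng):
    """Draw one hyperparameter configuration uniformly at random."""
    return {name: rng.choice(values) for name, values in SPACE.items()}

def objective(cfg):
    """Synthetic stand-in for validation accuracy of a fine-tuned model.
    A real objective would train the transfer learning model with `cfg`
    and return its score on a held-out split."""
    score = 0.7
    score += 0.1 * (cfg["optimizer"] == "adam")
    score += 0.1 * (cfg["learning_rate"] == 1e-3)
    score -= 0.05 * abs(cfg["dropout"] - 0.3)
    return score

def random_search(n_trials=30, seed=0):
    """Evaluate n_trials random configurations and keep the best one."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = sample_config(rng)
        score = objective(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

Bayesian optimization keeps the same space and objective but replaces the uniform sampler with a surrogate model that proposes the next trial based on previous results, which is why it typically needs fewer evaluations; in practice a library tuner (e.g., KerasTuner's Bayesian optimization tuner) would be used rather than hand-rolled code.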