Machine learning solutions have been successfully applied to problems of widely varying complexity. However, their development still relies on human experts to perform tasks such as data preprocessing, feature engineering, and model selection. As the complexity of these tasks grows, so does the demand for automated solutions, a field known as Automated Machine Learning (AutoML). Most algorithms employed in these systems have hyperparameters whose configuration can directly affect their predictive performance, making hyperparameter tuning a recurring task in AutoML systems. This thesis investigated how to efficiently automate hyperparameter tuning by means of Meta-learning. To this end, large-scale experiments were performed to tune the hyperparameters of several classification algorithms, and an enhanced experimental methodology was adopted throughout the thesis to explore and learn their hyperparameter profiles. The results showed that, in many cases, the default hyperparameter settings induced models that were on par with those obtained by tuning. Hence, a new Meta-learning recommender system was proposed to identify, for each new dataset, when it is better to use default values and when to tune the classification algorithms. The proposed system generalizes several learning processes into a single modular framework, in which different algorithms can be assigned. Furthermore, a descriptive analysis of the model predictions was used to identify which data characteristics affect the need for tuning for each of the algorithms investigated in the thesis. Experimental results also demonstrated that the proposed recommender system reduced the time spent on optimization without reducing the predictive performance of the induced models. Depending on the target algorithm, the Meta-learning recommender system statistically outperformed the baselines. These results open a number of new avenues for future work.
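
At its core, the recommender described above can be framed as a meta-level binary classification problem: each dataset becomes one meta-example, described by meta-features and labeled by whether tuning beat the defaults on it. The following is a minimal sketch of that framing; the synthetic meta-data, the four meta-features, and the choice of a Random Forest as the meta-learner are illustrative assumptions, not the exact configuration used in the thesis.

```python
# Minimal sketch of a tune-vs-default meta-learning recommender.
# All data here is synthetic and the meta-features are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Meta-dataset: one row per dataset, described by assumed meta-features,
# e.g. [n_instances, n_features, n_classes, class entropy].
X_meta = rng.random((150, 4))

# Meta-target: 1 if tuning significantly improved performance on that
# dataset, 0 if the default hyperparameters were on par with tuning.
y_meta = rng.integers(0, 2, size=150)

# Meta-learner: the framework is modular, so any classifier could be
# assigned here; a Random Forest is used purely for illustration.
meta_learner = RandomForestClassifier(n_estimators=100, random_state=42)

# Estimate how well the recommender generalizes to unseen datasets.
scores = cross_val_score(meta_learner, X_meta, y_meta, cv=5)
print(f"Meta-level accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# For a new dataset, extract its meta-features and ask the recommender
# whether hyperparameter tuning is worth the optimization cost.
meta_learner.fit(X_meta, y_meta)
new_dataset = rng.random((1, 4))
print("Recommendation:",
      "tune" if meta_learner.predict(new_dataset)[0] else "use defaults")
```

The time savings reported above follow directly from this design: predicting a meta-label costs far less than running a full hyperparameter optimization, so tuning is only paid for on the datasets where it is expected to help.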