Many biomedical ontologies have now been developed, stimulated by their increasing importance in the scientific community. Most ontology development efforts have required the participation not only of ontology engineers but also of domain experts. This should help ensure the veracity of the domain knowledge, but not necessarily the quality of the ontology's engineering. In fact, the quality of ontologies varies widely when one or more of these expert competencies is missing (d'Aquin and Gangemi (2011)).

Measuring the quality of the resulting ontologies is necessary in order to monitor the extent to which, and how well, methodologies, practices and guidelines are being applied. In recent years, a series of evaluation techniques and tools have been developed (see, for instance, Gangemi et al. (2006); Obrst et al. (2007); Vrandečić (2010)). However, the Ontology Summit Communiqué 2013 (Neuhaus et al. (2013)) found that such tools and techniques are not widely used in the development of ontologies, which can lead to ontologies of poor quality and, consequently, is an obstacle to the success of ontologies. Some ontology construction methodologies have developed their own methods for evaluating their ontologies, but such methods have not been used to evaluate ontologies developed by others. Indeed, there is a lack of practical experience and scientific literature on the application of general evaluation methods to ontologies created with different methodologies and guidelines.

In recent years, the ISO 25000 Software Product Quality Requirements and Evaluation (SQuaRE) standard (ISO25000 (2005)) has been adapted to ontology evaluation with the aim of providing a generic framework for objective, reproducible evaluation. This framework, called OQuaRE, proposes the use of metrics to evaluate the quality characteristics of ontologies. OQuaRE has been successfully applied to the evaluation of different types of ontologies (Duque-Ramos et al. (2013); Bennett et al. (2013)) and has been able to draw conclusions similar to those of specific evaluation methods, such as the GoodOD guideline (Boeker et al. (2013); Duque-Ramos et al. (2014)). However, the evaluation by external experts also revealed areas for improvement (Duque-Ramos et al. (2013)), including the need to evaluate against clear requirements, which is also a recommendation of the Ontology Summit Communiqué 2013.

The evolution from construction guidelines and methodologies to evaluation metrics requires a deep understanding of the possibilities and limitations of metrics-based evaluation, as well as community effort, discussion and agreement. This is one of the major challenges for the ontology engineering field in the coming years.