Knowledge extraction through machine learning techniques has been successfully applied in a large number of application domains. However, beyond the required technical knowledge and background in the application domain, it usually involves a number of time-consuming and repetitive steps. Automated machine learning (AutoML) emerged in 2014 as an attempt to mitigate these issues, making machine learning methods more accessible to both data scientists and domain experts. AutoML is a broad area encompassing a wide range of approaches that automate specific phases of the knowledge discovery process with dedicated techniques. To provide a big picture of the whole area, we have conducted a systematic literature review based on a proposed taxonomy that permits categorising 447 primary studies selected from a search of 31,048 papers. This review performs an extensive and rigorous analysis of the AutoML field, scrutinising how the primary studies address the dimensions of the taxonomy and identifying unexplored gaps as well as potential future trends. The analysis of these studies has yielded some intriguing findings. For instance, we have observed a significant growth in the number of publications since 2018. Additionally, it is noteworthy that the algorithm selection problem has gradually been superseded by the challenge of workflow composition, which automates more than one phase of the knowledge discovery process simultaneously. Of all the tasks in AutoML, the growth of neural architecture search is particularly noticeable.
Multi-objective optimization problems frequently appear in many diverse research areas and application domains. Metaheuristics, as efficient techniques to solve them, need to be easily accessible to users with different expertise and programming skills. In this context, metaheuristic optimization frameworks are helpful, as they provide popular algorithms, customizable components and additional facilities to conduct experiments. Given the broad range of available tools, this paper presents a systematic evaluation and experimental comparison of 10 frameworks, ranging from multi-purpose, consolidated tools to recent libraries specifically designed for multi-objective optimization. The evaluation is organized around seven characteristics: search components and techniques, configuration, execution, utilities, external support and community, software implementation and performance. An analysis of code metrics and a series of experiments serve to assess the last two characteristics. Lessons learned and open issues are also discussed as part of the comparative study. The outcomes of the evaluation process reveal uneven support for recent advances in multi-objective optimization, with a shortage of novel algorithms and of metaheuristics other than evolutionary algorithms. The experimental comparison also reports significant differences in terms of both execution time and memory usage under demanding configurations.
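As a brief illustration of what such frameworks build on, the sketch below shows Pareto dominance and extraction of the non-dominated front, the core primitives underlying multi-objective metaheuristics such as NSGA-II. This is a generic, self-contained example and is not taken from any of the ten evaluated frameworks; the sample objective vectors are invented.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def non_dominated(points):
    """Return the Pareto front: points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]


# Example: bi-objective solutions (e.g. cost vs. makespan), both minimized.
solutions = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(non_dominated(solutions))  # -> [(1, 5), (2, 3), (4, 1)]
```

Frameworks differ mainly in how they wrap such primitives with selection, variation and archiving components; the naive front extraction above is quadratic in the population size, which is one reason performance varies across implementations.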