2019
DOI: 10.26434/chemrxiv.11303606
Preprint

Benchmarking the Acceleration of Materials Discovery by Sequential Learning

Abstract: Sequential learning (SL) strategies, i.e., iteratively updating a machine learning model to guide experiments, have been proposed to significantly accelerate materials discovery and research. Applications on computational datasets and a handful of optimization experiments have demonstrated the promise of SL, motivating a quantitative evaluation of its ability to accelerate materials discovery, specifically in the case of physical experiments. The benchmarking effort in the present work quantifies the performan…
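
Since the abstract only names the sequential-learning loop, a minimal sketch may help make it concrete. This is not the paper's implementation; the Gaussian-process surrogate, the expected-improvement acquisition rule, and the finite candidate pool are illustrative assumptions.

```python
# Minimal sequential-learning loop sketch (illustrative; not the paper's code).
# Assumptions: a finite candidate pool, a Gaussian-process surrogate, and a
# maximum-expected-improvement acquisition rule are all placeholder choices.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(mu, sigma, best):
    """Expected improvement of each candidate over the current best value."""
    sigma = np.maximum(sigma, 1e-12)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

def sequential_learning(pool_X, measure, n_init=5, n_iter=20, seed=0):
    """Iteratively pick, measure, and refit: the core SL loop."""
    rng = np.random.default_rng(seed)
    idx = list(rng.choice(len(pool_X), size=n_init, replace=False))
    y = [measure(pool_X[i]) for i in idx]          # initial experiments
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(normalize_y=True).fit(pool_X[idx], y)
        mu, sigma = gp.predict(pool_X, return_std=True)
        ei = expected_improvement(mu, sigma, max(y))
        ei[idx] = -np.inf                          # never re-measure a candidate
        nxt = int(np.argmax(ei))                   # next experiment to run
        idx.append(nxt)
        y.append(measure(pool_X[nxt]))             # "run" the experiment
    return idx, y
```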

Cited by 9 publications (15 citation statements)
References 27 publications
“…However, most prior work focuses on either optimization, which lacks interpretability and transferability when the target changes [5], or inverse design using regression, which relies on a static dataset [20]. Algorithm selection using information criteria such as the Akaike information criterion (AIC) [26] and the Bayesian information criterion (BIC) [27] could be used to maximize the time- and resource-efficiency of closed-loop laboratories, e.g., by leveraging co-evolution, physics-fusion, and related strategies [28,29]. While it might not be the only possible ML architecture for such a problem, our approach attempts to combine efficient optimization, interpretability, and knowledge extraction.…”
Section: Discussion
confidence: 99%
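
The information-criterion selection mentioned in this statement can be illustrated with a short sketch; the Gaussian-residual log-likelihood and the polynomial candidate models below are assumptions chosen for illustration, not part of the cited work.

```python
# Illustrative sketch of model selection via AIC / BIC (assumes Gaussian residuals;
# the candidate models and data are placeholders, not from the cited work).
import numpy as np

def aic_bic(y, y_pred, k):
    """AIC and BIC for a model with k fitted parameters and Gaussian residuals."""
    n = len(y)
    rss = np.sum((y - y_pred) ** 2)
    log_l = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)  # maximized log-likelihood
    aic = 2 * k - 2 * log_l
    bic = k * np.log(n) - 2 * log_l
    return aic, bic

# Usage: compare surrogate models of increasing complexity and keep the one
# with the lowest criterion before spending further experimental budget.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(40)
for degree in (1, 3, 5, 9):
    coeffs = np.polyfit(x, y, degree)
    aic, bic = aic_bic(y, np.polyval(coeffs, x), k=degree + 1)
    print(f"degree={degree}: AIC={aic:.1f}, BIC={bic:.1f}")
```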
“…To quantify the acceleration of discovery from BO, we adapt two other metrics similar to the ones from Rohr et al. [28].…”
Section: Planning Inference
confidence: 99%
“…is the ratio of cycle numbers, showing how much faster one could reach a specific value Top%(i_BO) = Top%(i_random) = a ∈ [0, 1]. The aggregated performance of BO algorithms is further quantified via the EF and AF curves in Figure 3; algorithm selection varies with the assigned experiment budget and the specific optimization task [28].…”
Section: Planning Inference
confidence: 99%
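
The two metrics quoted above, enhancement-factor (EF) and acceleration-factor (AF) curves relative to random sampling, can be sketched as follows. The Top% curve inputs and the exact formulas are assumptions chosen to match the quoted description, not the citing paper's code.

```python
# Sketch of enhancement-factor (EF) and acceleration-factor (AF) style metrics,
# in the spirit of the benchmarking metrics of Rohr et al. referenced above.
# Assumption: top_frac_bo[i] / top_frac_rand[i] give the fraction of the
# top-performing candidates found after i+1 cycles (a "Top%" curve per strategy).
import numpy as np

def enhancement_factor(top_frac_bo, top_frac_rand):
    """EF(i): ratio of Top% reached by BO vs. random sampling at the same cycle."""
    return np.asarray(top_frac_bo) / np.maximum(np.asarray(top_frac_rand), 1e-12)

def acceleration_factor(top_frac_bo, top_frac_rand, a):
    """AF(a): how many times fewer cycles BO needs than random to reach Top% = a."""
    i_bo = np.argmax(np.asarray(top_frac_bo) >= a) + 1      # first cycle reaching a
    i_rand = np.argmax(np.asarray(top_frac_rand) >= a) + 1
    return i_rand / i_bo

# Example with made-up discovery curves (40 cycles each).
bo = np.minimum(1.0, np.linspace(0.05, 1.3, 40))
rand = np.linspace(0.025, 1.0, 40)
print("EF at cycle 10:", enhancement_factor(bo, rand)[9])
print("AF to reach Top% = 0.5:", acceleration_factor(bo, rand, a=0.5))
```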
“…Theory-guided, high-throughput experimental discovery has recently been applied successfully to complex materials discovery problems in both photoelectrochemistry [183] and flow batteries [184]. The large data sets that are emerging from these studies now offer opportunities for machine learning and prediction of new material combinations with enhanced properties [185].…”
Section: Sidebar 15: Watching the Oxidation of Water in Photosystem II
confidence: 99%