2023
DOI: 10.1016/j.cossms.2023.101057
Rational design of high-entropy ceramics based on machine learning – A critical review

Cited by 18 publications (5 citation statements) | References 107 publications
“…To study the synthesizability of HEB 6 , seven commonly used ML models were applied: k-nearest neighbors (KNN), decision trees (DT), logistic regression (LR), Gaussian Naive Bayes (GNB), support vector machine with a linear kernel (SVM.linear), support vector machine with a polynomial kernel (SVM.poly), and support vector machine with a radial basis function kernel (SVM.rbf). 31 The training dataset was collected from the results of high-throughput synthesis experiments on 100 equimolar quinary HEB 6 samples (see Table S1) and 20 features based on the fundamental parameters of the constituent AEEs, REEs, and hexaborides (see Table 1). It should be noted that the single- and multi-phase HEB 6 samples were labeled "1" and "0", respectively, for the ML classification.…”
Section: Machine Learning Framework
confidence: 99%
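The seven-model comparison described in this snippet can be sketched with scikit-learn. This is not the cited authors' code: the data below are a synthetic stand-in generated to match only the stated shape of the problem (100 samples, 20 features, binary single-/multi-phase labels).

```python
# Sketch, assuming scikit-learn defaults: the seven classifier families
# named in the text, scored by 5-fold cross-validation on synthetic data
# shaped like the HEB6 dataset (100 samples x 20 features, labels 0/1).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Placeholder data, NOT the experimental HEB6 measurements.
X, y = make_classification(n_samples=100, n_features=20, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "GNB": GaussianNB(),
    "SVM.linear": SVC(kernel="linear"),
    "SVM.poly": SVC(kernel="poly"),
    "SVM.rbf": SVC(kernel="rbf"),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.2f}")
```

Hyperparameters (kernel degree, number of neighbors, etc.) are left at library defaults here; the cited study would have tuned them against its own data.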
“…Figure a illustrates the ML workflow for forward prediction, which comprises data collection, database organization, feature engineering, model training, application, and experimental validation. [ 158 ] Data collection is arguably the most important step, as the performance of the ML model depends heavily on the quality and quantity of the data. Data can be obtained from publicly available resources or be self-generated, e.g., by carrying out experiments or numerical simulations.…”
Section: Challenges Ahead: Identifying and Addressing Problems
confidence: 99%
“…According to how they are obtained, the input features can be classified into three categories: atomic/precursor-based descriptors, experience-independent features, and characteristics acquired by high-throughput simulation. 41 Here, to avoid time-consuming and expensive high-throughput calculations, the first two types of features were selected to form the feature space, where the first type of descriptors consists of the anion-to-cation radius ratio ($r_A/r_C$), difference in Pauling electronegativity ($\Delta x_{\mathrm{Pauling}}$), 42 difference in Mulliken electronegativity ($\Delta x_{\mathrm{Mulliken}}$), 43 entropy of mixing ($\Delta S_{\mathrm{mix}}$), 44 and atomic size mismatch ($\Delta\delta$), 42 with the corresponding calculation formulas as follows 31,45 :…”
Section: Data Collection and Feature Selection
confidence: 99%
“…In this work, the dataset consists of 112 experimental data points, which were summarized by Akrami et al., 2 as can be seen in Table S1. According to how they are obtained, the input features can be classified into three categories: atomic/precursor-based descriptors, experience-independent features, and characteristics acquired by high-throughput simulation. 41 Here, to avoid time-consuming and expensive high-throughput calculations, the first two types of features were selected to form the feature space, where the first type of descriptors consists of the anion-to-cation radius ratio ($r_A/r_C$), difference in Pauling electronegativity ($\Delta x_{\mathrm{Pauling}}$), 42 difference in Mulliken electronegativity ($\Delta x_{\mathrm{Mulliken}}$), 43 entropy of mixing ($\Delta S_{\mathrm{mix}}$), 44 and atomic size mismatch ($\Delta\delta$), 42 with the corresponding calculation formulas as follows 31,45 :

$$r_A/r_C = \frac{\text{Radius of anion}}{\text{Radius of cation}},$$

$$\Delta x_{\mathrm{Pauling}} = \sqrt{\sum_{i=1}^{n} c_i \left( x_i - \bar{x}_p \right)^2},$$
…”
Section: Machine Learning
confidence: 99%
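The two descriptor formulas quoted in this snippet can be written as small functions. This is a minimal sketch of the quoted equations only; the mole fractions and electronegativity values used in the example are illustrative placeholders, not data from the cited paper.

```python
# Sketch of the quoted feature formulas:
#   r_A / r_C       = (radius of anion) / (radius of cation)
#   Δx_Pauling      = sqrt( Σ_i c_i (x_i - x̄_p)^2 ),  x̄_p = Σ_i c_i x_i
import math

def radius_ratio(r_anion, r_cation):
    """Anion-to-cation radius ratio r_A / r_C."""
    return r_anion / r_cation

def delta_x_pauling(fractions, electronegativities):
    """Composition-weighted standard deviation of Pauling electronegativity."""
    # x̄_p: mole-fraction-weighted mean electronegativity of the cations.
    x_bar = sum(c * x for c, x in zip(fractions, electronegativities))
    return math.sqrt(sum(c * (x - x_bar) ** 2
                         for c, x in zip(fractions, electronegativities)))

# Equimolar five-cation example with placeholder electronegativities.
c = [0.2] * 5
x = [1.10, 1.12, 1.13, 1.14, 1.17]
print(delta_x_pauling(c, x))
```

The Mulliken-scale analogue ($\Delta x_{\mathrm{Mulliken}}$) follows the same weighted-deviation form with a different electronegativity table substituted for `x`.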