Data-Driven Time Discrete Models for Dynamic Prediction of the Hot Metal Silicon Content in the Blast Furnace—A Review
2013 · DOI: 10.1109/tii.2012.2226897

Cited by 107 publications (58 citation statements)
References 47 publications
“…The sampling time of most of these input variables is 1 minute. According to expert experience and correlation analysis, 30) the time difference between the silicon content and input variables can be selected. For example, the time difference between the silicon content and the top pressure is about 2 h; and the time difference between the silicon content and the gas permeability is about 1 h.…”
Section: Industrial Silicon Content Prediction (mentioning)
confidence: 99%
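The lag selection described in the statement above is essentially a cross-correlation analysis between the silicon-content series and each candidate input. The Python sketch below illustrates that idea only; the variable names and the synthetic data are placeholders, not taken from the cited work:

    import numpy as np

    def best_lag(x, y, max_lag):
        """Return the lag (in samples) at which x(t - lag) correlates most strongly with y(t)."""
        found, best_corr = 0, 0.0
        for lag in range(1, max_lag + 1):
            r = np.corrcoef(x[:-lag], y[lag:])[0, 1]   # compare shifted input with the target
            if abs(r) > abs(best_corr):
                found, best_corr = lag, r
        return found, best_corr

    rng = np.random.default_rng(0)
    top_pressure = rng.normal(size=200)                                    # hourly-averaged input (placeholder)
    si = 0.8 * np.roll(top_pressure, 2) + rng.normal(scale=0.3, size=200)  # silicon content lagging by ~2 steps
    print(best_lag(top_pressure, si, max_lag=6))                           # expected: lag near 2

On plant data, the series would first be resampled onto a common time grid (the inputs are logged every minute, the silicon content per cast) before lags are compared.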
“…A recent overview of black-box models for short-term silicon content prediction in blast furnaces can be referred to. 30) Without substantial understanding of the complicated phenomenology, the data-driven soft sensor models can be built in a quick manner. [30][31][32] Among them, SVR and LSSVR have shown promising prediction performance, especially when the training data are insufficient.…”
Section: Introduction (mentioning)
confidence: 99%
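As a rough illustration of such a data-driven soft sensor, the sketch below fits an epsilon-SVR from scikit-learn on synthetic lagged inputs. It is not the LSSVR formulation used in the cited papers, and all names and numbers are placeholders:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 5))                                           # lagged process variables (placeholder)
    y = 0.5 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.1, size=300)     # stand-in for silicon content

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
    model.fit(X[:200], y[:200])                                             # train on the first 200 samples
    rmse = np.sqrt(np.mean((model.predict(X[200:]) - y[200:]) ** 2))
    print("test RMSE:", rmse)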
“…Its main idea is to embed the inputs into a feature space through a high-dimension mapping, so that an optimal decision hyperplane can be found among the high-dimension embedded data points [15]. In order to find a decision rule with good generalization capability, the so-called support vectors (SVs), comprising a small subset of the training data, are selected to support the optimal hyperplane [16].…”
Section: Introduction (mentioning)
confidence: 99%
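A minimal sketch of that idea, using scikit-learn's SVC with an RBF kernel so the implicit feature-space mapping and the support-vector subset can be inspected (data and parameters are illustrative only):

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)   # labels not linearly separable in input space

    clf = SVC(kernel="rbf", C=1.0).fit(X, y)              # RBF kernel gives the implicit high-dimensional mapping
    print("training points:", len(X))
    print("support vectors:", clf.support_vectors_.shape[0])

Only the returned support vectors enter the decision function; the remaining training points can be discarded after fitting.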
“…Support vector machine (SVM) is a popular machine learning method. Its main idea is to embed the inputs into a feature space through a high-dimensional mapping, and then find an optimal decision hyperplane among the high-dimensional embedded data points [19]. The essence of SVM is the structural risk minimization principle.…”
Section: Introduction (mentioning)
confidence: 99%
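For reference, the structural risk minimization this statement alludes to is commonly formalized as the soft-margin primal problem below, written in standard SVM notation rather than reproduced from the cited paper:

    \min_{w,\,b,\,\xi}\ \tfrac{1}{2}\lVert w\rVert^{2} + C\sum_{i=1}^{n}\xi_{i}
    \quad\text{s.t.}\quad y_{i}\bigl(w^{\top}\phi(x_{i}) + b\bigr) \ge 1 - \xi_{i},\ \ \xi_{i} \ge 0,

where the \lVert w\rVert^{2} term penalizes model complexity (maximizes the margin) and C trades that penalty off against the empirical training error captured by the slack variables \xi_{i}.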