2023
DOI: 10.2298/jmmb220523006m

Effect of water vapor on the reduction kinetics of hematite powder by hydrogen-water vapor in different stages

Abstract: Hematite powder samples were isothermally reduced with hydrogen-water vapor gas mixtures at 1023-1273 K. The results indicated that the overall reduction of hematite could be divided into three stages (Fe2O3-Fe3O4-FeO-Fe), each studied separately. At 1023 K, the average reaction rate dropped by 53.6% in stage 1 when the water vapor content of the gas reactant rose from 0% to 50%, and it decreased by about 77.2% in stage 2. However, in stage 3, when the water vapor content on…

Cited by 3 publications (2 citation statements)
References 41 publications
“…In ML models, two types of parameters are present: one type, known as model parameters, can be initialized and updated through the data-learning process (e.g., the weights of neurons in neural networks); the other type, referred to as hyperparameters, cannot be directly estimated from the data and must be set before training an ML model, as they define the architecture of the model. [71,72] Typically, in the context of hyperparameter optimization (HPO), the goal is to achieve the following objective: [73]

$x^* = \arg\min_{x \in X} f(x)$ (23)

That is, find the value of x (denoted x*) that minimizes the objective function f(x); f(x) can represent various metrics, and in this research it is the mean squared error (MSE). x* represents the hyperparameter configuration that yields the best possible value of f(x), and each hyperparameter x can assume any value within the defined search space X.…”
Section: Hyperparameter Optimization Approach (mentioning)
confidence: 99%
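The objective in Eq. (23) reduces, in this context, to minimizing a cross-validated MSE over a search space of hyperparameter values. A minimal sketch of such a search, assuming scikit-learn's GridSearchCV; the estimator, parameter grid, and synthetic data below are illustrative assumptions, not taken from the cited work:

```python
# Sketch of the HPO objective x* = arg min_{x in X} f(x), with f(x) taken as
# the cross-validated mean squared error (MSE) of a model trained with
# hyperparameter configuration x. Estimator, grid, and data are hypothetical.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

X_data, y = make_regression(n_samples=200, n_features=5, noise=0.1,
                            random_state=0)

# Search space X: each hyperparameter x may take any value listed here.
param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [3, 5, None],
}

# scoring="neg_mean_squared_error" makes GridSearchCV maximize -MSE,
# i.e. minimize the objective f(x).
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid,
    scoring="neg_mean_squared_error",
    cv=5,
)
search.fit(X_data, y)

print("x* =", search.best_params_)     # arg min of f(x) over the grid
print("f(x*) =", -search.best_score_)  # lowest cross-validated MSE found
```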
“…Extensive research has been conducted on the parameters influencing the reduction of iron oxide. Among these parameters, particular emphasis has been placed on the composition of H2 [21] and CO [22]; gas mixtures (H2–CO, [23] H2–H2O, [24] H2–inert gas, [25] CO–CO2, [26] H2–CO–H2O–CO2 [27]); temperature variations spanning low (below 600 °C), [28] moderate (near 600 °C), [21] and high (over 600 °C) [29] ranges; pressure conditions; [30] porosity levels; [31] gangue content (SiO2, [32] CaO, [33] MnO, [34] CaCO3, [35] Al2O3 [34]); pellet radius; [36] and volume flow rates. [21] Ghadi et al. [37] identified particle size and pressure as the most influential parameters, whereas Cavaliere et al. [38] emphasized the critical significance of temperature.…”
Section: Introduction (mentioning)
confidence: 99%