2023
DOI: 10.1016/j.ceja.2023.100545

On the hydrodynamics of macroporous structures: Experimental, CFD and artificial neural network analysis

A.J. Otaru, Z.A. Alhulaybi, T.A. Owoseni

Cited by 4 publications (2 citation statements)
References 26 publications

“…A preliminary assessment and sorting of the raw data from the TGA experiment was performed by selecting data points along the thermograms for the three different heating rates and considering the initial and final degradation temperatures (25 and 600 °C) used in the experiment. As discussed in [29-31], hidden neurons (layers) were introduced into the framework in Figure 2 to improve the convolution and non-linearity between the experimentally determined input and output signals.…”
Section: Machine Learning Backpropagation Neural Network and Data
confidence: 99%
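The passage above describes a feed-forward backpropagation network in which a hidden layer supplies the non-linearity between the experimental TGA inputs (heating rate, temperature) and the degradation output. A minimal NumPy sketch of such a network is given below; the hidden-layer width, learning rate, and toy data are illustrative assumptions, not the configuration used in the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Bounded activation, output in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for sampled thermogram points: columns are
# (heating rate, temperature), both pre-scaled to [0, 1];
# y is a scaled degradation (mass fraction) target.
X = rng.random((50, 2))
y = rng.random((50, 1))

n_hidden = 8                                      # assumed hidden-layer width
W1 = rng.normal(0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
lr = 0.5                                          # assumed learning rate

for epoch in range(2000):
    # Forward pass: hidden layer, then output prediction in (0, 1)
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y
    # Backward pass: gradients of the squared error through both layers
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(0)

print("final MSE:", float((err ** 2).mean()))
```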
“…There are other non-linear activation functions, such as the hyperbolic tangent (TanH), rectified linear unit (ReLU), and exponential linear unit (ELU). However, the Sigmoid activation function was chosen for its ability to map data points into the range 0 to 1 through changes in arbitrary constants during training [20, 24, 26]. For this reason, the input and output data points were divided by the maximum value possible for a TGA experiment as part of data preparation before training.…”
Section: Machine Learning Algorithms and Data
confidence: 99%
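This second passage describes the data-preparation step: dividing each channel by its maximum possible experimental value so that inputs and targets fall in [0, 1], matching the (0, 1) output range of the sigmoid. The short sketch below illustrates that scaling; the maxima and sample values are assumed placeholders, not the cited experiment's figures.

```python
import numpy as np

T_MAX = 600.0        # final degradation temperature of the TGA run (°C)
BETA_MAX = 20.0      # assumed maximum heating rate (°C/min)
MASS_MAX = 100.0     # mass retention expressed in percent

temperature = np.array([25.0, 150.0, 300.0, 450.0, 600.0])
heating_rate = np.array([5.0, 5.0, 10.0, 10.0, 20.0])
mass_percent = np.array([100.0, 97.5, 80.2, 45.1, 12.3])

# Divide by the maximum possible value so every feature and the target
# lie in [0, 1] before training, as the quoted statement describes.
X = np.column_stack([temperature / T_MAX, heating_rate / BETA_MAX])
y = mass_percent / MASS_MAX

def sigmoid(z):
    # Maps any real activation into (0, 1), the same range as the scaled y
    return 1.0 / (1.0 + np.exp(-z))

print(X.min(), X.max(), y.min(), y.max())  # all within [0, 1]
```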