2023
DOI: 10.3390/pr11072026
Artificial Neural Networks (ANNs) for Vapour-Liquid-Liquid Equilibrium (VLLE) Predictions in N-Octane/Water Blends

Esteban Lopez-Ramirez,
Sandra Lopez-Zamora,
Salvador Escobedo
et al.

Abstract: Blends of bitumen, clay, and quartz in water are obtained from the surface mining of the Athabasca Oil Sands. To facilitate its transportation through pipelines, this mixture is usually diluted with locally produced naphtha. As a result, the naphtha has to be recovered later in a naphtha recovery unit (NRU). The NRU process is a complex one and requires knowledge of Vapour-Liquid-Liquid Equilibrium (VLLE) thermodynamics. The present study uses experimental data, obtained in a CREC-VL-Cell, and Artific…

Cited by 5 publications
(2 citation statements)
References 25 publications
“…Moreover, each neuron or node in the network is equipped with a bias term that plays a role in shaping the behavior of the individual neurons and contributes to the overall network performance. As described by [39], the weights and biases of a FNN are adjusted via forward and backward calculations during the training phase of the ANN. This can be performed iteratively until a predefined loss function is minimized [37].…”
Section: Artificial Neural Network (Ann)mentioning
confidence: 99%
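The statement above describes the standard FNN training loop: each neuron carries a bias term, and weights and biases are adjusted via forward and backward calculations, iterated until a loss function is minimized. A minimal sketch of that loop, assuming a one-hidden-layer network with sigmoid activation, a mean-squared-error loss, and toy data (none of the sizes, data, or learning rate here come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: learn y = x1 + x2 (illustrative only)
X = rng.random((200, 2))
y = X.sum(axis=1, keepdims=True)

# Weights, plus a bias term per neuron, as the citation statement notes
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for epoch in range(2000):
    # Forward calculation: inputs -> hidden layer -> output
    h = sigmoid(X @ W1 + b1)
    y_hat = h @ W2 + b2
    loss = np.mean((y_hat - y) ** 2)   # predefined loss function (MSE)
    losses.append(loss)

    # Backward calculation: gradients of the loss w.r.t. weights and biases
    d_out = 2.0 * (y_hat - y) / len(X)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0, keepdims=True)
    d_h = (d_out @ W2.T) * h * (1.0 - h)   # sigmoid derivative
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0, keepdims=True)

    # Iterative adjustment of weights and biases
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"initial loss {losses[0]:.4f} -> final loss {losses[-1]:.4f}")
```

The loop stops here after a fixed number of epochs; in practice the iteration would instead terminate once the loss falls below a tolerance, matching the "until a predefined loss function is minimized" criterion in the statement.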
“…The various ANN hyperparameters adopted in the present study, to develop a multiple output FNN, are reported in Table 4. As described by [39], the weights and biases of a FNN are adjusted via forward and backward calculations during the training phase of the ANN. This can be performed iteratively until a predefined loss function is minimized [37].…”
Section: Artificial Neural Network (Ann)mentioning
confidence: 99%