2013
DOI: 10.1016/j.tca.2012.10.022
Development of an artificial neural network model for the prediction of hydrocarbon density at high-pressure, high-temperature conditions

Cited by 50 publications (21 citation statements) · References 34 publications
“…Table 3 gives the evaluation results of the models in terms of these criteria. It is important to note that ANN predictions are optimal when RMSE and MAPE are close to 0 and R² is close to 1 (Haghbakhsh et al 2013). According to Table 3, the RMSE values for the test phase were 0.308 and 0.491 in predicting volumetric swelling and shrinkage of the samples, respectively.…”
Section: Modeling Results
confidence: 96%
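The evaluation criteria named in the excerpt (RMSE, MAPE, R²) can be sketched in a few lines. This is a generic illustration of the standard formulas, not code from the cited work:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error; 0 indicates a perfect fit."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred):
    """Mean absolute percentage error; 0 indicates a perfect fit."""
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

def r2(y_true, y_pred):
    """Coefficient of determination; 1 indicates a perfect fit."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot
```

A model is judged good when RMSE and MAPE approach 0 while R² approaches 1, exactly as the excerpt states.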
“…Among the many kinds of ANNs, the multi-layer perceptron (MLP) is the most widely used. It is a feed-forward architecture capable of mapping a set of inputs onto a set of appropriate outputs (Haghbakhsh et al 2013). The MLP architecture comprises a combination of input, hidden and output layers.…”
Section: Artificial Neural Networks (ANNs)
confidence: 99%
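The feed-forward mapping described above can be sketched as a single forward pass through one hidden layer. The layer sizes and random weights here are purely illustrative, not those of the cited model:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical layer sizes: 4 inputs, 8 hidden units, 1 output
n_in, n_hidden, n_out = 4, 8, 1

# Randomly initialized weights and biases (untrained, for illustration)
W1 = rng.standard_normal((n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, n_out))
b2 = np.zeros(n_out)

def mlp_forward(x):
    """Map inputs to outputs: input layer -> hidden layer -> output layer."""
    h = sigmoid(x @ W1 + b1)   # hidden layer activations
    return h @ W2 + b2         # linear output layer

x = rng.standard_normal((3, n_in))  # batch of 3 input vectors
out = mlp_forward(x)                # shape (3, 1)
```

Training would adjust `W1`, `b1`, `W2`, `b2` (e.g. by back-propagation) so the mapping approximates the target outputs.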
“…Commonly, 80% of the data is used for training and the remaining 20% is divided into 10% for optimization and 10% for testing [44,45]. It should also be mentioned that an optimization algorithm is devised to obtain the optimum values of kernel parameters such as σ² during the internal SVM calculations.…”
Section: SVM Simulations
confidence: 99%
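The 80/10/10 split and the σ² selection described above can be sketched on toy data. Everything here is an assumption for illustration: the dataset is synthetic, and kernel ridge regression with an RBF kernel stands in for the SVM solver, since both share the σ² width parameter that is tuned on the optimization set:

```python
import numpy as np

# Hypothetical toy data standing in for the experimental dataset
rng = np.random.default_rng(42)
X = rng.standard_normal((100, 3))
y = X.sum(axis=1)

# 80% train / 10% optimization / 10% test split
idx = rng.permutation(len(X))
n_train = int(0.8 * len(X))
n_opt = int(0.1 * len(X))
train, opt, test = np.split(idx, [n_train, n_train + n_opt])

def rbf_kernel(A, B, sigma2):
    """Gaussian (RBF) kernel with width parameter sigma^2."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d / (2.0 * sigma2))

def fit_predict(sigma2, X_fit, y_fit, X_eval):
    # Kernel ridge regression as a simple stand-in for SVM regression
    K = rbf_kernel(X_fit, X_fit, sigma2) + 1e-6 * np.eye(len(X_fit))
    alpha = np.linalg.solve(K, y_fit)
    return rbf_kernel(X_eval, X_fit, sigma2) @ alpha

# Pick sigma^2 on the optimization set, then score once on the test set
best_s2, best_rmse = None, np.inf
for s2 in (0.1, 1.0, 10.0):
    pred = fit_predict(s2, X[train], y[train], X[opt])
    err = np.sqrt(np.mean((pred - y[opt]) ** 2))
    if err < best_rmse:
        best_s2, best_rmse = s2, err

test_pred = fit_predict(best_s2, X[train], y[train], X[test])
test_rmse = np.sqrt(np.mean((test_pred - y[test]) ** 2))
```

The key point the excerpt makes is the separation of roles: σ² is chosen using only the optimization subset, and the held-out test subset is touched exactly once for the final error estimate.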