2020
DOI: 10.33039/ami.2020.10.001

Fuzzification of training data class membership binary values for neural network algorithms

Cited by 5 publications (4 citation statements) | References 0 publications
“…To further advance the field and enhance model performance, future research could explore the following developments: employing segmentation techniques to refine the input data, exploring the potential benefits of using fuzzification techniques for refining binary class membership values during model training, investigating alternative ensemble methods that may provide additional benefits, experimenting with different datasets to assess the robustness of the models, and incorporating various machine learning models, such as recurrent neural networks, to address the specific challenges of the task [4,11,12]. While the results presented in this study are promising, it is important to conduct additional research and analysis to fully comprehend the behavior and potential of these models, as this understanding can ultimately lead to more accurate and reliable methods.…”
Section: Discussion
confidence: 99%
“…The scientific contribution of this work is based on a set of original and improved methods and models for estimating the effort and cost of developing software projects. Through the presented approach, improved methods based on existing models will be developed using different data sets, clustering methods [25], and fuzzification methods [26,27] for an ANN architecture constructed from Taguchi's orthogonal vector plan with different activation functions [28,29]. In addition, various metrics were calculated against a set of criteria, such as mean absolute error (MAE), magnitude of relative error (MRE), mean magnitude of relative error (MMRE) [30], prediction at level l (PRED) [31,32], and correlation (Pearson's, Spearman's, and R 2 ).…”
Section: Related Work
confidence: 99%
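The effort-estimation metrics named in the passage above (MMRE and PRED) can be sketched as follows. The function names, the sample values, and the common 0.25 acceptance threshold for PRED are illustrative assumptions, not taken from the cited work:

```python
import numpy as np

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error: average of |actual - predicted| / actual."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mre = np.abs(actual - predicted) / actual
    return mre.mean()

def pred(actual, predicted, level=0.25):
    """PRED(level): fraction of estimates whose relative error is within `level`."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mre = np.abs(actual - predicted) / actual
    return (mre <= level).mean()
```

For example, with actual efforts [100, 200] and estimates [90, 260], the relative errors are 0.1 and 0.3, so MMRE is 0.2 and PRED(0.25) is 0.5.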
“…It is also possible to fuzzify the binary class membership values. These matters are discussed in detail in the context of neural network algorithms in [34], where it is convincingly asserted that machine learning can "think" between false and true.…”
Section: Classification vs. Regression
confidence: 99%
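The idea of fuzzifying binary class membership values, as described in the passage above, can be sketched with a simple smoothing transform. The exact method used in [34] is not given here; this is a hypothetical label-smoothing-style example in which hard {0, 1} targets are mapped to membership degrees strictly between false and true:

```python
import numpy as np

def fuzzify_labels(y, epsilon=0.1):
    """Soften hard {0, 1} class memberships into fuzzy membership degrees.

    Maps 0 -> epsilon / 2 and 1 -> 1 - epsilon / 2, so a neural network is
    trained on targets that lie between 'false' and 'true' rather than at
    the extremes.
    """
    y = np.asarray(y, dtype=float)
    return y * (1.0 - epsilon) + epsilon / 2.0

# Hard binary targets [0, 1, 1, 0] become fuzzy degrees near 0.05 and 0.95.
fuzzy = fuzzify_labels([0, 1, 1, 0], epsilon=0.1)
```

Training against such softened targets penalizes over-confident predictions, which is one way a classifier can be made to "think" between false and true.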