IJEACS 2022
DOI: 10.24032/ijeacs/0404/005

Stacked Generalization of Random Forest and Decision Tree Techniques for Library Data Visualization

Abstract: The volume of library data stored in modern research and statistics centers grows daily. As these databases expand exponentially over time, it becomes increasingly difficult to understand the behavior of the data and to interpret the relationships that exist between attributes. This exponential growth poses new organizational challenges: the conventional record-management infrastructure can no longer cope to give precise and deta…
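The paper's titular technique, stacked generalization over random-forest and decision-tree learners, can be illustrated with a minimal self-contained sketch. The toy threshold "learners", the two-feature data set, and the accuracy-weighted combiner below are illustrative assumptions, not the paper's actual models; the point is the structure: base learners fit on one split, combiner fit on the held-out split.

```python
# Minimal sketch of stacked generalization: base learners are fit on one
# half of the data, and the combiner is fit on the OTHER half -- data the
# base learners never saw. The threshold "learners" and toy data stand in
# for the paper's random forest and decision tree (illustrative only).

def fit_threshold(rows, labels, feature):
    """Learn the rule x[feature] >= t, choosing the best threshold t."""
    candidates = sorted({r[feature] for r in rows})
    def n_correct(t):
        return sum((r[feature] >= t) == y for r, y in zip(rows, labels))
    best_t = max(candidates, key=n_correct)
    return lambda r: int(r[feature] >= best_t)

def fit_stacked(rows, labels):
    k = len(rows) // 2
    # One base learner per feature, trained on the first half only.
    base = [fit_threshold(rows[:k], labels[:k], f) for f in (0, 1)]
    # Combiner: weight each base learner by its held-out accuracy
    # (a simple weighted-majority second level).
    holdout = list(zip(rows[k:], labels[k:]))
    weights = [sum(clf(r) == y for r, y in holdout) / len(holdout)
               for clf in base]
    def predict(r):
        score = sum(w * (1 if clf(r) else -1)
                    for w, clf in zip(weights, base))
        return int(score >= 0)
    return predict

data = [(0.1, 0.2), (0.8, 0.7), (0.2, 0.1),
        (0.9, 0.9), (0.3, 0.3), (0.7, 0.8)]
labels = [0, 1, 0, 1, 0, 1]
model = fit_stacked(data, labels)
print([model(r) for r in data])  # -> [0, 1, 0, 1, 0, 1]
```

In practice the base learners would be full random-forest and decision-tree models and the combiner a trained meta-classifier, but the split between base-training data and combiner-training data is the defining feature of the scheme.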

Cited by 2 publications (1 citation statement)
References 0 publications
“…Therefore, the combiner system is trained using data that was not used to train the base learners. The tough sum rule, simple majority voting, and weighted majority voting are commonly used procedures for combining ensemble classifiers due to their specific guarantees (Ziweritin, 2022). Ho (1995) demonstrated that when random forests are restricted to being sensitive to specific feature dimensions, they can improve their accuracy over time without overtraining.…”
Section: Stack Generalization
confidence: 99%
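The combining procedures named in this citation statement (a sum rule over class probabilities, simple majority voting, and weighted majority voting) are standard ensemble-combination rules. A plain-Python sketch, with made-up votes and weights purely for illustration:

```python
# Three common rules for combining the outputs of ensemble members.

def simple_majority(votes):
    """Predict 1 when more than half of the binary votes are 1."""
    return int(sum(votes) > len(votes) / 2)

def weighted_majority(votes, weights):
    """Like simple majority, but each voter's say is scaled by a
    weight (e.g. its held-out validation accuracy)."""
    score = sum(w if v else -w for v, w in zip(votes, weights))
    return int(score > 0)

def sum_rule(prob_vectors):
    """Each entry is one classifier's per-class probability vector;
    pick the class with the largest summed probability."""
    n_classes = len(prob_vectors[0])
    totals = [sum(p[c] for p in prob_vectors) for c in range(n_classes)]
    return totals.index(max(totals))

print(simple_majority([1, 0, 1]))                      # -> 1
print(weighted_majority([1, 0, 0], [0.9, 0.2, 0.3]))   # -> 1
print(sum_rule([[0.6, 0.4], [0.3, 0.7], [0.4, 0.6]]))  # -> 1
```

Note how weighted majority can overturn a simple-majority result: here the single dissenting voter carries enough weight (0.9 against 0.2 + 0.3) to decide the outcome.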