2008 11th International Conference on Computer and Information Technology
DOI: 10.1109/iccitechn.2008.4803050

Density based clustering technique for efficient data mining

Cited by 4 publications (3 citation statements) | References 7 publications
“…Then the issue of dataset size is addressed by using feature selection to reduce the Kyoto 2006+ dataset from 24 to nine numerical features recognized as the most relevant for classifier evaluation. The literature implies that users who are knowledgeable about their dataset can select features that meet some criteria based on their knowledge and experience [49,50]. According to this principle, the feature selection proposed in this paper is performed as follows: (1) remove all categorical features (17 features are left for model training: 1, 3-17, 24); (2) cut out all statistical features and the features planned for further analyses.…”
Section: Results
Mentioning confidence: 99%
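The excerpt above describes a purely index-based filter on the Kyoto 2006+ dataset: drop the categorical columns to leave 17 features (1, 3-17, 24), then drop the statistical and reserved columns to arrive at nine numerical features. Below is a minimal sketch of that kind of two-step filtering. The file path, column naming, and the exact indices dropped in step 2 are assumptions for illustration; the excerpt does not specify them.

```python
import pandas as pd

# Hypothetical path to a Kyoto 2006+ dump; the dataset is distributed
# as tab-separated session records (the filename here is an assumption).
df = pd.read_csv("kyoto2006plus.txt", sep="\t", header=None)
df.columns = [f"f{i}" for i in range(1, 25)]  # 24 features, 1-indexed names

# Step 1 (from the excerpt): drop all categorical features, keeping the
# 17 columns reported for model training: 1, 3-17, 24.
kept_after_step1 = [1] + list(range(3, 18)) + [24]
df = df[[f"f{i}" for i in kept_after_step1]]

# Step 2 (from the excerpt): cut the statistical features and the features
# reserved for later analyses. WHICH columns those are is not stated in
# the excerpt, so the indices below are placeholders only.
statistical_or_reserved = [3, 4, 5, 6, 7, 8, 9, 10]  # assumption
df = df.drop(columns=[f"f{i}" for i in statistical_or_reserved])

assert df.shape[1] == 9  # nine numerical features remain
```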
“…Table 4 shows a review of the research topics from 2018 to 2022 [12,24,85,91,96-102,106]. According to the findings in [107,108], users who gain knowledge about the datasets can choose features that meet specific criteria as a result of their experience.…”
Section: Binary Classification
Mentioning confidence: 99%
“…Table 4 shows a review of the research topics from 2018 to 2022 [12,24,85,91,96-102,106]. According to the findings in [107,108], users who gain knowledge about the datasets can choose features that meet specific criteria as a result of their experience. Likewise, feature selection is performed here to eliminate the statistical, connection-duration, and categorical features from the Kyoto 2006+ dataset, along with the features intended to be used in further experiments.…”
Section: Binary Classification
Mentioning confidence: 99%