2019
DOI: 10.1016/j.procir.2019.03.078
Characterizing Strip Snap in Cold Rolling Process Using Advanced Data Analytics

Cited by 12 publications (17 citation statements); References 27 publications.
“…Data modellers have to choose whether to proceed with the full set or a reduced set, depending on the prediction accuracy of using the reduced set in comparison to using the full set. In a very large dataset with thousands of features, it is good practice to exclude non-informative and irrelevant features first [39] as this can enhance the feature selection process. Other pre-processing tasks that enhance feature selection modelling include using clean instances of the dataset [40] and data type transformation, such as normalization, discretization and nominalization, as would normally be fulfilled prior to classification, clustering and association modelling, for example.…”
Section: Feature Selection Modelling (mentioning, confidence: 99%)
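The citation statement above describes two pre-processing steps that precede feature selection: excluding non-informative features, and transforming the remaining data (e.g. normalization) before modelling. As a minimal illustrative sketch, not the cited authors' actual method, the snippet below drops zero-variance (non-informative) columns and then min-max normalizes the rest; the function names are hypothetical.

```python
def drop_constant_features(rows):
    """Remove columns whose values never vary (non-informative features)."""
    keep = [j for j in range(len(rows[0]))
            if len({row[j] for row in rows}) > 1]
    return [[row[j] for j in keep] for row in rows], keep

def min_max_normalize(rows):
    """Scale each remaining column to the [0, 1] range (normalization)."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) for v, l, h in zip(row, lo, hi)]
            for row in rows]

# Toy dataset: the first column is constant, so it carries no information.
data = [
    [1.0, 5.0, 3.0],
    [1.0, 7.0, 9.0],
    [1.0, 6.0, 6.0],
]
reduced, kept = drop_constant_features(data)  # column 0 is excluded
normalized = min_max_normalize(reduced)       # remaining columns scaled to [0, 1]
```

In a real pipeline with thousands of features, a library routine (e.g. a variance-threshold filter) would replace the hand-rolled loop, but the ordering is the point: exclude irrelevant features first, then transform, then run feature selection and the downstream classification or clustering model.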