2010
DOI: 10.1504/ijmmm.2010.034486
A review of artificial intelligent approaches applied to part accuracy prediction

Abstract: Nowadays, despite the large volume of worldwide academic research on various aspects of metal cutting, the control of workpiece precision still relies on the machine-tool operator's experience and trial-and-error runs. To increase the efficiency of machining systems, many empirical models based on Artificial Intelligence (AI) approaches have been proposed in the past, and important process improvements have been reported. This paper overviews the AI approaches applied in machining operations to predict part accuracy…

Cited by 5 publications (4 citation statements) · References 22 publications
“…There are different ways to divide the dataset between training and test subsets: the leave-one-out technique (Hall et al 2009), 5 × 2 cross validation (Kohavi 1995) and the 10 × 10 cross validation scheme (Kohavi 1995). Different authors have pointed out that the 10 × 10 cross validation scheme is the most suitable strategy for datasets of the size of the one presented in this research (Sick 2002; Abellan-Nebot and Romero Subirón 2010; Teixidor et al 2015; Maudes et al 2017; Oleaga et al 2018), while the leave-one-out technique is especially suited to smaller datasets (Maudes et al 2017) and the 5 × 2 cross validation scheme can be the best solution for larger datasets (Santos et al 2018). In the 10 × 10 cross validation scheme, the dataset is randomly divided into 10 subsets of the same size.…”
Section: Cross Validation
confidence: 90%
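The 10 × 10 scheme described in the passage above (ten independent random partitions of the dataset into ten equal folds, each fold used once as the test set) can be sketched in plain Python. This is an illustrative sketch, not code from any of the cited papers; `train_and_score` is a hypothetical user-supplied callable.

```python
import random

def ten_by_ten_cv(dataset, train_and_score, repeats=10, folds=10, seed=0):
    """10 x 10 cross validation: repeat a random 10-fold split 10 times.

    `train_and_score(train, test)` is any callable (an assumption of this
    sketch) that fits a model on `train` and returns its score on `test`.
    Returns the mean score over the repeats * folds (= 100) runs.
    """
    rng = random.Random(seed)
    scores = []
    for _ in range(repeats):
        indices = list(range(len(dataset)))
        rng.shuffle(indices)                       # new random partition each repeat
        fold_size = len(dataset) // folds
        for k in range(folds):
            test_idx = set(indices[k * fold_size:(k + 1) * fold_size])
            test = [dataset[i] for i in test_idx]
            train = [dataset[i] for i in range(len(dataset)) if i not in test_idx]
            scores.append(train_and_score(train, test))
    return sum(scores) / len(scores)
```

Averaging over 100 train/test runs is what makes the scheme attractive for medium-sized datasets: each instance appears in a test fold ten times, so the score estimate is far less sensitive to a single unlucky split than one hold-out run.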
“…It is important to highlight that SMOTE in this implementation only increases the number of instances in one class (the least-populated one) by duplicating it. Therefore, if many classes contain few instances, the SMOTE algorithm should be applied in more than one iteration, and more than one iteration may be needed on the same class if a single duplication is not enough. The choice of metrics to evaluate a model's performance is always an open issue in machine-learning techniques and should always be oriented to the industrial use of the prediction models (Benardos and Vosniakos 2003; Abellan-Nebot and Romero Subirón 2010). For example, in the industrial detection of failures of a certain machine, a very high accuracy of a classification model, in terms of well-classified instances in the validation subset, could be a disaster if the model is not able to predict the failure class, because it is optimized for a training dataset in which very few instances of the failure class are included.…”
Section: SMOTE and Metrics for Imbalanced Datasets
confidence: 99%
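Both points in the passage above can be made concrete with a short sketch using toy labels (assumed here, not data from the cited work): a duplication-based oversampling step that balances one minority class per call, and the gap between overall accuracy and failure-class recall on an imbalanced dataset.

```python
from collections import Counter

def duplicate_minority(labels):
    """Oversample the least-populated class by duplicating its instances
    until it matches the majority class. Balances one class per call, so
    multi-class imbalance needs repeated iterations, as noted above."""
    counts = Counter(labels)
    minority, n_min = min(counts.items(), key=lambda kv: kv[1])
    n_max = max(counts.values())
    return labels + [minority] * (n_max - n_min)

# Accuracy can hide a useless failure detector on imbalanced data:
y_true = ["ok"] * 95 + ["failure"] * 5
y_pred = ["ok"] * 100                      # model that never predicts "failure"
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
failure_recall = (
    sum(t == p == "failure" for t, p in zip(y_true, y_pred))
    / y_true.count("failure")
)
# accuracy is 0.95 even though failure_recall is 0.0 — exactly the
# industrial disaster scenario the passage describes.
```

Classical SMOTE interpolates synthetic neighbours rather than duplicating rows; the duplication variant above mirrors the specific implementation the passage describes.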
“…Unfortunately, ANN models are highly dependent on the parameters of the neural networks (Bustillo et al 2011), and fine-tuning these parameters is a highly time-consuming task that frequently requires expertise for good results. Moreover, studies on surface-roughness prediction in face milling are scarce compared with the large number of studies focused on this prediction task for other milling operations, as emphasized in reviews of this domain (Chandrasekaran et al 2010; Abellan-Nebot 2010). This imbalance is perhaps because face milling requires very extensive machining tests to provide datasets, compared with processes that demand less power and torque; consequently, the size of these datasets ranges between 50 and 250 instances (Grzenda and Bustillo 2013).…”
Section: Introduction
confidence: 99%