2013
DOI: 10.1080/02664763.2013.868418
Procedures for the identification of multiple influential observations in linear regression

Cited by 25 publications (30 citation statements); references 24 publications.
“…Martin, Roberts, and Zheng (2010) implemented delete-2 and delete-3 jackknife procedures and noted that delete-j approaches for j ≥ 4 will become workable with increased computing power. Another approach to identifying multiple influential observations, proposed by Nurunnabi and Nasser (2011) and Nurunnabi, Hadi, and Imon (2014), is the development of "group" influence measures that can be applied to a group of observations suspected of being influential. In the approaches proposed by Nurunnabi et al., the suspected group of observations is assumed to be provided by the data analyst or identified by some method for multivariate outlier detection.…”
Section: Introduction
confidence: 99%
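The delete-j idea in this excerpt can be sketched as a brute-force scan over all size-j subsets, refitting without each subset and scoring the change in the fit. This is a minimal illustration, not the cited papers' exact statistic: the toy data and the score used here (change in fitted values, i.e. the numerator of a group Cook-type distance) are assumptions for demonstration.

```python
# Brute-force delete-j jackknife sketch (illustrative; not the cited papers'
# exact procedure). Each size-j subset is scored by how much the OLS fitted
# values change when the subset is deleted and the model is refit.
from itertools import combinations
import numpy as np

def delete_j_scores(X, y, j=2):
    """Map each size-j subset to ||X (beta_full - beta_without_subset)||^2."""
    beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)
    n = X.shape[0]
    scores = {}
    for subset in combinations(range(n), j):
        keep = np.setdiff1d(np.arange(n), subset)
        beta_sub, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        scores[subset] = float(np.sum((X @ (beta_full - beta_sub)) ** 2))
    return scores

# Toy data: y = x, with two jointly influential points at indices 8 and 9.
x = np.arange(10, dtype=float)
X = np.column_stack([np.ones(10), x])   # intercept + slope design
y = x.copy()
y[8] += 5.0
y[9] += 5.0

scores = delete_j_scores(X, y, j=2)
most_influential = max(scores, key=scores.get)
print(most_influential)  # (8, 9)
```

The O(C(n, j)) refits in the scan are exactly why the excerpt notes that delete-j for j ≥ 4 awaits increased computing power.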
“…However, an excessively large value of ϵ can create swamping. Masking occurs when an outlying case goes unidentified and is misclassified as a good one, and swamping occurs when regular observations are incorrectly identified as outliers [15,43]. Experience with MLS data reveals that, generally, the majority (more than 50%) of points within a local neighbourhood are inliers.…”
Section: Proposed Methods for Outlier Detection and Robust Saliency F…
confidence: 96%
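Masking, as defined in the excerpt, can be shown with a tiny numeric example (assumed data, not from the paper): two identical outliers inflate the sample standard deviation enough that a naive |z| > 2 rule flags neither, while the same rule does flag a single outlier.

```python
# Illustration of masking with an assumed toy dataset: two identical
# outliers inflate the sample standard deviation so that a naive
# |z| > 2 rule flags neither of them.
import numpy as np

def zscore_outliers(x, cut=2.0):
    """Indices whose absolute z-score exceeds the cutoff."""
    z = (x - x.mean()) / x.std(ddof=1)
    return np.flatnonzero(np.abs(z) > cut)

one = np.array([0., 0., 0., 0., 0., 100.])        # one outlier
two = np.array([0., 0., 0., 0., 0., 100., 100.])  # two outliers mask each other

print(zscore_outliers(one))  # [5] -- the lone outlier is flagged
print(zscore_outliers(two))  # []  -- both outliers are masked
```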
“…Hido et al. [26] pointed out that the solutions of the One-class Support Vector Machine (OSVM) and SVDD depend heavily on the choice of the tuning parameters, and there seems to be no reasonable method to fix their values appropriately. Several survey papers [15,18,20,43] published in the last decade have explored a variety of algorithms covering the full range of statistics, machine learning and data mining techniques. Hodges and Austin [18] concluded that there is no single universally applicable or generic outlier detection approach.…”
Section: Outlier Detection and Robust Methods
confidence: 99%
“…The twice-the-mean and thrice-the-mean rules on the diagonal elements of the hat matrix have been reported in the literature for identifying leverage points. Reference [23] mentions Cook's distance and Welsch and Kuh's distance for detecting and identifying a single leverage point. The Mahalanobis distance based on the projection pursuit algorithm for the minimum volume ellipsoid cannot be applied to sparse systems.…”
Section: Detection of Leverage and Bad Data Points
confidence: 99%
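The two diagnostics named in this excerpt can be sketched directly (the toy data here is an assumption for illustration): the twice-the-mean rule flags points whose hat-matrix diagonal exceeds 2p/n, and Cook's distance combines each point's residual and leverage into a single influence score.

```python
# Sketch of the leverage and influence diagnostics mentioned above,
# on assumed toy data: twice-the-mean rule on the hat-matrix diagonals,
# plus Cook's distance.
import numpy as np

def hat_diagonals(X):
    """Diagonal of the hat matrix H = X (X'X)^{-1} X'."""
    return np.einsum('ij,ij->i', X @ np.linalg.inv(X.T @ X), X)

def cooks_distance(X, y):
    """D_i = e_i^2 h_ii / (p s^2 (1 - h_ii)^2) for an OLS fit."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    h = hat_diagonals(X)
    s2 = e @ e / (n - p)
    return e**2 * h / (p * s2 * (1 - h)**2)

x = np.array([1., 2., 3., 4., 10.])         # last point is far out in x
X = np.column_stack([np.ones_like(x), x])
y = np.array([2.1, 3.9, 6.2, 8.0, 30.0])

h = hat_diagonals(X)
n, p = X.shape
print(np.flatnonzero(h > 2 * p / n))        # twice-the-mean rule flags index 4
print(int(np.argmax(cooks_distance(X, y))))  # index 4 also has the largest D_i
```

Note that the hat diagonals depend on X alone, so the twice-the-mean rule flags leverage regardless of y, while Cook's distance also requires the response.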
“…will not be able to identify the real high leverage points. Nurunnabi, Hadi and Imon [23] have used a modified Cook's distance, and Habshah, Norazan and Imon [24] have proposed a robust diagnostic potential to address this issue. Reference [24] applied the technique to the Hawkins, Bradu and Kass data and Brownlee's stack loss data to illustrate the simultaneous identification of outliers (erroneous data) and high leverage points.…”
confidence: 99%