Year: 2007
DOI: 10.1111/j.1751-5823.2005.tb00252.x
Automatic Editing for Business Surveys: An Assessment of Selected Algorithms

Abstract: Statistical offices are responsible for publishing accurate statistical information about many different aspects of society. This task is complicated considerably by the fact that data collected by statistical offices generally contain errors. These errors have to be corrected before reliable statistical information can be published. This correction process is referred to as statistical data editing. Traditionally, data editing was mainly an interactive activity with the aim of correcting all data in every detail…

Cited by 11 publications (6 citation statements). References 45 publications.
“…According to this paradigm, the data of a record should be made to satisfy all edits by changing the values of the fewest possible number of variables. This paradigm became the standard on which most systems for automatic editing, such as GEIS, are based (De Waal and Coutinho, 2005).…”
Section: A Brief History of Statistical Data Editing (mentioning)
confidence: 99%
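The minimum-change principle described in this statement is easy to demonstrate. The sketch below is an illustrative brute-force search, not the algorithm used in GEIS or any system assessed in the paper: the edit rules, field names, and candidate value domains are invented for the example, and real systems solve the underlying set-covering problem far more efficiently.

```python
from itertools import combinations, product

# Illustrative edit rules for a business-survey record (invented for this sketch).
# Each edit returns True when the record satisfies it.
EDITS = [
    lambda r: r["turnover"] == r["profit"] + r["costs"],   # balance edit
    lambda r: r["costs"] >= 0,                              # non-negativity edit
    lambda r: r["employees"] == 0 or r["turnover"] > 0,     # conditional edit
]

# Small candidate domains per field, purely so the brute force stays finite.
DOMAINS = {
    "turnover": [0, 100, 150, 200],
    "profit": [0, 50, 100],
    "costs": [0, 50, 100],
    "employees": [0, 5],
}

def satisfies_all(record):
    return all(edit(record) for edit in EDITS)

def localize_errors(record):
    """Fellegi-Holt minimum-change search: find a smallest set of fields
    whose values can be altered so that every edit holds, together with
    one consistent set of replacement values."""
    fields = list(record)
    for k in range(len(fields) + 1):            # smallest change sets first
        for subset in combinations(fields, k):
            for values in product(*(DOMAINS[f] for f in subset)):
                candidate = {**record, **dict(zip(subset, values))}
                if satisfies_all(candidate):
                    return set(subset), candidate
    return None, None

if __name__ == "__main__":
    raw = {"turnover": 200, "profit": 50, "costs": 100, "employees": 5}
    changed, repaired = localize_errors(raw)
    print("fields to change:", changed)    # {'turnover'} for this record
    print("repaired record:", repaired)    # turnover corrected to 150
```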
“…We refer to Ref 4 or 12 for such a formulation and several solution methods. Alternative references are from Ref 13 where an overview of algorithms for solving the Fellegi–Holt‐based error localization problem for numerical data is presented, and from Ref 14 where an algorithm that solves the error localization problem for a combination of categorical and numerical data is described.…”
Section: Automatic Editing (mentioning)
confidence: 99%
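For numerical data with linear edits, the error localization problem referenced here is typically stated as an optimization problem. The following sketch is one illustrative formulation, not the algorithms of Refs 13 or 14: it searches candidate field sets in increasing size and uses a zero-objective linear program (via scipy.optimize.linprog) to test whether the remaining fields, held at their observed values, still admit a consistent completion. The edit matrices and record are assumed purely for the example.

```python
from itertools import combinations

import numpy as np
from scipy.optimize import linprog

# Variables: x = (turnover, costs, profit), all assumed non-negative.
# Linear edits (invented for this sketch):
#   turnover - costs - profit = 0        (balance edit)
#   costs <= 0.9 * turnover              (ratio edit)
A_eq = np.array([[1.0, -1.0, -1.0]])
b_eq = np.array([0.0])
A_ub = np.array([[-0.9, 1.0, 0.0]])
b_ub = np.array([0.0])

def feasible_if_fixed(record, fixed_fields, names):
    """LP feasibility check: can the free fields be completed consistently
    while the fixed fields keep their observed values?"""
    bounds = [
        (record[n], record[n]) if n in fixed_fields else (0, None)
        for n in names
    ]
    res = linprog(
        c=np.zeros(len(names)),        # zero objective: feasibility only
        A_ub=A_ub, b_ub=b_ub,
        A_eq=A_eq, b_eq=b_eq,
        bounds=bounds, method="highs",
    )
    return res.success

def localize_errors(record):
    """Smallest set of fields allowed to change (Fellegi-Holt principle)."""
    names = list(record)
    for k in range(len(names) + 1):
        for to_change in combinations(names, k):
            fixed = [n for n in names if n not in to_change]
            if feasible_if_fixed(record, fixed, names):
                return set(to_change)
    return set(names)

if __name__ == "__main__":
    obs = {"turnover": 100.0, "costs": 95.0, "profit": 20.0}
    print(localize_errors(obs))   # {'turnover'}: changing turnover alone suffices
```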
“…This is usually done using a variant of the error localization suggested by Fellegi and Holt (1976), where the values are selected by minimizing the number of fields necessary to turn an erroneous record into a theoretically valid one (Winkler 1995; Winkler and Petkunas 1997). In the imputation step, the values selected in the error localization are replaced with plausibly valid entries (de Waal and Coutinho 2005; de Waal et al. 2011). This is usually done by some form of hot deck imputation (Kalton and Kasprzyk 1986; Andridge and Little 2010).…”
Section: Introduction (mentioning)
confidence: 99%
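The hot deck imputation step mentioned in this statement can be sketched as a nearest-donor lookup: fields flagged by error localization are overwritten with values copied from the most similar clean record. The distance function, field names, and donor pool below are illustrative assumptions; operational hot decks typically use imputation classes, weighting, and a re-check of the edits after imputation.

```python
import math

def hot_deck_impute(record, flagged, donors):
    """Replace the flagged fields of `record` with values copied from the
    donor record that is closest on the remaining (trusted) fields.
    A minimal nearest-neighbour hot deck sketch; edits are not re-checked here."""
    trusted = [f for f in record if f not in flagged]

    def distance(donor):
        # Euclidean distance on the trusted fields only.
        return math.sqrt(sum((record[f] - donor[f]) ** 2 for f in trusted))

    best = min(donors, key=distance)
    return {f: (best[f] if f in flagged else record[f]) for f in record}

if __name__ == "__main__":
    donors = [
        {"turnover": 120.0, "costs": 80.0, "profit": 40.0},
        {"turnover": 500.0, "costs": 350.0, "profit": 150.0},
    ]
    record = {"turnover": 110.0, "costs": 75.0, "profit": -999.0}
    print(hot_deck_impute(record, flagged={"profit"}, donors=donors))
    # profit is copied from the nearest donor (the first one): 40.0
```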