Detecting data errors

Where are we and what needs to be done?

Ziawasch Abedjan, Xu Chu, Dong Deng, Raul Castro Fernandez, Ihab F. Ilyas, Mourad Ouzzani, Paolo Papotti, Michael Stonebraker, Nan Tang

Research output: Contribution to journal › Article

38 Citations (Scopus)

Abstract

Data cleaning has played a critical role in ensuring data quality for enterprise applications. Naturally, there has been extensive research in this area, and many data cleaning algorithms have been translated into tools to detect and to possibly repair certain classes of errors such as outliers, duplicates, missing values, and violations of integrity constraints. Since different types of errors may coexist in the same data set, we often need to run more than one kind of tool. In this paper, we investigate two pragmatic questions: (1) are these tools robust enough to capture most errors in real-world data sets? and (2) what is the best strategy to holistically run multiple tools to optimize the detection effort? To answer these two questions, we obtained multiple data cleaning tools that utilize a variety of error detection techniques. We also collected five real-world data sets, for which we could obtain both the raw data and the ground truth on existing errors. In this paper, we report our experimental findings on the errors detected by the tools we tested. First, we show that the coverage of each tool is well below 100%. Second, we show that the order in which multiple tools are run makes a big difference. Hence, we propose a holistic multi-tool strategy that orders the invocations of the available tools to maximize their benefit, while minimizing human effort in verifying results. Third, since this holistic approach still does not lead to acceptable error coverage, we discuss two simple strategies that have the potential to improve the situation, namely domain specific tools and data enrichment. We close this paper by reasoning about the errors that are not detectable by any of the tools we tested.
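The abstract does not spell out how the proposed holistic strategy orders tool invocations, but the stated goal (maximize detected errors, minimize human verification effort) suggests a greedy ordering. The sketch below is a hypothetical illustration of that idea, not the authors' algorithm: tool names, flagged cells, and the benefit metric (new true errors per cell a human must inspect) are all made up for the example.

```python
def order_tools(tool_flags, ground_truth):
    """Greedily order tools by marginal benefit: new true errors
    found per not-yet-seen cell a human would have to verify.

    tool_flags:   dict mapping tool name -> set of flagged cells
    ground_truth: set of cells known to be actual errors
    """
    remaining = dict(tool_flags)
    covered = set()   # true errors found by tools run so far
    order = []
    while remaining:
        def gain(tool):
            flags = remaining[tool]
            new_hits = len((flags & ground_truth) - covered)
            # benefit per newly flagged cell; guard against zero division
            return new_hits / max(len(flags - covered), 1)
        best = max(remaining, key=gain)
        if gain(best) == 0:   # no remaining tool finds anything new
            break
        order.append(best)
        covered |= remaining.pop(best) & ground_truth
    return order, covered

# Toy run with invented tools and cells:
truth = {1, 2, 3, 4, 5}
flags = {
    "outlier": {1, 2, 9},        # 2 true errors among 3 flags
    "dedup":   {3, 4, 5, 8, 9},  # 3 true errors among 5 flags
    "rules":   {2, 3},           # precise but partly redundant
}
order, found = order_tools(flags, truth)
```

In this toy instance the precise "rules" tool runs first (highest hits-per-flag ratio), and the union of all three tools covers every true error; on the paper's real data sets, the point is precisely that such unions still fall well short of full coverage.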

Original language: English
Pages (from-to): 993-1004
Number of pages: 12
Journal: Proceedings of the VLDB Endowment
Volume: 9
Issue number: 12
Publication status: Published - 2016


ASJC Scopus subject areas

  • Computer Science (miscellaneous)
  • Computer Science (all)

Cite this

Abedjan, Z., Chu, X., Deng, D., Fernandez, R. C., Ilyas, I. F., Ouzzani, M., ... Tang, N. (2016). Detecting data errors: Where are we and what needs to be done? Proceedings of the VLDB Endowment, 9(12), 993-1004.

@article{04810f9b15f94c2aa1fc9e593efe3c69,
title = "Detecting data errors: Where are we and what needs to be done?",
abstract = "Data cleaning has played a critical role in ensuring data quality for enterprise applications. Naturally, there has been extensive research in this area, and many data cleaning algorithms have been translated into tools to detect and to possibly repair certain classes of errors such as outliers, duplicates, missing values, and violations of integrity constraints. Since different types of errors may coexist in the same data set, we often need to run more than one kind of tool. In this paper, we investigate two pragmatic questions: (1) are these tools robust enough to capture most errors in real-world data sets? and (2) what is the best strategy to holistically run multiple tools to optimize the detection effort? To answer these two questions, we obtained multiple data cleaning tools that utilize a variety of error detection techniques. We also collected five real-world data sets, for which we could obtain both the raw data and the ground truth on existing errors. In this paper, we report our experimental findings on the errors detected by the tools we tested. First, we show that the coverage of each tool is well below 100{\%}. Second, we show that the order in which multiple tools are run makes a big difference. Hence, we propose a holistic multi-tool strategy that orders the invocations of the available tools to maximize their benefit, while minimizing human effort in verifying results. Third, since this holistic approach still does not lead to acceptable error coverage, we discuss two simple strategies that have the potential to improve the situation, namely domain specific tools and data enrichment. We close this paper by reasoning about the errors that are not detectable by any of the tools we tested.",
author = "Ziawasch Abedjan and Xu Chu and Dong Deng and Fernandez, {Raul Castro} and Ilyas, {Ihab F.} and Mourad Ouzzani and Paolo Papotti and Michael Stonebraker and Nan Tang",
year = "2016",
language = "English",
volume = "9",
pages = "993--1004",
journal = "Proceedings of the VLDB Endowment",
issn = "2150-8097",
publisher = "Very Large Data Base Endowment Inc.",
number = "12",

}

TY - JOUR

T1 - Detecting data errors

T2 - Where are we and what needs to be done?

AU - Abedjan, Ziawasch

AU - Chu, Xu

AU - Deng, Dong

AU - Fernandez, Raul Castro

AU - Ilyas, Ihab F.

AU - Ouzzani, Mourad

AU - Papotti, Paolo

AU - Stonebraker, Michael

AU - Tang, Nan

PY - 2016

Y1 - 2016

N2 - Data cleaning has played a critical role in ensuring data quality for enterprise applications. Naturally, there has been extensive research in this area, and many data cleaning algorithms have been translated into tools to detect and to possibly repair certain classes of errors such as outliers, duplicates, missing values, and violations of integrity constraints. Since different types of errors may coexist in the same data set, we often need to run more than one kind of tool. In this paper, we investigate two pragmatic questions: (1) are these tools robust enough to capture most errors in real-world data sets? and (2) what is the best strategy to holistically run multiple tools to optimize the detection effort? To answer these two questions, we obtained multiple data cleaning tools that utilize a variety of error detection techniques. We also collected five real-world data sets, for which we could obtain both the raw data and the ground truth on existing errors. In this paper, we report our experimental findings on the errors detected by the tools we tested. First, we show that the coverage of each tool is well below 100%. Second, we show that the order in which multiple tools are run makes a big difference. Hence, we propose a holistic multi-tool strategy that orders the invocations of the available tools to maximize their benefit, while minimizing human effort in verifying results. Third, since this holistic approach still does not lead to acceptable error coverage, we discuss two simple strategies that have the potential to improve the situation, namely domain specific tools and data enrichment. We close this paper by reasoning about the errors that are not detectable by any of the tools we tested.

UR - http://www.scopus.com/inward/record.url?scp=85013662261&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85013662261&partnerID=8YFLogxK

M3 - Article

VL - 9

SP - 993

EP - 1004

JO - Proceedings of the VLDB Endowment

JF - Proceedings of the VLDB Endowment

SN - 2150-8097

IS - 12

ER -