Reinforcement learning for data cleaning and data preparation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Data cleaning and data preparation have been long-standing challenges in data science: "dirty" data leads to incorrect results, biases, and misleading conclusions. For a given dataset and analytics task, a plethora of data preprocessing techniques and alternative data cleaning strategies are available, but they may lead to dramatically different outputs and unequal ML model quality. Users generally do not know where to start or which methods to use for adequate data preparation. Most current work focuses either on proposing new data cleaning algorithms (often specific to certain types of data glitches considered in isolation, and generally with no "pipeline vision" of the whole preprocessing sequence) or on developing automated machine learning (AutoML) approaches that optimize the hyper-parameters of a given ML model but often rely on default preprocessing methods. We argue that more effort should be devoted to a principled data preparation approach that helps users select the optimal sequence of data curation tasks, and learns from them, to obtain the best quality of the final result. In this abstract, we present Learn2Clean, a method based on Q-Learning, a model-free reinforcement learning technique, that selects, for a given dataset, a given ML model, and a quality performance metric, the optimal sequence of preprocessing tasks such that the quality metric is maximized. Learn2Clean was presented at The Web Conference 2019 [1], and we will discuss Learn2Clean enhancements for semi-automated data preparation guided by the user.
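The idea of casting pipeline selection as Q-Learning can be sketched in a few lines. This is not the Learn2Clean implementation: the action names, the toy `quality` stand-in (which would really mean training the ML model on the preprocessed data and measuring the chosen metric), and all hyper-parameters below are illustrative assumptions.

```python
import random
from collections import defaultdict

# Hypothetical action set for illustration; Learn2Clean's real task
# catalogue (imputation, outlier detection, deduplication, ...) is richer.
ACTIONS = ["impute", "remove_outliers", "deduplicate", "normalize", "END"]

def quality(pipeline):
    # Stand-in for "train the ML model on the preprocessed data and
    # return its quality metric"; here each task adds a fixed toy gain.
    gain = {"impute": 0.2, "remove_outliers": 0.15,
            "deduplicate": 0.1, "normalize": 0.05}
    return sum(gain.get(a, 0.0) for a in set(pipeline))

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    # Q[(state, action)]; a state is the (sorted) tuple of tasks applied so far.
    Q = defaultdict(float)
    for _ in range(episodes):
        state, pipeline = (), []
        while True:
            # epsilon-greedy exploration over preprocessing actions
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(state, x)])
            if a == "END" or a in state:
                # Episode ends: the reward is the quality of the whole pipeline
                # (repeating a task is treated as "stop" in this toy model).
                Q[(state, a)] += alpha * (quality(pipeline) - Q[(state, a)])
                break
            nxt = tuple(sorted(state + (a,)))
            best_next = max(Q[(nxt, x)] for x in ACTIONS)
            Q[(state, a)] += alpha * (gamma * best_next - Q[(state, a)])
            state, pipeline = nxt, pipeline + [a]
    # Greedy rollout of the learned policy: the recommended pipeline.
    state, pipeline = (), []
    while True:
        a = max(ACTIONS, key=lambda x: Q[(state, x)])
        if a == "END" or a in state:
            break
        pipeline.append(a)
        state = tuple(sorted(state + (a,)))
    return pipeline
```

With enough episodes the greedy rollout returns the task sequence whose (discounted) terminal quality is highest; in the real system the reward signal would come from actually retraining and evaluating the model after each candidate pipeline.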

Original language: English
Title of host publication: Proceedings of the Workshop on Human-In-the-Loop Data Analytics, HILDA 2019
Publisher: Association for Computing Machinery
ISBN (Electronic): 9781450367912
Publication status: Published - 5 Jul 2019
Event: 2019 Workshop on Human-In-the-Loop Data Analytics, HILDA 2019, co-located with SIGMOD 2019 - Amsterdam, Netherlands
Duration: 5 Jul 2019 → …

Publication series

Name: Proceedings of the ACM SIGMOD International Conference on Management of Data
ISSN (Print): 0730-8078

Conference

Conference: 2019 Workshop on Human-In-the-Loop Data Analytics, HILDA 2019, co-located with SIGMOD 2019
Country: Netherlands
City: Amsterdam
Period: 5/7/19 → …

Fingerprint

  • Reinforcement learning
  • Cleaning
  • Learning systems
  • Pipelines

ASJC Scopus subject areas

  • Software
  • Information Systems

Cite this

Berti-Equille, L. (2019). Reinforcement learning for data cleaning and data preparation. In Proceedings of the Workshop on Human-In-the-Loop Data Analytics, HILDA 2019 (Proceedings of the ACM SIGMOD International Conference on Management of Data). Association for Computing Machinery.

