### Abstract

We propose a new decision tree algorithm, Class Confidence Proportion Decision Tree (CCPDT), which is robust and insensitive to class sizes and generates rules that are statistically significant. To make decision trees robust, we begin by expressing Information Gain, the metric used in C4.5, in terms of the confidence of a rule. This allows us to immediately explain why Information Gain, like confidence, results in rules biased towards the majority class. To overcome this bias, we introduce a new measure, Class Confidence Proportion (CCP), which forms the basis of CCPDT. To generate statistically significant rules, we design a novel and efficient top-down and bottom-up approach that uses Fisher's exact test to prune branches of the tree that are not statistically significant. Together, these two changes yield a classifier that performs statistically better than not only traditional decision trees but also trees learned from data balanced by well-known sampling techniques. Our claims are confirmed through extensive experiments and comparisons against C4.5, CART, HDDT and SPARCCC.
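The abstract names two mechanisms: a class-size-normalized splitting measure (CCP) and significance pruning via Fisher's exact test. The sketch below illustrates both under stated assumptions — it is not the authors' implementation. It assumes the common reading of the paper's definitions, CC(X→y) = Supp(X∧y)/Supp(y) and CCP(X→y) = CC(X→y)/(CC(X→y)+CC(X→¬y)), and computes the standard one-sided hypergeometric tail for Fisher's test; all function names are illustrative.

```python
# Hedged sketch: Class Confidence Proportion and one-sided Fisher's exact
# test, under the assumptions stated above. Not the authors' code.
from math import comb


def class_confidence(supp_xy: int, supp_y: int) -> float:
    """CC(X -> y): fraction of class-y examples satisfying antecedent X.

    Unlike ordinary rule confidence, Supp(X & y) / Supp(X), this
    normalizes by the class size, so a small class is not out-voted.
    """
    return supp_xy / supp_y if supp_y else 0.0


def ccp(supp_xy: int, supp_y: int, supp_x_noty: int, supp_noty: int) -> float:
    """CCP(X -> y) = CC(X->y) / (CC(X->y) + CC(X->~y)) (assumed definition)."""
    cc_y = class_confidence(supp_xy, supp_y)
    cc_noty = class_confidence(supp_x_noty, supp_noty)
    denom = cc_y + cc_noty
    return cc_y / denom if denom else 0.0


def fisher_one_sided(a: int, b: int, c: int, d: int) -> float:
    """One-sided Fisher's exact test on the 2x2 table [[a, b], [c, d]].

    Rows: branch taken / not taken; columns: positive / negative class.
    Returns P(>= a positives in the branch) under the hypergeometric
    null; a small p-value means the split is statistically significant,
    so an insignificant branch would be pruned.
    """
    n, k, draws = a + b + c + d, a + c, a + b
    total = comb(n, draws)
    return sum(
        comb(k, i) * comb(n - k, draws - i)  # math.comb is 0 when draws-i > n-k
        for i in range(a, min(draws, k) + 1)
    ) / total
```

As a worked example of the bias the abstract describes: with 50 positives and 1000 negatives, a rule covering 40 positives and 100 negatives has ordinary confidence 40/140 ≈ 0.29, yet `ccp(40, 50, 100, 1000)` ≈ 0.89, because 80% of the minority class versus 10% of the majority class satisfies the rule.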

Original language | English
---|---
Title of host publication | Proceedings of the 10th SIAM International Conference on Data Mining, SDM 2010
Pages | 766-777
Number of pages | 12
Publication status | Published - 2010
Externally published | Yes
Event | 10th SIAM International Conference on Data Mining, SDM 2010, Columbus, OH; 29 Apr 2010 → 1 May 2010

### Other

Other | 10th SIAM International Conference on Data Mining, SDM 2010
---|---
City | Columbus, OH
Period | 29/4/10 → 1/5/10


### ASJC Scopus subject areas

- Software

### Cite this

**A robust decision tree algorithm for imbalanced data sets.** / Liu, Wei; Chawla, Sanjay; Cieslak, David A.; Chawla, Nitesh V.

*Proceedings of the 10th SIAM International Conference on Data Mining, SDM 2010.* pp. 766-777, 10th SIAM International Conference on Data Mining, SDM 2010, Columbus, OH, 29/4/10.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - A robust decision tree algorithm for imbalanced data sets

AU - Liu, Wei

AU - Chawla, Sanjay

AU - Cieslak, David A.

AU - Chawla, Nitesh V.

PY - 2010

Y1 - 2010

N2 - We propose a new decision tree algorithm, Class Confidence Proportion Decision Tree (CCPDT), which is robust and insensitive to class sizes and generates rules that are statistically significant. To make decision trees robust, we begin by expressing Information Gain, the metric used in C4.5, in terms of the confidence of a rule. This allows us to immediately explain why Information Gain, like confidence, results in rules biased towards the majority class. To overcome this bias, we introduce a new measure, Class Confidence Proportion (CCP), which forms the basis of CCPDT. To generate statistically significant rules, we design a novel and efficient top-down and bottom-up approach that uses Fisher's exact test to prune branches of the tree that are not statistically significant. Together, these two changes yield a classifier that performs statistically better than not only traditional decision trees but also trees learned from data balanced by well-known sampling techniques. Our claims are confirmed through extensive experiments and comparisons against C4.5, CART, HDDT and SPARCCC.

AB - We propose a new decision tree algorithm, Class Confidence Proportion Decision Tree (CCPDT), which is robust and insensitive to class sizes and generates rules that are statistically significant. To make decision trees robust, we begin by expressing Information Gain, the metric used in C4.5, in terms of the confidence of a rule. This allows us to immediately explain why Information Gain, like confidence, results in rules biased towards the majority class. To overcome this bias, we introduce a new measure, Class Confidence Proportion (CCP), which forms the basis of CCPDT. To generate statistically significant rules, we design a novel and efficient top-down and bottom-up approach that uses Fisher's exact test to prune branches of the tree that are not statistically significant. Together, these two changes yield a classifier that performs statistically better than not only traditional decision trees but also trees learned from data balanced by well-known sampling techniques. Our claims are confirmed through extensive experiments and comparisons against C4.5, CART, HDDT and SPARCCC.

UR - http://www.scopus.com/inward/record.url?scp=84873579825&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84873579825&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:84873579825

SP - 766

EP - 777

BT - Proceedings of the 10th SIAM International Conference on Data Mining, SDM 2010

ER -