# What is the CART Pruning Algorithm?

CART is a well-known decision tree algorithm introduced by Leo Breiman, Jerome Friedman, Richard Olshen, and Charles Stone in 1984. CART stands for Classification and Regression Trees. The algorithm builds binary trees, continuing to split as long as new splits can be found that improve purity.

Pruning produces a family of simpler subtrees, each of which represents a different trade-off between model complexity and training-set misclassification rate. The CART algorithm treats this family of subtrees as candidate models. Each candidate subtree is applied to the validation set, and the tree with the lowest validation-set misclassification rate is chosen as the final model.
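In scikit-learn, CART-style cost-complexity pruning is exposed through `cost_complexity_pruning_path`, which returns the penalty values (alphas) at which successive branches are pruned away; refitting with each alpha yields the nested family of candidate subtrees. A minimal sketch (the dataset and parameters are illustrative choices, not part of the original text):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

# cost_complexity_pruning_path returns the alpha values at which
# successive branches would be pruned, from the full tree down to
# the root-only tree.
full_tree = DecisionTreeClassifier(random_state=0)
path = full_tree.cost_complexity_pruning_path(X_train, y_train)

# One candidate subtree per alpha: a larger alpha means a smaller tree.
candidates = [
    DecisionTreeClassifier(random_state=0, ccp_alpha=a).fit(X_train, y_train)
    for a in path.ccp_alphas
]
print(len(candidates), "candidate subtrees")
```

Each candidate is a pruned version of the full tree, giving exactly the complexity/error trade-off curve described above.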

The CART algorithm identifies candidate subtrees through a process of repeated pruning. The goal is to prune first those branches that provide the least additional predictive power per leaf. To identify these least useful branches, CART relies on a concept called the adjusted error rate.

This measure inflates each node's misclassification rate on the training set by imposing a complexity penalty based on the number of leaves in the tree. The adjusted error rate identifies weak branches (those whose reduction in misclassification rate is not enough to overcome the penalty) and marks them for pruning.
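The adjusted error rate can be written as AE(T) = E(T) + α · leaves(T), where E(T) is the training-set misclassification rate and α is the penalty per leaf. A toy illustration (the function name, α value, and counts are invented for the example):

```python
def adjusted_error_rate(misclassified, n_records, n_leaves, alpha):
    """Training misclassification rate plus a per-leaf complexity penalty."""
    return misclassified / n_records + alpha * n_leaves

# A bushy subtree: slightly lower raw error, but many more leaves.
bushy = adjusted_error_rate(misclassified=40, n_records=1000, n_leaves=25, alpha=0.005)

# A pruned subtree: slightly higher raw error, far fewer leaves.
pruned = adjusted_error_rate(misclassified=55, n_records=1000, n_leaves=5, alpha=0.005)

# With this penalty the pruned subtree wins:
#   bushy  = 0.040 + 0.125 = 0.165
#   pruned = 0.055 + 0.025 = 0.080
print(bushy, pruned)
```

The bushy subtree's extra leaves cost more in penalty than they save in raw error, so the adjusted error rate marks that branch for pruning.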

The next task is to choose, from the pool of candidate subtrees, the one that performs best on new data. Each candidate subtree is used to classify the records in the validation set, and the tree that performs this task with the lowest overall error rate is declared the winner. The winning subtree has been pruned enough to remove the effects of overtraining, but not so much as to lose valuable information.
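The selection step can be sketched with scikit-learn, whose `ccp_alpha` parameter implements the same cost-complexity idea (the dataset and split are illustrative assumptions):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

alphas = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(
    X_train, y_train).ccp_alphas

# Score every candidate subtree on the validation set and keep the one
# with the lowest validation misclassification rate.
best_tree, best_error = None, float("inf")
for a in alphas:
    tree = DecisionTreeClassifier(random_state=0, ccp_alpha=a).fit(X_train, y_train)
    error = 1.0 - tree.score(X_valid, y_valid)
    if error < best_error:
        best_tree, best_error = tree, error

print(f"winner has {best_tree.get_n_leaves()} leaves, "
      f"validation error {best_error:.3f}")
```

The winner is typically neither the full tree nor the root-only tree, but an intermediate subtree on the complexity/error curve.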

Because this pruning algorithm is based only on the misclassification rate, without taking the probability of each classification into account, it replaces any subtree whose leaves all make the same classification with a common parent that also makes that classification.
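As a toy illustration of that replacement rule (the dictionary tree representation and split labels are invented for this sketch): when both children of a node are leaves predicting the same class, the node can itself become a leaf with that class, with no change in misclassification rate.

```python
def collapse_same_class_leaves(node):
    """Recursively replace any subtree whose leaves all predict the
    same class with a single leaf predicting that class."""
    if "class" in node:  # already a leaf
        return node
    left = collapse_same_class_leaves(node["left"])
    right = collapse_same_class_leaves(node["right"])
    if "class" in left and "class" in right and left["class"] == right["class"]:
        return {"class": left["class"]}  # same prediction, fewer leaves
    return {"split": node["split"], "left": left, "right": right}

tree = {"split": "age < 30",
        "left": {"class": "yes"},
        "right": {"split": "income > 50k",
                  "left": {"class": "no"},
                  "right": {"class": "no"}}}

# The right-hand subtree collapses to the single leaf {"class": "no"};
# the top split survives because its children disagree.
print(collapse_same_class_leaves(tree))
```

Note that the two merged leaves may have held very different class probabilities, which is exactly the information this rule discards.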

In applications where the goal is to select a small proportion of the records (the top 1 percent or 10 percent, for instance), this behavior can hurt the performance of the tree, because some of the eliminated leaves contain a very high concentration of the target class. Some tools, including SAS Enterprise Miner, allow the user to prune trees optimally for such applications.

The winning subtree was chosen on the basis of its overall error rate when applied to the task of classifying the records in the validation set. Although we expect the selected subtree to remain the best-performing subtree on other datasets, the error rate that caused it to be selected may slightly overstate its effectiveness.
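Because the validation set was used for selection, its error rate is an optimistic estimate; a common remedy is a third hold-out (test) set, scored only once. A sketch under assumed split sizes and illustrative penalty values:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Three-way split: train to grow and prune, validation to pick the
# winner, test to report an unbiased error rate for the chosen tree.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.4, random_state=0)
X_valid, X_test, y_valid, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=0)

candidates = [DecisionTreeClassifier(random_state=0, ccp_alpha=a).fit(X_train, y_train)
              for a in (0.0, 0.005, 0.02)]  # illustrative penalty values

# Pick the winner by validation error, then score it once on the test set.
winner = min(candidates, key=lambda t: 1.0 - t.score(X_valid, y_valid))
test_error = 1.0 - winner.score(X_test, y_test)
print("test error:", test_error)
```

The test-set error was not involved in selection, so it does not carry the selection bias described above.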