What is a Decision Tree?
A decision tree is a flowchart-like tree structure, where each internal node denotes a test on an attribute, each branch represents an outcome of the test, and each leaf node holds a class label or a class distribution. The topmost node in the tree is the root node.
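As a minimal illustration, such a tree can be represented with a simple node object. The `Node` class, the `outlook` attribute, and the class labels below are illustrative choices, not part of any particular library:

```python
class Node:
    """One node of a decision tree."""
    def __init__(self, attribute=None, label=None):
        self.attribute = attribute  # attribute tested at an internal node
        self.branches = {}          # maps an attribute value to a child Node
        self.label = label          # class label, set only on leaf nodes

    def is_leaf(self):
        return self.label is not None

def classify(node, sample):
    """Follow branches from the root until a leaf is reached."""
    while not node.is_leaf():
        node = node.branches[sample[node.attribute]]
    return node.label

# A tiny hand-built tree: the root tests "outlook"; each branch ends in a leaf.
root = Node(attribute="outlook")
root.branches["sunny"] = Node(label="no")
root.branches["overcast"] = Node(label="yes")
```

Classifying a sample means walking from the root, taking the branch that matches the sample's value for each tested attribute, until a leaf's class label is returned.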
Algorithms for learning Decision Trees
Algorithm − Generate a decision tree from the given training data.
Input − The training samples, samples, represented by discrete-valued attributes; the set of candidate attributes, attribute-list.
Output − A decision tree.
Method
Create a node N;
If samples are all of the same class C, then
Return N as a leaf node labeled with the class C;
If attribute-list is empty, then
Return N as a leaf node labeled with the most common class in samples; // majority voting
Select test-attribute, the attribute among attribute-list with the highest information gain;
Label node N with test-attribute;
For each known value ai of test-attribute // partition the samples
Grow a branch from node N for the condition test-attribute = ai;
Let si be the set of samples in samples for which test-attribute = ai;
If si is empty, then
Attach a leaf labeled with the most common class in samples;
Else attach the node returned by Generate_decision_tree(si, attribute-list - test-attribute);
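The pseudocode above can be sketched in Python as follows. This is a bare-bones ID3-style recursion under simplifying assumptions: samples are dictionaries mapping attribute names to discrete values, the returned tree is either a class label (leaf) or an (attribute, branches) pair, and all names are our own:

```python
from collections import Counter
from math import log2

def entropy(samples, target):
    """Entropy (in bits) of the class distribution in samples."""
    counts = Counter(s[target] for s in samples)
    total = len(samples)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def information_gain(samples, attribute, target):
    """Reduction in entropy from partitioning samples on attribute."""
    total = len(samples)
    remainder = 0.0
    for value in {s[attribute] for s in samples}:
        subset = [s for s in samples if s[attribute] == value]
        remainder += len(subset) / total * entropy(subset, target)
    return entropy(samples, target) - remainder

def generate_decision_tree(samples, attribute_list, target):
    """Recursive tree construction following the pseudocode above."""
    classes = [s[target] for s in samples]
    if len(set(classes)) == 1:        # all samples in the same class C
        return classes[0]
    if not attribute_list:            # no attributes left: majority voting
        return Counter(classes).most_common(1)[0][0]
    # Select the attribute with the highest information gain.
    best = max(attribute_list,
               key=lambda a: information_gain(samples, a, target))
    branches = {}
    # Iterating only over observed values means no partition is empty.
    for value in {s[best] for s in samples}:
        subset = [s for s in samples if s[best] == value]
        rest = [a for a in attribute_list if a != best]
        branches[value] = generate_decision_tree(subset, rest, target)
    return (best, branches)
```

Because the recursion removes the chosen attribute from the candidate list before descending, each attribute is tested at most once along any root-to-leaf path.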
Decision Tree Induction
The automatic generation of decision rules from examples is known as rule induction or automatic rule induction. Generating decision rules in the implicit form of a decision tree is also frequently called rule induction, but the terms tree induction or decision tree induction are usually preferred.
The basic algorithm for decision tree induction is a greedy algorithm that constructs decision trees in a top-down, recursive, divide-and-conquer manner. The algorithm for learning decision trees given above is a version of ID3, a well-known decision tree induction algorithm.
The basic steps are as follows −
The tree starts as a single node representing the training samples.
If the samples are all of the same class, the node becomes a leaf and is labeled with that class.
Otherwise, the algorithm uses an entropy-based measure known as information gain as a heuristic for selecting the attribute that will best separate the samples into individual classes. This attribute becomes the “test” or “decision” attribute at the node. In this version of the algorithm, all attributes are categorical, i.e., discrete-valued; continuous-valued attributes must be discretized.
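To make the information-gain heuristic concrete, here is a small worked calculation. The class counts below (9 positive, 5 negative, split by a hypothetical three-valued attribute) are illustrative numbers chosen for the example, not drawn from this article:

```python
from math import log2

def entropy(counts):
    """Entropy (in bits) of a class distribution given as a list of counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

# 14 samples overall: 9 in one class, 5 in the other.
parent = entropy([9, 5])  # ≈ 0.940 bits

# A hypothetical attribute with three values partitions the samples into
# subsets whose class counts are [2, 3], [4, 0] and [3, 2].
remainder = (5 / 14) * entropy([2, 3]) \
          + (4 / 14) * entropy([4, 0]) \
          + (5 / 14) * entropy([3, 2])

# Information gain = entropy before the split minus the weighted
# entropy of the partitions; the attribute with the largest gain wins.
gain = parent - remainder  # ≈ 0.247 bits
```

The pure partition ([4, 0]) contributes zero entropy, which is exactly why attributes that carve out homogeneous subsets score a high gain.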
A branch is created for each known value of the test attribute, and the samples are partitioned accordingly.
The algorithm repeats the same process recursively to form a decision tree for the samples at each partition. Once an attribute has occurred at a node, it need not be considered in any of the node’s descendants.
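In practice, libraries handle this induction for you. As one sketch, scikit-learn’s `DecisionTreeClassifier` (which implements CART rather than ID3, and expects numerically encoded features) can use the same entropy measure via `criterion="entropy"`; the toy data and encoding below are our own:

```python
from sklearn.tree import DecisionTreeClassifier

# Categorical attribute encoded as integers:
# outlook: 0 = sunny, 1 = overcast, 2 = rain
X = [[0], [0], [1], [2]]
y = ["no", "no", "yes", "yes"]

# criterion="entropy" makes the splitter use the entropy-based measure
# discussed above when choosing tests.
clf = DecisionTreeClassifier(criterion="entropy")
clf.fit(X, y)

print(clf.predict([[1]]))  # classify an "overcast" sample
```

Note that CART builds binary trees (each test splits into two branches) rather than one branch per attribute value, but the greedy top-down induction scheme is the same.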
