Decision Tree

Key takeaways
Feature selection: Choose the most relevant and informative features to build the decision tree. Consider features that have a strong relationship with the target variable.
Splitting criteria: Select an appropriate splitting criterion (e.g., Gini index, entropy) to determine how to divide the data at each node of the tree.
Handling missing values: Decide how to handle missing values in the dataset, whether by imputation or using specific techniques designed for decision trees.
Pruning: Consider pruning techniques, such as cost complexity pruning or reduced error pruning, to prevent overfitting and improve the generalization ability of the decision tree.
Handling categorical variables: Determine how to handle categorical variables in the decision tree algorithm, such as one-hot encoding or label encoding.
Tree depth and complexity: Control the depth and complexity of the decision tree to avoid overfitting. Setting maximum depth or minimum number of samples per leaf can help regulate tree growth.
Interpretability: Leverage the interpretability of decision trees to gain insights into the decision-making process. Decision trees provide transparent and easily understandable rules for classification or regression.
Ensemble methods: Consider using ensemble methods, such as Random Forests or Gradient Boosting, which combine multiple decision trees to improve prediction accuracy and robustness.
Feature importance: Analyze the feature importance provided by the decision tree to identify the most influential features in the classification or regression task.
Regularization and parameter tuning: Explore regularization techniques, such as reducing the maximum number of features or adjusting other hyperparameters, to optimize the performance of the decision tree.
Interview Questions

What is a Decision Tree, and how does it work?
What are the advantages of using Decision Trees for classification or regression tasks?
What are the different splitting criteria used in Decision Trees, and how do they affect the tree's construction?
How do you handle missing values in a dataset when building a Decision Tree?
What is overfitting in the context of Decision Trees, and how can it be addressed?
What is pruning, and why is it important in Decision Trees?
What are the different measures used to assess the quality of splits in Decision Trees?
How do you handle categorical variables in a Decision Tree algorithm?
How can you handle continuous or numerical variables in Decision Trees?
What are some methods for dealing with imbalanced datasets when using Decision Trees?
Can you explain the concept of feature importance in Decision Trees?
What are ensemble methods, and how can they be combined with Decision Trees?
What is the difference between Random Forests and Gradient Boosting algorithms?
How do you determine the optimal depth or size of a Decision Tree?
Can you explain the concept of information gain or impurity reduction in Decision Trees?
How do you evaluate the performance of a Decision Tree model?
Can Decision Trees handle multi-class classification problems?
How do you interpret the rules generated by a Decision Tree model?
Can Decision Trees handle missing values and outliers during the prediction phase?
How can Decision Trees be used for feature selection or variable importance ranking?
Solutions

A Decision Tree is a supervised machine learning algorithm that can be used for classification and regression tasks. It takes a dataset as input and recursively partitions the data based on the values of input features to create a tree-like model. The tree structure consists of internal nodes representing features, branches representing decisions based on feature values, and leaf nodes representing the predicted output or class labels.
The construction of a Decision Tree involves selecting the best features to split the data at each internal node based on certain criteria, such as information gain or Gini impurity. The goal is to create a tree that maximizes the separation of classes or minimizes the variance within each class.
Decision Trees are easy to understand and interpret. The generated rules can be visualized and easily explained to stakeholders.
They can handle both categorical and numerical data without requiring extensive data preprocessing.
Decision Trees can handle nonlinear relationships between features and the target variable.
Some implementations can handle missing values by effectively utilizing the available information for decision-making.
Decision Trees are relatively robust to outliers, because splits depend on the ordering of feature values rather than their magnitudes.
They can handle multi-class classification problems.
Decision Trees can be combined with ensemble methods to improve performance.
The commonly used splitting criteria in Decision Trees include:
Gini impurity: It measures the degree of impurity or the probability of incorrectly classifying a randomly chosen element in a dataset.
Information gain: It calculates the reduction in entropy (uncertainty) achieved by splitting the data based on a particular feature.
Gain ratio: It is similar to information gain but takes into account the intrinsic information of each feature.
These splitting criteria affect the construction of the tree by determining the order and quality of feature selection for splitting. At each internal node, the split that yields the highest information gain (equivalently, the largest reduction in impurity) in the resulting subsets is chosen.
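As a concrete illustration, both criteria can be computed directly from class counts. The minimal NumPy sketch below is purely illustrative (the function names gini and entropy and the toy label array are not from any particular library):

```python
import numpy as np

def gini(y):
    # Gini impurity: 1 - sum of squared class probabilities
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def entropy(y):
    # Entropy: -sum of p * log2(p) over class probabilities
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

labels = np.array([0, 0, 1, 1, 1, 2])
print("Gini impurity:", round(gini(labels), 3))  # ~0.611
print("Entropy:", round(entropy(labels), 3))     # ~1.459
```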
When handling missing values in a dataset for a Decision Tree (see the sketch after this list):
Missing values can be treated as a separate category if the feature is categorical.
For numerical features, missing values can be replaced with the mean, median, or another appropriate measure.
Missing values can be imputed based on other correlated features or using advanced imputation techniques.
An additional "missing" category can be created if it is informative for the classification or regression task.
Overfitting occurs when a Decision Tree captures noise or irrelevant patterns from the training data, leading to poor generalization on unseen data. Signs of overfitting include overly complex trees with many branches and low accuracy on test data.
To address overfitting in Decision Trees (see the cross-validation sketch after this list):
Pruning techniques can be applied to reduce the size and complexity of the tree, such as cost complexity pruning or reduced error pruning.
Setting a maximum depth or minimum number of samples per leaf can limit the tree's growth.
Increasing the minimum number of samples required for splitting can prevent overfitting on small subsets.
Cross-validation can be used to evaluate different models and select the one with the best performance on unseen data.
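A minimal sketch of the cross-validation approach, assuming the IRIS dataset as a stand-in: comparing a few depth settings makes the overfitting trade-off visible.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Compare several depth limits with 5-fold cross-validation
for depth in [1, 2, 3, 5, None]:
    clf = DecisionTreeClassifier(max_depth=depth, min_samples_leaf=2, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"max_depth={depth}: mean CV accuracy = {scores.mean():.3f}")
```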
Pruning is the process of reducing the size or complexity of a Decision Tree by removing specific branches or nodes. It is performed after the initial tree is constructed. Pruning is important because it helps prevent overfitting, improves the generalization ability of the tree, and enhances its interpretability.
By pruning, we can simplify the tree structure and remove branches that do not contribute significantly to the overall accuracy or predictive power of the model. Pruning aims to strike a balance between model complexity and performance on unseen data, ensuring that the tree captures essential patterns and avoids memorizing noise or outliers in the training data.
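As a sketch of cost complexity pruning with scikit-learn, the pruning path can be computed and the tree refit with a chosen ccp_alpha; the resulting alpha values are data-dependent, so the loop below is illustrative rather than a recommendation:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Compute the effective alphas along the cost-complexity pruning path
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_train, y_train)

# Refit one pruned tree per alpha and report its size and test accuracy
for alpha in path.ccp_alphas:
    pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_train, y_train)
    print(f"ccp_alpha={alpha:.4f}: leaves={pruned.get_n_leaves()}, "
          f"test accuracy={pruned.score(X_test, y_test):.3f}")
```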
The commonly used measures to assess the quality of splits in Decision Trees are:
Gini impurity: It measures the probability of incorrectly classifying a randomly chosen element if it were randomly labeled according to the distribution of classes in the subset. Lower Gini impurity indicates a more pure split.
Information gain: It calculates the reduction in entropy (uncertainty) achieved by splitting the data based on a particular feature. Higher information gain indicates a more informative split.
Gain ratio: It is a modification of information gain that takes into account the intrinsic information of each feature. It considers the number of categories or levels in a categorical feature to address bias towards features with many levels.
These measures help the Decision Tree algorithm determine the optimal split at each node by selecting the feature that maximizes the purity or information gain in the resulting subsets.
To handle categorical variables in a Decision Tree algorithm:
One-Hot Encoding: Each category of a categorical variable is transformed into a binary column. For each instance, the column corresponding to its category is set to 1, while the rest are set to 0.
Label Encoding: Assign a unique numerical value to each category. The categorical variable is replaced with these numerical labels. However, this method should be used with caution, as it may introduce a false sense of order or magnitude in the data.
The choice between these encoding techniques depends on the nature of the categorical variable and the specific requirements of the problem at hand.
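A short sketch of both encodings, using pandas and scikit-learn; the toy DataFrame and the color column are made up for illustration:

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# One-hot encoding: one binary column per category
one_hot = pd.get_dummies(df, columns=["color"])
print(one_hot)

# Label encoding: one integer per category (may imply a spurious order)
df["color_label"] = LabelEncoder().fit_transform(df["color"])
print(df)
```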
Decision Trees can handle continuous or numerical variables naturally. They determine the split points based on the values of the numerical variable. Here's how it works:
The Decision Tree algorithm searches for the best split point by evaluating different thresholds or ranges based on the numerical variable's values.
The split point is chosen based on a criterion such as Gini impurity or information gain, aiming to minimize impurity or maximize information gain in the resulting subsets.
Once the split point is determined, the tree branches into two child nodes based on whether the numerical variable's value is above or below the split point.
The process continues recursively on each branch until a stopping criterion is met (e.g., reaching a maximum depth or minimum number of samples per leaf).
When dealing with imbalanced datasets in Decision Trees, some methods to consider are:
Class weights: Assign different weights to the classes during the training process to give more importance to the minority class. This helps balance the impact of different classes on the tree construction.
Sampling techniques: Use techniques like undersampling the majority class or oversampling the minority class to create a more balanced dataset. This can be done by randomly selecting instances or generating synthetic samples.
Ensemble methods: Utilize ensemble methods like Random Forests or Gradient Boosting, which inherently handle imbalanced datasets by combining multiple decision trees.
Cost-sensitive learning: Assign different misclassification costs to different classes. This encourages the algorithm to focus more on correctly classifying instances from the minority class.
The choice of method depends on the specifics of the dataset and the problem at hand. It's often recommended to try multiple techniques and evaluate their performance.
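As an illustrative sketch of the class-weighting option, the snippet below builds an artificially imbalanced dataset (the 95/5 split is arbitrary) and compares unweighted and balanced trees:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Binary problem with roughly 95% majority / 5% minority class
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

for weight in [None, "balanced"]:
    clf = DecisionTreeClassifier(class_weight=weight, random_state=0).fit(X_train, y_train)
    score = balanced_accuracy_score(y_test, clf.predict(X_test))
    print(f"class_weight={weight}: balanced accuracy = {score:.3f}")
```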
Feature importance in Decision Trees refers to the measurement of the relative importance or relevance of each feature in the tree's decision-making process. It helps identify which features have the most significant impact on the target variable.
In Decision Trees, feature importance can be determined by considering how much each feature contributes to reducing impurity or improving the information gain at each split. Features that result in significant impurity reduction or information gain are considered more important.
The feature importance is calculated based on the number of instances or samples affected by the feature, the depth at which it appears in the tree, and the impurity reduction or information gain associated with its use in splits.
Feature importance can provide insights into the underlying patterns and relationships within the data, aiding in feature selection, understanding the predictive power of different features, and generating meaningful insights from the model.
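A minimal sketch of reading these scores from a fitted scikit-learn tree, using the IRIS data as a stand-in:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
clf = DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)

# feature_importances_ reports the normalized total impurity reduction per feature
for name, importance in zip(iris.feature_names, clf.feature_importances_):
    print(f"{name}: {importance:.3f}")
```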
Ensemble methods combine multiple individual models to create a more robust and accurate predictive model. In the context of Decision Trees, two popular ensemble methods are Random Forests and Gradient Boosting.
Random Forests: It combines a set of Decision Trees, each trained on a random subset of the data and a random subset of features. The final prediction is determined by aggregating the predictions of all trees, either by majority voting in classification tasks or averaging in regression tasks. Random Forests reduce overfitting, increase stability, and provide feature importance rankings.
Gradient Boosting: It builds an ensemble of Decision Trees sequentially, where each subsequent tree corrects the errors made by the previous trees. The trees are added in a gradient descent manner, minimizing a loss function. Gradient Boosting achieves high predictive accuracy and handles complex relationships in the data. Popular implementations include XGBoost and LightGBM; AdaBoost is a closely related boosting algorithm that reweights samples rather than fitting gradients.
These ensemble methods improve the performance of Decision Trees by reducing bias, capturing diverse patterns in the data, and handling high-dimensional and complex problems.
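As a quick, illustrative comparison of the two ensembles on the IRIS data (near-default settings, not a benchmark):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

for model in [RandomForestClassifier(n_estimators=100, random_state=0),
              GradientBoostingClassifier(n_estimators=100, random_state=0)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{model.__class__.__name__}: mean CV accuracy = {scores.mean():.3f}")
```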
The main differences between Random Forests and Gradient Boosting algorithms are:
Training Process: Random Forests train each tree independently using random subsets of the data and features, while Gradient Boosting builds trees sequentially, with each tree correcting the errors of the previous trees.
Sample and Feature Selection: Random Forests use bootstrap aggregating (bagging) to randomly select subsets of both samples and features at each tree construction. Gradient Boosting typically uses the full training set (or random subsamples in stochastic variants) and fits each new tree to the residual errors, i.e., the negative gradients of the loss, left by the previous trees.
Voting Strategy: Random Forests combine predictions by majority voting (for classification) or averaging (for regression) the predictions from multiple trees. Gradient Boosting combines predictions by adding the outputs of the individual trees, sequentially minimizing a loss function.
Bias-Variance Tradeoff: Random Forests reduce variance by averaging multiple independent trees but may have higher bias. Gradient Boosting reduces bias by iteratively correcting errors but may have higher variance.
Feature Importance: Random Forests provide feature importance rankings based on the average impurity reduction across all trees. Gradient Boosting can also provide feature importance, typically based on the number of times a feature is selected for splitting.
Both algorithms are powerful ensemble methods that improve the performance of Decision Trees, but they have different underlying principles and training approaches.
Determining the optimal depth or size of a Decision Tree involves finding the right balance between model complexity and generalization ability. Here are some approaches to determine the optimal depth or size:
Maximum Depth: Set a maximum depth for the Decision Tree. This limits the number of levels or splits in the tree. A deeper tree can capture more complex relationships in the data but increases the risk of overfitting. Cross-validation or validation curves can be used to evaluate different depths and choose the one that maximizes performance on unseen data.
Minimum Number of Samples per Leaf: Specify a minimum number of samples required to create a leaf node. This prevents further splitting if the number of samples at a node is below the threshold. Setting a higher threshold can help prevent overfitting and create simpler trees.
Stopping Criteria: Define other stopping criteria such as minimum information gain, maximum number of leaf nodes, or maximum number of features. These criteria can help control the growth of the tree and prevent overfitting.
The optimal depth or size of the Decision Tree should be determined by evaluating the model's performance on a validation set or using cross-validation techniques.
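A sketch of this tuning step with GridSearchCV; the parameter grid below is illustrative, not a recommendation:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Search over depth and leaf-size limits with 5-fold cross-validation
param_grid = {"max_depth": [2, 3, 4, 5, None],
              "min_samples_leaf": [1, 2, 5, 10]}
search = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)

print("Best parameters:", search.best_params_)
print("Best CV accuracy:", round(search.best_score_, 3))
```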
Information gain and impurity reduction are concepts used in Decision Trees to determine the quality of a split based on a specific feature. They assess how well a feature separates the data into homogeneous subsets in terms of the target variable.
In classification tasks, the impurity or disorder of a set of instances is measured using metrics like Gini impurity or entropy. A node with low impurity means it contains instances predominantly belonging to a single class.
Information gain calculates the reduction in impurity achieved by splitting the data based on a particular feature. It measures how much information about the target variable is gained by including that feature in the split. Higher information gain indicates that the feature contributes more to the separation of classes or the prediction task.
Impurity reduction is the difference between the impurity of the current node and the weighted average impurity of the resulting child nodes after the split. The feature that results in the highest information gain or impurity reduction is selected as the best feature to split at each internal node of the Decision Tree.
The performance of a Decision Tree model can be evaluated using various metrics, depending on the task at hand (classification or regression). Here are some commonly used evaluation metrics:
Classification:
Accuracy: The proportion of correctly classified instances.
Precision: The ability to correctly identify positive instances.
Recall: The ability to correctly identify all positive instances.
F1 score: The harmonic mean of precision and recall.
Area Under the ROC Curve (AUC-ROC): Measures the model's ability to discriminate between classes.
Regression:
Mean Absolute Error (MAE): The average absolute difference between predicted and actual values.
Mean Squared Error (MSE): The average squared difference between predicted and actual values.
R-squared: Measures the proportion of variance in the target variable explained by the model.
To evaluate the performance, you can split the dataset into training and testing sets, or use techniques like cross-validation to obtain more reliable estimates of the model's performance. By comparing the model's predictions to the true values, you can assess its accuracy and generalization ability.
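A short sketch of computing several of the classification metrics above with scikit-learn, again using IRIS as a stand-in:

```python
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("Accuracy:", accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))  # precision, recall, F1 per class
```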
Yes, Decision Trees can handle multi-class classification problems. Decision Trees are inherently capable of handling both binary and multi-class classification tasks. At each internal node, the tree splits the data based on a feature, and at each leaf node, the majority class or the class with the highest frequency is assigned.
During training, the Decision Tree algorithm can handle multiple classes by using appropriate splitting criteria (e.g., Gini impurity or information gain) to find the most informative splits that separate the classes effectively.
The rules generated by a Decision Tree model can be interpreted by following the path from the root to a specific leaf node. Each node represents a condition or rule based on a feature, and the tree branches based on the outcomes of the conditions.
To interpret the rules, you can examine the feature conditions at each node and understand the decisions made by the tree. The rules can provide insights into the relationships between the features and the target variable. The depth and complexity of the tree can affect the interpretability, with simpler trees being easier to interpret.
For example, in a binary classification problem, a rule could be interpreted as "If feature A > 5 and feature B < 10, then predict class 1." By examining the rules, you can gain understanding about the decision-making process of the model and identify the important features that contribute to the predictions.
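As a sketch, scikit-learn's export_text prints the learned rules in exactly this if/then form (the shallow depth is chosen only to keep the output readable):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# Print the tree as a nested set of if/then rules on the named features
print(export_text(clf, feature_names=list(iris.feature_names)))
```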
Some Decision Tree implementations can handle missing values during the prediction phase. When a feature value is missing at a particular node, the tree can, for example, use surrogate splits (as in CART), send the instance down a designated "missing" branch, or split it across both branches and combine the results; implementations without such support require imputation before prediction.
As for outliers, Decision Trees are relatively robust to outliers because they partition the feature space into regions based on splits, and outliers are likely to be isolated in their own leaf nodes. However, outliers can influence the tree's structure and decisions if they significantly affect the impurity or information gain.
Decision Trees can be used for feature selection or variable importance ranking based on their inherent ability to assess feature importance during the tree construction process. The importance of a feature can be measured using different criteria such as:
Mean Decrease Impurity: It calculates the total impurity reduction achieved by a feature over all splits in the tree. Features with higher impurity reduction are considered more important.
Mean Decrease Accuracy: It measures the drop in accuracy when a feature is randomly permuted, indicating the importance of the feature in maintaining the model's accuracy. Features with higher accuracy drop are considered more important.
Once the Decision Tree model is trained, the feature importance scores can be obtained. The importance scores can be normalized to ensure they sum up to 1 or scaled to a specific range for better interpretation.
Based on the feature importance scores, you can perform feature selection by choosing the top-ranked features. This helps in reducing the dimensionality of the data and selecting the most informative features for the predictive task.
Additionally, feature importance ranking can provide insights into the underlying relationships between features and the target variable. It helps in understanding which features have a significant impact on the predictions made by the model. This information can be valuable for further analysis, feature engineering, or model explanation.
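A hedged sketch of importance-based feature selection with scikit-learn's SelectFromModel; the "median" threshold is one common but arbitrary choice:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectFromModel
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()

# Keep only features whose importance is at or above the median importance
selector = SelectFromModel(DecisionTreeClassifier(random_state=0), threshold="median")
X_selected = selector.fit_transform(iris.data, iris.target)

kept = [name for name, keep in zip(iris.feature_names, selector.get_support()) if keep]
print("Selected features:", kept)
print("Reduced shape:", X_selected.shape)
```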
Python Application

Using Sklearn

```python
# Import necessary libraries
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Load the IRIS dataset
iris = load_iris()

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.2, random_state=42)

# Create a Decision Tree classifier
clf = DecisionTreeClassifier()

# Train the classifier on the training data
clf.fit(X_train, y_train)

# Make predictions on the testing data
y_pred = clf.predict(X_test)

# Evaluate the accuracy of the classifier
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
```
In this code snippet, we first import the necessary libraries: load_iris from sklearn.datasets to load the IRIS dataset, train_test_split from sklearn.model_selection to split the dataset into training and testing sets, DecisionTreeClassifier from sklearn.tree to create a Decision Tree classifier, and accuracy_score from sklearn.metrics to evaluate the accuracy of the classifier.

Next, we load the IRIS dataset and split it into training and testing sets using the train_test_split function. Then, we create an instance of the Decision Tree classifier and train it on the training data using the fit method.

After training, we use the trained classifier to make predictions on the testing data with the predict method. Finally, we evaluate the accuracy of the predictions by comparing them to the true labels and print the accuracy score.

Make sure to have scikit-learn installed (pip install scikit-learn) before running the code.
DecisionTreeClassifier()
DecisionTreeClassifier is a class in scikit-learn that implements the Decision Tree algorithm for classification tasks. It is a versatile and widely used machine learning algorithm known for its simplicity and interpretability.

Here is a detailed explanation of the DecisionTreeClassifier class and its important parameters (a short usage sketch follows the list):
```python
DecisionTreeClassifier(criterion='gini', splitter='best', max_depth=None,
                       min_samples_split=2, min_samples_leaf=1,
                       min_weight_fraction_leaf=0.0, max_features=None,
                       random_state=None, max_leaf_nodes=None,
                       min_impurity_decrease=0.0, min_impurity_split=None,
                       class_weight=None, presort='deprecated', ccp_alpha=0.0)
```
criterion (default: 'gini'): The function to measure the quality of a split. It can be either 'gini' for the Gini impurity or 'entropy' for information gain. Gini impurity is the default as it tends to be faster to compute and is commonly used.
splitter (default: 'best'): The strategy used to choose the split at each node. It can be either 'best' to choose the best split based on the criterion or 'random' to choose the best random split.
max_depth (default: None): The maximum depth of the tree. If None, the tree is grown until all leaves are pure or until all leaves contain fewer than min_samples_split samples.
min_samples_split (default: 2): The minimum number of samples required to split an internal node. A split is not allowed if the number of samples at a node is less than this value.
min_samples_leaf (default: 1): The minimum number of samples required to be at a leaf node. A split is not allowed if it would result in a leaf node with fewer samples than this value.
min_weight_fraction_leaf (default: 0.0): The minimum weighted fraction of the sum total of sample weights required to be at a leaf node.
max_features (default: None): The number of features to consider when looking for the best split. If None, all features are considered. It can be an integer (a number of features), a float (a fraction of features), or 'sqrt'/'log2'.
random_state (default: None): Controls the randomness of the estimator, for example the permutation of the features considered at each split when max_features < n_features. Setting it ensures reproducibility of the results.
max_leaf_nodes (default: None): The maximum number of leaf nodes in the tree. If None, an unlimited number of leaf nodes is allowed.
min_impurity_decrease (default: 0.0): A node will be split if the split induces a decrease of the impurity greater than or equal to this value.
min_impurity_split (default: None): Deprecated in favor of min_impurity_decrease and removed in recent scikit-learn versions.
class_weight (default: None): Weights associated with classes. It can be a dictionary of the form {class_label: weight} or 'balanced' to automatically adjust weights inversely proportional to the class frequencies.
presort (default: 'deprecated'): Deprecated and removed in recent scikit-learn versions.
ccp_alpha (default: 0.0): Complexity parameter used for Minimal Cost-Complexity Pruning. It limits the size of the tree by controlling the trade-off between complexity and accuracy.
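As a brief usage sketch, a configuration combining several of these parameters might look like the following; the specific values are illustrative, not recommendations:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Illustrative settings: entropy criterion, limited depth and leaf size,
# balanced class weights, and a small cost-complexity pruning penalty
clf = DecisionTreeClassifier(criterion="entropy", max_depth=4, min_samples_leaf=5,
                             class_weight="balanced", ccp_alpha=0.01, random_state=0)
clf.fit(X, y)
print("Depth:", clf.get_depth(), "Leaves:", clf.get_n_leaves())
```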
From Scratch

Below is a simple from-scratch implementation of a Decision Tree classifier that uses entropy and information gain to choose splits:

```python
import numpy as np

class DecisionTree:
    def __init__(self):
        self.tree = None  # root node of the fitted tree

    def fit(self, X, y):
        self.tree = self._grow_tree(X, y)  # build the tree recursively

    def _grow_tree(self, X, y):
        # If all samples belong to one class, create a leaf node with that class
        if len(np.unique(y)) == 1:
            return np.unique(y)[0]

        # Find the best feature and threshold to split on
        feature_index, threshold = self._find_best_split(X, y)
        # If no useful split exists, create a leaf with the majority class
        if feature_index is None or threshold is None:
            return np.bincount(y).argmax()

        # Partition the data into left and right subsets
        left_indices = X[:, feature_index] <= threshold
        right_indices = X[:, feature_index] > threshold
        X_left, y_left = X[left_indices], y[left_indices]
        X_right, y_right = X[right_indices], y[right_indices]

        # Recursively grow the left and right subtrees
        left_subtree = self._grow_tree(X_left, y_left)
        right_subtree = self._grow_tree(X_right, y_right)

        # Return the split feature, threshold, and references to both subtrees
        return (feature_index, threshold, left_subtree, right_subtree)

    def _find_best_split(self, X, y):
        best_gain = 0              # best information gain found so far
        best_feature_index = None  # feature index of the best split
        best_threshold = None      # threshold of the best split

        parent_entropy = self._calculate_entropy(y)
        n_features = X.shape[1]
        for feature_index in range(n_features):
            unique_values = np.unique(X[:, feature_index])
            # Candidate thresholds: midpoints between consecutive unique values
            thresholds = (unique_values[:-1] + unique_values[1:]) / 2

            for threshold in thresholds:
                left_indices = X[:, feature_index] <= threshold
                right_indices = X[:, feature_index] > threshold

                n_left, n_right = np.sum(left_indices), np.sum(right_indices)
                if n_left == 0 or n_right == 0:
                    continue

                # Weighted average entropy of the two child subsets
                left_entropy = self._calculate_entropy(y[left_indices])
                right_entropy = self._calculate_entropy(y[right_indices])
                weighted_avg_entropy = (left_entropy * n_left + right_entropy * n_right) / len(y)

                # Information gain = parent entropy - weighted child entropy
                information_gain = parent_entropy - weighted_avg_entropy

                # Keep the split with the highest information gain
                if information_gain > best_gain:
                    best_gain = information_gain
                    best_feature_index = feature_index
                    best_threshold = threshold

        return best_feature_index, best_threshold

    def _calculate_entropy(self, y):
        unique_classes, class_counts = np.unique(y, return_counts=True)  # count each class
        probabilities = class_counts / len(y)  # class probabilities
        entropy = -np.sum(probabilities * np.log2(probabilities + 1e-10))  # Shannon entropy
        return entropy

    def predict(self, X):
        # Traverse the tree for every test sample
        return np.array([self._traverse_tree(x, self.tree) for x in X])

    def _traverse_tree(self, x, node):
        # Leaf nodes are stored as class labels rather than tuples
        if not isinstance(node, tuple):
            return node

        feature_index, threshold, left_subtree, right_subtree = node
        # Follow the left or right branch depending on the split condition
        if x[feature_index] <= threshold:
            return self._traverse_tree(x, left_subtree)
        else:
            return self._traverse_tree(x, right_subtree)
```
We can train and evaluate this from-scratch implementation on the IRIS dataset in the same way as the scikit-learn version:

```python
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import load_iris
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Load the IRIS dataset
iris = load_iris()
X, y = iris.data, iris.target

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create an instance of the DecisionTree classifier
clf = DecisionTree()

# Fit the classifier to the training data
clf.fit(X_train, y_train)

# Make predictions on the testing data
y_pred = clf.predict(X_test)

# Evaluate the accuracy of the classifier
accuracy = np.mean(y_pred == y_test)
print("Accuracy:", accuracy)

# Compute the confusion matrix
cm = confusion_matrix(y_test, y_pred)

# Plot the confusion matrix as a heatmap
sns.heatmap(cm, annot=True, cmap='Blues', fmt='d',
            xticklabels=iris.target_names, yticklabels=iris.target_names)

# Add labels and title
plt.xlabel('Predicted')
plt.ylabel('True')
plt.title('Confusion Matrix')

# Display the heatmap
plt.show()
```