What insights or implications can be inferred from these images regarding the observed computational behavior?
For a specific binary classification problem, I generated decision boundary plots for a Random Forest model on the full dataset of 34 input features plus a target column (7,428 records). I then applied the "Information Gain" feature selection technique, which produced a reduced dataset of 25 input features plus the target column, still with 7,428 records. When I fit the same Random Forest model, with identical hyperparameter values tuned via GridSearchCV, to both datasets, the reduced dataset unexpectedly took longer to process despite having fewer features.
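For context, the comparison itself was set up roughly as in the sketch below. This is only a minimal illustration: it assumes scikit-learn's mutual_info_classif (via SelectKBest) as the "Information Gain" criterion and uses a placeholder parameter grid rather than my exact GridSearchCV settings:

import time
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import GridSearchCV

df = pd.read_excel("curated_data.xlsx")
X = df.drop(columns=['Pathogen Test Result'])
y = df['Pathogen Test Result']

# Reduced dataset: keep the 25 features with the highest information gain (mutual information)
selector = SelectKBest(score_func=mutual_info_classif, k=25)
X_reduced = pd.DataFrame(selector.fit_transform(X, y), columns=X.columns[selector.get_support()])

# Hypothetical parameter grid for illustration only; substitute the grid actually tuned with GridSearchCV
param_grid = {'n_estimators': [200], 'max_features': ['log2']}

# Time the grid search on the full and the reduced feature sets
for name, features in [("full (34 features)", X), ("reduced (25 features)", X_reduced)]:
    grid = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=5)
    start = time.time()
    grid.fit(features, y)
    print(f"{name}: fit time {time.time() - start:.1f} s")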
To get more clarity, I visualized the decision boundaries for both datasets; the attached images show these plots for a few of the columns. The remaining columns do not show complex boundaries.
Here is the sample plotting code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
# Load dataset
df = pd.read_excel("curated_data.xlsx")
# Split the data into features (X) and target variable (y)
X = df.drop(columns=['Pathogen Test Result'])
y = df['Pathogen Test Result']
# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Verify the number of features in X_train and X_test
print("Number of features in X_train:", X_train.shape[1])
print("Number of features in X_test:", X_test.shape[1])
# Train a Random Forest classifier
clf = RandomForestClassifier(n_estimators=200, max_depth=None, max_features='log2', min_samples_leaf=1, min_samples_split=2, random_state=42)
clf.fit(X_train, y_train)
# Plot decision boundaries
plt.figure(figsize=(8, 6))
# Define the mesh grid
x_min, x_max = X_train.iloc[:, 0].min() - 1, X_train.iloc[:, 0].max() + 1
y_min, y_max = X_train.iloc[:, 1].min() - 1, X_train.iloc[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1), np.arange(y_min, y_max, 0.1))
# Create a mesh grid with all features
mesh_data = np.column_stack((xx.ravel(), yy.ravel())) # Assuming only first two features are used for plotting
# Pad the mesh grid with zeros for the remaining features so it matches the number of features the classifier expects
# (using X_train.shape[1] so the same code works for both the full and the reduced feature set)
for i in range(2, X_train.shape[1]):
    mesh_data = np.column_stack((mesh_data, np.zeros_like(xx.ravel())))
# Predict on the mesh grid (wrapped in a DataFrame with the training column names to match the fitted classifier)
Z = clf.predict(pd.DataFrame(mesh_data, columns=X_train.columns))
Z = Z.reshape(xx.shape)
# Plot the decision boundaries
plt.contourf(xx, yy, Z, alpha=0.8)
# Overlay the training points for each feature against the target
for i in X_train.columns:
    sns.scatterplot(x=X_train[i], y=y_train, hue=y_train, palette='Set1', edgecolor='k', alpha=0.7)
plt.title('Decision Boundaries of Random Forest Classifier on Full Feature Set')
plt.xlabel(i)
plt.ylabel('Pathogen Test Result')
plt.legend(loc='best')
plt.show()