Separating Planes In SVM

Support Vector Machine (SVM) is a supervised learning algorithm widely used in tasks such as handwriting recognition and sentiment analysis. To separate different classes, SVM finds the optimal hyperplane − the one that maximises the margin between the two classes.

Here are a few steps that help SVM find a good hyperplane −

  • Data Preprocessing − SVM is sensitive to the scale of its input features, so the data should be normalised, scaled and centred before training.

  • Choose a Kernel − A kernel function transforms the input into a higher-dimensional space. Common choices include the linear, polynomial and radial basis function (RBF) kernels.
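
In scikit-learn, both steps can be combined in a pipeline so the same scaling is applied to the training and test data. Here is a minimal sketch (the kernel and split parameters are illustrative, not tuned) −

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Load a small dataset for illustration
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=10)

# Chain scaling and the classifier so the scaler fitted on the
# training data is reused on the test data
model = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))
```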

Let us consider two cases in which SVM hyperplanes differ −

  • Linearly separable case − Classes can be separated by a straight line.

  • Nonlinearly separable case − Classes require complex boundaries.

Linearly Separable Case

For the linearly separable case, let us consider the iris dataset restricted to its first two features, so the data is 2-dimensional. A linearly separable case is one in which the classes can be separated by a linear hyperplane. The iris dataset is a good beginner-friendly way to depict such hyperplanes.

Algorithm

  • Import all the libraries

  • Load the Iris dataset and assign the data and target features to variables x and y respectively

  • Split data using train_test_split function

  • Build the SVM model with a linear kernel and fit the model

  • Predict the labels and calculate accuracy

  • Extract hyperplane parameters and plot the decision boundary

from sklearn import datasets
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split as tts
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt
import numpy as np

# Load iris dataset
iris = datasets.load_iris()
# Use only first 2 features for 2D visualization
x = iris.data[:, :2]
y = iris.target

# Split into train and test sets
x_train, x_test, y_train, y_test = tts(x, y, test_size=0.3, random_state=10)

# Build SVM model with linear kernel
clf = SVC(kernel='linear')
clf.fit(x_train, y_train)

# Predict and calculate accuracy
y_pred = clf.predict(x_test)
acc = accuracy_score(y_test, y_pred)
print("Accuracy:", acc)

# Get hyperplane parameters (for multiclass data, SVC trains one
# boundary per class pair; we plot the first one)
w = clf.coef_[0]
b = clf.intercept_[0]

# From w[0]*x + w[1]*y + b = 0, solve for y to plot the line
slope = -w[0] / w[1]
y_int = -b / w[1]

# Plot dataset and hyperplane
plt.scatter(x[:, 0], x[:, 1], c=y, cmap='viridis')
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = y_int + slope * x_vals
plt.plot(x_vals, y_vals, '--', color='red', linewidth=2, label='Decision Boundary')
plt.xlabel('Sepal Length')
plt.ylabel('Sepal Width')
plt.title('Linear SVM Decision Boundary')
plt.legend()
plt.show()

The output shows the accuracy along with the visualization −

Accuracy: 0.8222222222222222
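
A fitted linear SVM also exposes its weight vector and support vectors, so the margin it optimises can be inspected directly. The sketch below repeats the fit on the same two iris features and computes the margin width 2/||w|| for the first pairwise boundary −

```python
import numpy as np
from sklearn import datasets
from sklearn.svm import SVC

iris = datasets.load_iris()
x = iris.data[:, :2]
y = iris.target

clf = SVC(kernel='linear')
clf.fit(x, y)

# One weight vector per pairwise (one-vs-one) boundary; take the first
w = clf.coef_[0]

# For a linear SVM, the margin width is 2 / ||w||
margin = 2 / np.linalg.norm(w)
print("Margin width:", margin)
print("Support vectors per class:", clf.n_support_)
```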

Non-Linearly Separable Case

Consider an example where the classes are not linearly separable. In this case, we use the make_moons dataset available in scikit-learn, which generates crescent-shaped data that cannot be separated by a straight line.

Visualizing the Dataset

import matplotlib.pyplot as plt
from sklearn.datasets import make_moons

# Generate make_moons dataset with 100 samples
X, y = make_moons(n_samples=100, noise=0.15, random_state=42)

# Plot the dataset
plt.scatter(X[:, 0], X[:, 1], c=y, cmap='RdBu')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.title('Make_moons Dataset')
plt.show()

Creating a Non-Linear Decision Boundary

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Generate dataset
X, y = make_moons(n_samples=100, noise=0.15, random_state=42)

# Create SVM classifier with RBF kernel
clf = SVC(kernel='rbf', gamma=2)
clf.fit(X, y)

# Create meshgrid for decision boundary
x_min, x_max = X[:, 0].min() - 0.1, X[:, 0].max() + 0.1
y_min, y_max = X[:, 1].min() - 0.1, X[:, 1].max() + 0.1
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 500), 
                     np.linspace(y_min, y_max, 500))

# Calculate decision function values
z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
z = z.reshape(xx.shape)

# Create contour plot of decision boundary
plt.contourf(xx, yy, z, levels=50, alpha=0.8, cmap='RdBu')
plt.scatter(X[:, 0], X[:, 1], c=y, cmap='RdBu', edgecolors='black')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.title('SVM with RBF Kernel Decision Boundary')
plt.colorbar(label='Decision Function')
plt.show()

Comparison

Aspect              Linear Kernel             RBF Kernel
Best For            Linearly separable data   Nonlinear patterns
Complexity          Low                       Higher
Speed               Faster                    Slower
Overfitting Risk    Lower                     Higher (with high gamma)
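
To make the comparison concrete, one rough sketch (default parameters, illustrative split) is to fit both kernels on the make_moons data and compare test accuracy −

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Crescent-shaped, nonlinearly separable data
X, y = make_moons(n_samples=200, noise=0.15, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Fit one SVM per kernel and record test accuracy
scores = {}
for kernel in ('linear', 'rbf'):
    clf = SVC(kernel=kernel)
    clf.fit(X_train, y_train)
    scores[kernel] = clf.score(X_test, y_test)

print(scores)
```

On this data the RBF kernel is expected to score higher, since a straight line cannot follow the crescent shapes.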

Key Parameters

  • Kernel − Determines the type of decision boundary (linear, polynomial, RBF)

  • Gamma − Controls the influence of individual training examples (RBF kernel)

  • C − Regularization parameter controlling the trade-off between a smooth decision boundary and classifying training points correctly
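
These parameters are usually tuned together rather than set by hand. A minimal sketch using GridSearchCV over an illustrative (not exhaustive) grid of C and gamma values on the make_moons data −

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.15, random_state=42)

# Candidate values are illustrative; real grids are usually wider
param_grid = {'C': [0.1, 1, 10], 'gamma': [0.1, 1, 10]}

# 5-fold cross-validated search over all C/gamma combinations
search = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=5)
search.fit(X, y)

print("Best parameters:", search.best_params_)
print("Best CV accuracy:", round(search.best_score_, 3))
```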

Conclusion

SVM separates classes using optimal hyperplanes. Linear kernels work for linearly separable data, while RBF kernels handle complex nonlinear patterns. Choose the kernel based on your data's separability and complexity requirements.

Updated on: 2026-03-27T11:23:40+05:30
