Robust Regression for Machine Learning in Python

In machine learning, regression analysis is a crucial tool for predicting continuous numerical outcomes based on input variables. Traditional regression techniques assume that the data follows a normal distribution and lacks outliers. However, real-world datasets often deviate from these assumptions, resulting in unreliable predictions. To combat this challenge, robust regression methods have been developed to offer more accurate and dependable results, even in the presence of outliers. This article delves into robust regression and explores how to implement these techniques using Python, one of the most popular programming languages for machine learning. By understanding robust regression and its implementation in Python, you can enhance the reliability and performance of your machine learning models.

What is Robust Regression?

Robust regression is a variation of traditional regression analysis that is less sensitive to outliers in the data. Outliers are data points that deviate significantly from the majority of the data points, and they can have a substantial impact on the regression model's performance. Traditional regression methods, such as ordinary least squares (OLS), treat all data points equally, regardless of their distance from the central cluster. This makes them highly influenced by outliers, resulting in biased parameter estimates and poor predictive performance.
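How strongly a single outlier can pull an OLS fit is easy to demonstrate. The snippet below (a minimal illustration using NumPy's polyfit, which performs an ordinary least-squares fit) fits a line to clean data and again after corrupting one point:

```python
import numpy as np

# A perfectly linear relationship: y = 2x + 1
x = np.arange(10, dtype=float)
y = 2 * x + 1

slope_clean = np.polyfit(x, y, 1)[0]   # recovers the true slope, 2.0

# Corrupt a single observation
y_outlier = y.copy()
y_outlier[-1] = 100.0                  # the true value was 19

slope_outlier = np.polyfit(x, y_outlier, 1)[0]
print(slope_clean, slope_outlier)      # one bad point drags the slope far from 2
```

With just one corrupted point out of ten, the estimated slope more than triples, which is exactly the sensitivity that robust regression is designed to avoid.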

Robust regression techniques, on the other hand, aim to down-weight the impact of outliers by assigning lower weights to these data points during the model fitting process. By giving less weight to outliers, robust regression models can provide more accurate parameter estimates and better predictions.

The Importance of Robust Regression

Robust regression methods provide a solution to the challenges posed by outliers in traditional regression analysis. These methods adjust the model fitting process to down-weight the influence of outliers, thereby reducing their impact on the estimated regression coefficients. By giving less weight to outliers, robust regression models can provide more reliable parameter estimates and improve the overall predictive performance.

Robust regression methods achieve robustness by employing different weighting schemes or by using robust estimation techniques. Instead of minimizing the sum of squared residuals, robust regression focuses on minimizing other objective functions that are less sensitive to outliers. By doing so, these methods provide more accurate estimates of the underlying relationships between the predictor variables and the target variable.
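One widely used objective of this kind is the Huber loss (the basis of the Huber regression method discussed below): it is quadratic for small residuals and only linear for large ones, so extreme residuals contribute far less than they would under squared error. A minimal NumPy sketch:

```python
import numpy as np

def huber_loss(residuals, delta=1.0):
    """Quadratic for |r| <= delta, linear beyond it, so large residuals
    contribute far less than they would under squared error."""
    r = np.abs(residuals)
    return np.where(r <= delta,
                    0.5 * r ** 2,
                    delta * (r - 0.5 * delta))

r = np.array([0.5, 2.0, 10.0])
print(huber_loss(r))        # compare with squared error: 0.5 * r**2
```

For the residual 10.0, squared error would contribute 50.0 while the Huber loss contributes only 9.5, which is what makes the fitted coefficients much less sensitive to outliers.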

Types of Robust Regression Methods

Several robust regression methods have been developed over the years. Let's discuss a few commonly used ones:

  • Huber Regression

    Huber regression is a robust regression method that combines the advantages of both least squares regression and absolute deviation regression. It minimizes the sum of squared residuals for data points close to the regression line while minimizing the absolute residuals for data points that deviate significantly from the line. This way, it strikes a balance between the two and provides robust parameter estimates.

  • Theil-Sen Regression

    Theil-Sen regression is a non-parametric robust regression method that estimates the slope of the regression line by considering all possible pairs of points. It calculates the median of the slopes of the lines connecting each pair of points and provides a robust estimate of the overall slope. The Theil-Sen method is conceptually simple and provides robust estimates even when up to about 29% of the data points are outliers.

  • RANSAC (RANdom SAmple Consensus)

    RANSAC is an iterative robust regression method that randomly selects a subset of data points, fits a regression model to these points, and then calculates the number of inliers (data points that are consistent with the model) and outliers (data points that deviate from the model). It repeats this process for a certain number of iterations, selecting the model with the highest number of inliers as the final robust regression model.
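All three methods above are available in scikit-learn. A short comparison on synthetic data with outliers (illustrative parameter choices, not a tuned benchmark) shows how the robust estimators recover the true slope of 3 while OLS is pulled away from it:

```python
import numpy as np
from sklearn.linear_model import (LinearRegression, HuberRegressor,
                                  TheilSenRegressor, RANSACRegressor)

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3 * X.ravel() + 2 + rng.normal(scale=0.5, size=100)

# Corrupt the ten observations with the largest x values
outlier_idx = np.argsort(X.ravel())[-10:]
y[outlier_idx] -= 40

ols = LinearRegression().fit(X, y)
huber = HuberRegressor().fit(X, y)
theil_sen = TheilSenRegressor(random_state=0).fit(X, y)
ransac = RANSACRegressor(random_state=0).fit(X, y)

print("OLS slope:      ", ols.coef_[0])
print("Huber slope:    ", huber.coef_[0])
print("Theil-Sen slope:", theil_sen.coef_[0])
print("RANSAC slope:   ", ransac.estimator_.coef_[0])
```

Because the corrupted points sit at one end of the x range, OLS tilts the whole line toward them, while the robust estimators stay close to the true slope.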

Implementing Robust Regression in Python

Python offers several libraries that implement robust regression methods. A well-known choice is statsmodels, renowned for its extensive statistical modeling capabilities, including robust regression. To showcase the application of robust regression, let's explore an example using the Boston Housing dataset. (Note that load_boston was deprecated and then removed from scikit-learn in version 1.2, so running this example as written requires an older scikit-learn release.)

First, we import the required libraries:

import pandas as pd
import numpy as np
import statsmodels.api as sm

Next, we load the Boston Housing dataset:

from sklearn.datasets import load_boston
boston = load_boston()
df = pd.DataFrame(boston.data, columns=boston.feature_names)
df['MEDV'] = boston.target

Now, we can fit a robust regression model using the RLM (Robust Linear Model) class from statsmodels:

X = df.drop('MEDV', axis=1)
y = df['MEDV']

# The predictor variables should include a constant term.
X = sm.add_constant(X)

# Fit the robust regression model
robust_model = sm.RLM(y, X, M=sm.robust.norms.HuberT())
robust_results = robust_model.fit()

In the code above, we first separate the predictor variables (X) from the target variable (y). A constant column is then added to the predictors so that the regression equation includes an intercept term. Finally, the robust regression model is built with the RLM class using the HuberT norm, one of the most widely used robust estimators.

Once the model is fitted, we can obtain the parameter estimates and other statistical information:

print(robust_results.summary())
The summary() function provides a comprehensive summary of the model, including the parameter estimates, standard errors, t-values, and p-values. It also displays diagnostic information, such as the number of iterations performed during model fitting and the convergence status.

Benefits of Robust Regression

Robust regression techniques offer clear advantages over traditional regression methods when the data contains outliers or violates the assumption of normality. Key benefits include:

  • Increased robustness: Robust regression methods are designed to handle outliers and influential observations, providing more reliable estimates of the model parameters. This makes the models less sensitive to extreme observations and improves the overall predictive performance.

  • Better model interpretation: By down-weighting the influence of outliers, robust regression models provide parameter estimates that are more representative of the majority of the data. This enhances the interpretability of the models, as the estimated coefficients reflect the relationship between the predictor variables and the target variable in the absence of extreme observations.

  • Versatility: Robust regression techniques can be applied to a wide range of regression problems, including simple linear regression, multiple linear regression, and nonlinear regression. This makes them suitable for various applications in fields such as economics, finance, social sciences, and engineering.

  • Easy implementation: Robust regression methods can be implemented using readily available libraries in popular programming languages like Python. This simplifies the adoption and integration of robust regression into existing machine learning workflows.


Conclusion

Robust regression is a valuable technique for improving the reliability and accuracy of machine learning models when the data contains outliers or violates the assumptions of traditional regression methods. By down-weighting the influence of extreme observations, robust regression provides more robust parameter estimates and better predictive performance. Python, with libraries like statsmodels, offers convenient tools to implement robust regression models. By incorporating robust regression techniques into your machine learning workflows, you can build more reliable and accurate predictive models for a wide range of applications.

Updated on: 25-Jul-2023

