How can L1 Normalization be implemented using the scikit-learn library in Python?

Normalization is the process of converting a range of values into a standardized range, such as -1 to +1 or 0 to 1. Data can be normalized through simple operations such as subtraction and division.

Data fed to a learning algorithm as input should be consistent and structured. All features of the input data should be on a single scale for the model to predict values effectively. But real-world data is often unstructured and, most of the time, not on the same scale.

This is where normalization comes into the picture. It is one of the most important data-preparation processes: it changes the values in the columns of the input dataset so that they fall on the same scale, without distorting the differences between values within each sample.

Note − Not every input dataset fed to a machine learning algorithm has to be normalized. Normalization is required only when the features in a dataset have completely different scales of values.

Types of Normalization

There are different kinds of normalization −

  • Min-Max normalization
  • Z Normalization
  • Unit Vector Normalization (L1 and L2)
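Each of these can be sketched with scikit-learn's preprocessing module. Note that this small two-row matrix and the class choices (MinMaxScaler for min-max, StandardScaler for Z normalization) are illustrative, not part of the original example:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler, normalize

X = np.array([[1.0, -2.0],
              [3.0, 6.0]])

# Min-Max normalization: rescales each column to the [0, 1] range
print(MinMaxScaler().fit_transform(X))

# Z normalization (standardization): zero mean, unit variance per column
print(StandardScaler().fit_transform(X))

# Unit vector normalization: scales each row to unit L1 or L2 norm
print(normalize(X, norm='l1'))
print(normalize(X, norm='l2'))
```

Min-max and Z normalization work column by column, while unit vector (L1/L2) normalization works row by row; the rest of this article focuses on the L1 case.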

What is L1 Normalization?

L1 normalization, also known as Least Absolute Deviations, transforms the data so that the sum of the absolute values in every row equals 1. The formula for L1 normalization is:

normalized_value = value / sum(|all_values_in_row|)
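This formula can be checked by hand with plain NumPy, using the first row of the example dataset from the next section:

```python
import numpy as np

# First row of the example dataset used below
row = np.array([34.78, 31.9, -65.5])

# L1 normalization by hand: divide each value by the sum of absolute values
l1_norm = np.sum(np.abs(row))       # 34.78 + 31.9 + 65.5 = 132.18
normalized = row / l1_norm
print(normalized)                   # same values scikit-learn produces for this row
print(np.sum(np.abs(normalized)))   # 1.0
```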

Implementation using Scikit-learn

Let us see how L1 normalization can be implemented using scikit-learn in Python.

import numpy as np
from sklearn import preprocessing

input_data = np.array([
    [34.78, 31.9, -65.5],
    [-16.5, 2.45, -83.5],
    [0.5, -87.98, 45.62],
    [5.9, 2.38, -55.82]
])

data_normalized_l1 = preprocessing.normalize(input_data, norm='l1')
print("L1 normalized data is \n", data_normalized_l1)
Output

L1 normalized data is 
 [[ 0.26312604  0.24133757 -0.49553639]
 [-0.16105417  0.0239141  -0.81503172]
 [ 0.00372856 -0.65607755  0.34019389]
 [ 0.09204368  0.03712949 -0.87082683]]

Verifying L1 Normalization

Let's verify that the sum of absolute values in each row equals 1.

import numpy as np
from sklearn import preprocessing

input_data = np.array([
    [34.78, 31.9, -65.5],
    [-16.5, 2.45, -83.5],
    [0.5, -87.98, 45.62],
    [5.9, 2.38, -55.82]
])

data_normalized_l1 = preprocessing.normalize(input_data, norm='l1')

# Verify L1 normalization
for i, row in enumerate(data_normalized_l1):
    sum_absolute = np.sum(np.abs(row))
    print(f"Row {i+1} sum of absolute values: {sum_absolute:.6f}")
Output

Row 1 sum of absolute values: 1.000000
Row 2 sum of absolute values: 1.000000
Row 3 sum of absolute values: 1.000000
Row 4 sum of absolute values: 1.000000

Key Points

  • The preprocessing.normalize() function with norm='l1' performs L1 normalization

  • Each row is normalized independently

  • The sum of absolute values in each normalized row equals 1

  • L1 normalization preserves the relative proportions of features within each sample

  • Negative values are preserved but scaled proportionally
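The same row-wise transformation is also available as the Normalizer estimator, which can be used as a step inside a scikit-learn Pipeline. A minimal sketch using the first two rows of the example dataset:

```python
import numpy as np
from sklearn.preprocessing import Normalizer

input_data = np.array([
    [34.78, 31.9, -65.5],
    [-16.5, 2.45, -83.5]
])

# Normalizer is stateless: fit() learns nothing from the data, so it is
# mainly useful when L1 normalization must be one stage of a Pipeline
normalizer = Normalizer(norm='l1')
print(normalizer.fit_transform(input_data))
```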

Conclusion

L1 normalization using scikit-learn's preprocessing.normalize() ensures that each row's absolute values sum to 1, making it useful for scenarios where you want to preserve relative feature proportions while standardizing the scale.

Updated on: 2026-03-25T13:21:32+05:30
