How can transfer learning be implemented in Python using Keras?

Transfer learning is a powerful technique where a pre-trained model is adapted for a new but related task. In Keras, this involves loading a pre-trained model, freezing some layers, and fine-tuning others on your specific dataset.

Keras is a high-level deep learning API written in Python that runs on top of TensorFlow. It provides essential abstractions and building blocks for developing machine learning solutions quickly and efficiently.

What is Transfer Learning?

Transfer learning involves taking a model trained on one task and adapting it for a related task. Instead of training from scratch, you leverage pre-learned features, which saves time and computational resources.

Basic Transfer Learning Implementation

Here's how to implement transfer learning by freezing layers and training only the final layer:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np

# Create sample data
X_train = np.random.random((100, 784))
y_train = np.random.randint(0, 10, (100,))

# Build a sequential model
model = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(32, activation='relu', name='dense1'),
    layers.Dense(32, activation='relu', name='dense2'),
    layers.Dense(32, activation='relu', name='dense3'),
    layers.Dense(10, activation='softmax', name='output')
])

print("Model created successfully")
print(f"Total layers: {len(model.layers)}")
Output:

Model created successfully
Total layers: 4

Freezing Layers for Transfer Learning

To implement transfer learning, freeze all layers except the last one:

# Freeze all layers except the last one
print("Freezing layers for transfer learning...")
for layer in model.layers[:-1]:
    layer.trainable = False
    print(f"Frozen layer: {layer.name}")

# Keep the last layer trainable
model.layers[-1].trainable = True
print(f"Trainable layer: {model.layers[-1].name}")

# Check trainable parameters
trainable_count = sum(layer.count_params() for layer in model.layers if layer.trainable)
print(f"Trainable parameters: {trainable_count}")
Output:

Freezing layers for transfer learning...
Frozen layer: dense1
Frozen layer: dense2
Frozen layer: dense3
Trainable layer: output
Trainable parameters: 330
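With the earlier layers frozen, training proceeds exactly as usual; gradient updates simply skip the frozen weights, so only the output layer's 330 parameters change. A minimal self-contained sketch (rebuilding the sample model and random data from above, so the loss values themselves are not meaningful):

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Sample data and model as defined earlier
X_train = np.random.random((100, 784))
y_train = np.random.randint(0, 10, (100,))

model = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(32, activation='relu', name='dense1'),
    layers.Dense(32, activation='relu', name='dense2'),
    layers.Dense(32, activation='relu', name='dense3'),
    layers.Dense(10, activation='softmax', name='output')
])

# Freeze everything except the output layer
for layer in model.layers[:-1]:
    layer.trainable = False

# Compile and train -- only the output layer's weights are updated
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=2, batch_size=32, verbose=0)
print(f"Final training loss: {history.history['loss'][-1]:.4f}")
```

Because the frozen layers contribute no trainable variables, each training step is also cheaper than full training.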

Complete Transfer Learning Example

Here's a complete example showing the transfer learning workflow:

import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, Model

# Load pre-trained VGG16 model (without top classification layer)
base_model = VGG16(weights='imagenet', 
                   include_top=False, 
                   input_shape=(224, 224, 3))

# Freeze the base model
base_model.trainable = False

# Add custom classification layers
model = tf.keras.Sequential([
    base_model,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.2),
    layers.Dense(10, activation='softmax')  # 10 classes
])

# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

print("Transfer learning model ready")
print(f"Trainable parameters: {sum(tf.keras.backend.count_params(w) for w in model.trainable_weights)}")

Output:

Transfer learning model ready
Trainable parameters: 66954
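Before calling fit on a model built around VGG16, input images should be preprocessed the same way VGG16 was trained. Keras provides a matching helper, `tf.keras.applications.vgg16.preprocess_input`; the random batch below is a hypothetical stand-in for real image data:

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import preprocess_input

# A batch of 4 dummy RGB images standing in for a real dataset (assumption)
images = np.random.randint(0, 256, (4, 224, 224, 3)).astype('float32')

# VGG16 expects BGR channel order with the ImageNet channel means
# subtracted; preprocess_input applies exactly that transformation
batch = preprocess_input(images)
print(batch.shape)  # (4, 224, 224, 3)
```

The preprocessed batch can then be passed to `model.fit` along with integer labels, matching the sparse categorical cross-entropy loss used above.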

Key Steps in Transfer Learning

Step                   | Description                       | Purpose
Load Pre-trained Model | Load a model with learned weights | Leverage existing knowledge
Freeze Layers          | Set trainable = False             | Preserve pre-learned features
Add Custom Layers      | Add task-specific layers          | Adapt to the new problem
Fine-tune              | Train on the new dataset          | Learn task-specific patterns

Fine-tuning Strategy

For better results, you can unfreeze some top layers after initial training:

# After initial training, unfreeze the base model for fine-tuning
base_model.trainable = True

# VGG16 without its top has 19 layers; fine-tune only the last block
fine_tune_at = 15

# Freeze all layers before fine_tune_at
for layer in base_model.layers[:fine_tune_at]:
    layer.trainable = False

# Use a much lower learning rate for fine-tuning
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

print(f"Fine-tuning from layer {fine_tune_at}")
print(f"Trainable layers: {len([l for l in base_model.layers if l.trainable])}")

Output:

Fine-tuning from layer 15
Trainable layers: 4
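The full two-phase workflow (train the new head with the base frozen, then unfreeze the top of the base and continue at a lower learning rate) can be sketched end to end. To keep the sketch runnable without downloading ImageNet weights, a tiny randomly initialized convolutional network stands in for VGG16 here; in practice you would use the pre-trained base from above:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Tiny stand-in for a pre-trained base (assumption: VGG16 in practice)
base_model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),
    layers.Conv2D(8, 3, activation='relu'),
    layers.Conv2D(8, 3, activation='relu'),
    layers.GlobalAveragePooling2D(),
], name='base')

model = keras.Sequential([
    base_model,
    layers.Dense(16, activation='relu'),
    layers.Dense(10, activation='softmax'),
])

# Dummy data standing in for a real image dataset
X = np.random.random((32, 32, 32, 3)).astype('float32')
y = np.random.randint(0, 10, (32,))

# Phase 1: freeze the base and train only the new head
base_model.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss='sparse_categorical_crossentropy')
model.fit(X, y, epochs=1, verbose=0)

# Phase 2: unfreeze the top of the base, keep the earliest layer frozen,
# and recompile with a lower learning rate before continuing training
base_model.trainable = True
for layer in base_model.layers[:1]:
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss='sparse_categorical_crossentropy')
model.fit(X, y, epochs=1, verbose=0)
print("Two-phase fine-tuning complete")
```

Note that the model must be recompiled after changing any layer's trainable flag, otherwise the optimizer keeps updating the old set of variables.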

Conclusion

Transfer learning in Keras involves loading pre-trained models, freezing layers to preserve learned features, and adding custom layers for your specific task. This approach significantly reduces training time and often improves performance on smaller datasets.

Updated on: 2026-03-25T14:45:46+05:30
