How can Tensorflow be used to add dense layers on top using Python?

TensorFlow's Keras Sequential API lets you add dense (fully connected) layers on top of convolutional layers for classification tasks. Dense layers require 1D input, so the 3D convolutional output must be flattened before the fully connected layers are added.


Complete CNN Model with Dense Layers

Here's a complete example showing how to build a CNN model and add dense layers on top:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Create sequential model
model = keras.Sequential()

# Add convolutional layers
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))

print("Adding dense layer on top")
# Flatten 3D output to 1D
model.add(layers.Flatten())

# Add dense layers
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))  # 10 classes for CIFAR-10

print("Complete architecture of the model")
model.summary()

The output shows the complete model architecture:

Adding dense layer on top
Complete architecture of the model
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 30, 30, 32)       896       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 15, 15, 32)       0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 13, 13, 64)       18496     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 6, 6, 64)         0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 4, 4, 64)         36928     
_________________________________________________________________
flatten (Flatten)            (None, 1024)             0         
_________________________________________________________________
dense (Dense)                (None, 64)               65600     
_________________________________________________________________
dense_1 (Dense)              (None, 10)               650       
=================================================================
Total params: 122,570
Trainable params: 122,570
Non-trainable params: 0
_________________________________________________________________

How Dense Layers Work

The process involves these key steps:

[Diagram: Dense layer architecture — Conv2D 3D feature maps (4, 4, 64) → Flatten to 1D vector (1024,) → Dense hidden layer (64,) → Dense output classes (10,)]

Key Components

  • Flatten Layer – converts the 3D tensor (4, 4, 64) into a 1D vector of 1024 values
  • First Dense Layer – 64 neurons with ReLU activation for feature learning
  • Output Dense Layer – 10 neurons for CIFAR-10 classification (no activation, so the outputs are logits)
  • Parameter Count – the dense layers contribute most of the parameters (65,600 + 650)
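The parameter counts in the summary above can be verified by hand. Each dense layer has one weight per input–output pair plus one bias per output neuron; a quick sketch of the arithmetic:

```python
# Verify the dense-layer parameter counts from the model summary by hand.
# The last Conv2D output has shape (4, 4, 64), which Flatten turns into
# 4 * 4 * 64 = 1024 features.
flat_features = 4 * 4 * 64

# Dense(64): one weight per (input, output) pair, plus one bias per neuron.
dense_params = flat_features * 64 + 64   # 65,600

# Dense(10): same rule for the output layer.
output_params = 64 * 10 + 10             # 650

print(flat_features, dense_params, output_params)  # 1024 65600 650
```

These match the `dense` (65,600) and `dense_1` (650) rows in the summary, confirming that the dense layers dominate the parameter count.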

Why Flatten Before Dense Layers?

Dense layers expect 1D input vectors, but convolutional layers output 3D tensors. The Flatten layer reshapes the (4, 4, 64) tensor into a (1024,) vector without losing information, enabling the dense layers to process all spatial features.
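A minimal sketch of what Flatten does, using a NumPy reshape to mirror the layer's behavior (the array contents here are dummy values, not real feature maps):

```python
import numpy as np

# A dummy batch of one (4, 4, 64) feature map, shaped like the last Conv2D output.
feature_maps = np.arange(4 * 4 * 64, dtype=np.float32).reshape(1, 4, 4, 64)

# Flatten keeps the batch dimension and collapses the rest: (1, 4, 4, 64) -> (1, 1024)
flat = feature_maps.reshape(feature_maps.shape[0], -1)

print(flat.shape)  # (1, 1024)

# No information is lost: every value survives, only the shape changes.
assert flat.size == feature_maps.size
```

`layers.Flatten()` performs this same reshape inside the model, which is why the summary shows `flatten (Flatten)` with output shape `(None, 1024)` and zero parameters.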

Conclusion

Adding dense layers on top of convolutional layers requires flattening the 3D output first. Use multiple dense layers with appropriate activations for effective classification in CNN architectures.

Updated on: 2026-03-25T16:10:23+05:30
