How can Tensorflow be used with flower dataset to continue training the model?

To continue training a TensorFlow model on the flower dataset, we use the fit() method, which trains the model for a specified number of epochs. The flowers dataset contains a few thousand flower photos organized into 5 subdirectories, one for each class (daisy, dandelion, roses, sunflowers, tulips).

We are using Google Colaboratory to run the code. Google Colab provides free access to GPUs and requires zero configuration, making it ideal for machine learning projects.

Prerequisites

Before continuing training, ensure you have already loaded and preprocessed the flower dataset using tf.data.Dataset and created your model architecture. The following assumes you have train_ds, val_ds, and model already prepared.
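If you do not yet have these three objects, the sketch below builds stand-in versions of them so the later steps can be followed end to end. The tiny random dataset and the small CNN here are illustrative assumptions, not the official flowers pipeline; in practice train_ds and val_ds would come from tf.keras.utils.image_dataset_from_directory pointed at the extracted flower_photos folder.

```python
import tensorflow as tf

# Stand-in datasets with the same shape as the flowers data: batches of
# 180x180 RGB images and integer labels for 5 classes. In practice these
# would be built with tf.keras.utils.image_dataset_from_directory on the
# extracted flower_photos directory.
def make_ds(num_examples):
    images = tf.random.uniform((num_examples, 180, 180, 3))
    labels = tf.random.uniform((num_examples,), maxval=5, dtype=tf.int32)
    return tf.data.Dataset.from_tensor_slices((images, labels)).batch(32)

train_ds = make_ds(64)
val_ds = make_ds(32)

# A deliberately small CNN with one output unit per flower class.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(180, 180, 3)),
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
```

With these in place, the fit() call in the next section runs unchanged, only on random data instead of real flower images.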

Continuing Model Training

Use the fit() method to continue training your model with the prepared datasets −

import tensorflow as tf

print("The data is fit to the model")
# Continue training for 3 more epochs on the prepared datasets.
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=3
)

The output shows the training progress for each epoch −

The data is fit to the model
Epoch 1/3
92/92 [==============================] - 102s 1s/step - loss: 0.7615 - accuracy: 0.7146 - val_loss: 0.7673 - val_accuracy: 0.7180
Epoch 2/3
92/92 [==============================] - 95s 1s/step - loss: 0.5864 - accuracy: 0.7786 - val_loss: 0.6814 - val_accuracy: 0.7629
Epoch 3/3
92/92 [==============================] - 95s 1s/step - loss: 0.4180 - accuracy: 0.8478 - val_loss: 0.7040 - val_accuracy: 0.7575
<tensorflow.python.keras.callbacks.History at 0x7fda872ea940>

Understanding the Training Output

The training output provides key metrics for each epoch:

  • loss − Training loss decreases from 0.7615 to 0.4180
  • accuracy − Training accuracy improves from 71.46% to 84.78%
  • val_loss − Validation loss fluctuates around 0.7
  • val_accuracy − Validation accuracy reaches 75.75%
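Beyond reading the console log, the return value of fit() is a History object whose history attribute maps each metric name to a per-epoch list, so these numbers can also be inspected programmatically. A minimal sketch, using a tiny stand-in model and random data (assumptions for brevity) rather than the flowers pipeline:

```python
import tensorflow as tf

# Tiny stand-in model and data so fit() completes in a moment.
features = tf.random.uniform((32, 4))
labels = tf.random.uniform((32,), maxval=2, dtype=tf.int32)
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(2)])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

history = model.fit(features, labels, validation_data=(features, labels),
                    epochs=3, verbose=0)

# history.history maps each metric name to one value per epoch.
print(sorted(history.history))          # ['accuracy', 'loss', 'val_accuracy', 'val_loss']
print(len(history.history["loss"]))     # 3 (one entry per epoch)
```

These per-epoch lists are what you would plot to visualize the loss and accuracy curves described above.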

Key Parameters

Parameter         Description                      Purpose
train_ds          Training dataset                 Data used to update the model weights
validation_data   Validation dataset               Data used to evaluate performance after each epoch
epochs            Number of full training passes   Controls how many times the model sees the entire dataset
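When resuming an earlier run, fit() also accepts an initial_epoch argument so the epoch counter picks up where the previous call stopped. A hedged sketch with a tiny stand-in model and random data (assumptions; the real train_ds and val_ds would be passed instead):

```python
import tensorflow as tf

# Tiny stand-in model and data so both fit() calls run quickly.
features = tf.random.uniform((32, 4))
labels = tf.random.uniform((32,), maxval=2, dtype=tf.int32)
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(2)])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# First run: epochs 1-3.
model.fit(features, labels, epochs=3, verbose=0)

# Resume: initial_epoch=3 with epochs=6 trains 3 more epochs, and the
# progress lines read "Epoch 4/6" through "Epoch 6/6".
history = model.fit(features, labels, initial_epoch=3, epochs=6, verbose=0)
print(history.epoch)  # [3, 4, 5]
```

Note that epochs is the index of the final epoch, not the number of additional epochs, which is why the resumed call passes epochs=6 rather than 3.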

Training Considerations

When continuing model training, consider these important factors:

  • Overfitting − Monitor if validation loss increases while training loss decreases
  • Learning Rate − May need adjustment for continued training
  • Early Stopping − Use callbacks to prevent overtraining
  • Checkpoints − Save model weights periodically during training
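The last two points are usually handled with Keras callbacks passed to fit(). The sketch below wires up EarlyStopping and ModelCheckpoint on a tiny stand-in model (the model, data, and the checkpoint filename best.weights.h5 are illustrative assumptions):

```python
import tensorflow as tf

# Tiny stand-in model and data so the run finishes quickly.
features = tf.random.uniform((32, 4))
labels = tf.random.uniform((32,), maxval=2, dtype=tf.int32)
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(2)])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

callbacks = [
    # Stop once val_loss has failed to improve for 3 consecutive epochs,
    # and roll the model back to its best weights.
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                     restore_best_weights=True),
    # Keep a checkpoint of the best weights seen so far on disk.
    tf.keras.callbacks.ModelCheckpoint("best.weights.h5",
                                       monitor="val_loss",
                                       save_best_only=True,
                                       save_weights_only=True),
]

history = model.fit(features, labels, validation_data=(features, labels),
                    epochs=30, callbacks=callbacks, verbose=0)
print(len(history.history["loss"]))  # at most 30; fewer if stopped early
```

The saved weights can later be restored with model.load_weights() before calling fit() again, which is the usual way to continue training across sessions.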

Conclusion

The fit() method successfully continues training the model on the flower dataset, improving training accuracy from about 71% to 85% over 3 epochs. Monitor both training and validation metrics to ensure the model keeps learning effectively without overfitting.

Updated on: 2026-03-25T16:02:49+05:30
