How can TensorFlow be used to predict values on new data with a model trained on augmented data?

Once training is done, the model can be used to make predictions on new data, including augmented data, via the ‘predict’ method. The image to be classified is first loaded into the environment. It is then pre-processed by converting it from an image to an array. Finally, the ‘predict’ method is called on this array.


We will use the Keras Sequential API, which builds a model as a plain stack of layers, where every layer has exactly one input tensor and one output tensor.

A neural network that contains at least one convolutional layer is known as a convolutional neural network (CNN). We can use a convolutional neural network to build the learning model.
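As a sketch of such a model, a small Sequential CNN might look like the following (the 180×180×3 input size and the five output classes are assumptions based on the flower dataset described in this article):

```python
import tensorflow as tf
from tensorflow.keras import layers

num_classes = 5  # assumption: one output unit per flower class

# A plain stack of layers: every layer has exactly one input and one output tensor.
model = tf.keras.Sequential([
    layers.Conv2D(16, 3, padding='same', activation='relu',
                  input_shape=(180, 180, 3)),   # convolutional layer
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(num_classes)                   # one logit per class
])

model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])
```

The final Dense layer emits raw logits, which is why the softmax is applied later, at prediction time.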

An image classifier is created using a keras.Sequential model, and data is loaded using preprocessing.image_dataset_from_directory, which reads images efficiently off disk. Overfitting is identified, and techniques such as data augmentation and dropout are applied to mitigate it. The dataset contains about 3,700 flower images, organised into 5 sub-directories, one per class. They are:

daisy, dandelion, roses, sunflowers, and tulips.
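A sketch of that loading step, using image_dataset_from_directory on a stand-in directory tree (the tiny randomly generated images below only mimic the real ~3,700 flower photos; the paths, image size, and 80/20 split are assumptions):

```python
import os
import tempfile

import numpy as np
import tensorflow as tf
from PIL import Image

# Build a tiny stand-in dataset on disk: one sub-directory per class.
data_dir = os.path.join(tempfile.mkdtemp(), "flower_photos")
for name in ["daisy", "dandelion", "roses", "sunflowers", "tulips"]:
    os.makedirs(os.path.join(data_dir, name))
    for i in range(4):  # a few random stand-in images per class
        pixels = np.random.randint(0, 255, (180, 180, 3), dtype=np.uint8)
        Image.fromarray(pixels).save(os.path.join(data_dir, name, f"{i}.jpg"))

img_height, img_width, batch_size = 180, 180, 32

# Labels are inferred from the sub-directory names.
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,   # hold out 20 percent for validation
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

class_names = train_ds.class_names  # alphabetical order of sub-directories
print(class_names)
```

Note that class_names follows the alphabetical order of the sub-directories, which is why it matches the list above.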

We are using Google Colaboratory to run the code below. Google Colab, or Colaboratory, helps run Python code in the browser, requires zero configuration, and provides free access to GPUs (Graphics Processing Units). Colaboratory is built on top of Jupyter Notebook.

When the number of training examples is small, the model learns from noise or unwanted details in the training examples. This negatively impacts the performance of the model on new examples.

Due to overfitting, the model will not be able to generalize well to a new dataset. There are many ways in which overfitting can be avoided; one of them is dropout. Introducing dropout into the network is considered a form of regularization, and it helps the model generalize better.

When dropout is applied to a layer, it randomly drops out (sets to zero) a number of the layer's output units during training. The dropout technique takes a fractional rate as its input value (like 0.1, 0.2, 0.4, and so on); a rate of 0.1 or 0.2 means that 10 percent or 20 percent of the output units of the applied layer are randomly dropped.
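A small sketch of these mechanics (the rate of 0.2 and the all-ones input are illustrative):

```python
import numpy as np
import tensorflow as tf

# During training a Dropout layer with rate 0.2 sets roughly 20 percent of the
# incoming units to 0 and scales the survivors by 1 / (1 - 0.2) = 1.25,
# so the expected sum of activations is unchanged.
drop = tf.keras.layers.Dropout(0.2)

x = np.ones((1, 10), dtype="float32")
print(drop(x, training=False).numpy())  # inference: input passes through unchanged
print(drop(x, training=True).numpy())   # training: each unit is either 0 or 1.25
```

At inference time dropout is a no-op, which is why the layer behaves differently depending on the training flag.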

Data augmentation generates additional training data from the existing examples by applying random transformations that yield believable-looking images.
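A sketch of such an augmentation stage, built from the random-transformation layers in Keras (the flip/rotation/zoom choices and the 180×180 batch below are assumptions):

```python
import tensorflow as tf

# Random, label-preserving transformations that yield believable image variants.
data_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),   # rotate by up to ±10% of a full turn
    tf.keras.layers.RandomZoom(0.1),
])

# Applying the stage to a batch keeps the shape; only the pixel content changes.
batch = tf.random.uniform((8, 180, 180, 3))
augmented = data_augmentation(batch, training=True)
```

Like dropout, these layers are only active during training; at inference time they pass images through unchanged.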


Once the model has been trained, it can be used to predict the class of a new image, as shown below (img_height, img_width, model, and class_names are assumed to have been defined earlier in the program):

import numpy as np
import tensorflow as tf
from tensorflow import keras

print("The model built is being used to predict new data")
sunflower_url = ""
sunflower_path = tf.keras.utils.get_file('Red_sunflower', origin=sunflower_url)
img = keras.preprocessing.image.load_img(
   sunflower_path, target_size=(img_height, img_width)
)
img_array = keras.preprocessing.image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0)  # Create a batch
print("A batch is created")
predictions = model.predict(img_array)
score = tf.nn.softmax(predictions[0])
print(
   "This image likely belongs to {} with {:.2f} percent confidence."
   .format(class_names[np.argmax(score)], 100 * np.max(score))
)

Code credit −


The model built is being used to predict new data
Downloading data from
122880/117948 [===============================] - 0s 0us/step
A batch is created
This image likely belongs to sunflowers with 99.07 percent confidence.


  • The model that was built previously is used on never-before-seen data.
  • The relevant values for the dataset are predicted using the ‘predict’ method.
  • Once the predictions are made, the confidence level is calculated.
  • This is displayed on the console.
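The confidence computation in the last two steps can be sketched in plain NumPy; the logits below are illustrative, standing in for one row of model.predict output:

```python
import numpy as np

class_names = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']
logits = np.array([0.3, 0.2, 1.1, 6.0, 0.4])  # illustrative raw model outputs

# tf.nn.softmax turns logits into probabilities; the same computation in NumPy:
score = np.exp(logits) / np.sum(np.exp(logits))

predicted = class_names[np.argmax(score)]   # class with the highest probability
confidence = 100 * np.max(score)            # that probability, as a percentage
print("This image likely belongs to {} with {:.2f} percent confidence."
      .format(predicted, confidence))
```

Because the softmax probabilities sum to 1, the largest one can be read directly as the model's confidence in its prediction.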

Updated on: 22-Feb-2021

