How can Tensorflow configure the flower dataset for performance?


Training a model on the flower dataset yields a certain accuracy. To configure the dataset for performance, buffered prefetching is used together with caching, and pixel values are normalized by making a Rescaling layer part of the Keras model itself, so the rescaling is applied to the dataset as part of the model.


We will be using the flowers dataset, which contains several thousand images of flowers organized into 5 sub-directories, one per class.
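The snippets below assume `train_ds` and `val_ds` already exist. A minimal, self-contained sketch of that loading step, using a tiny synthetic image tree in place of the real flowers download (the class names, image count, and sizes here are stand-ins, not the actual dataset):

```python
import os
import tempfile
import tensorflow as tf

# Build a tiny stand-in for the flowers layout: one sub-directory per class.
root = tempfile.mkdtemp()
for cls in ["daisy", "roses"]:
    os.makedirs(os.path.join(root, cls))
    for i in range(4):
        img = tf.image.convert_image_dtype(
            tf.random.uniform((180, 180, 3)), tf.uint8)
        tf.io.write_file(os.path.join(root, cls, f"{i}.jpg"),
                         tf.io.encode_jpeg(img))

# Load it the same way the tutorial loads flower_photos:
# each sub-directory name becomes a class label.
train_ds = tf.keras.utils.image_dataset_from_directory(
    root, validation_split=0.25, subset="training", seed=123,
    image_size=(180, 180), batch_size=4)
val_ds = tf.keras.utils.image_dataset_from_directory(
    root, validation_split=0.25, subset="validation", seed=123,
    image_size=(180, 180), batch_size=4)
print(train_ds.class_names)
```

The real tutorial downloads `flower_photos.tgz` first; only the directory-per-class layout matters for `image_dataset_from_directory`.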

We are using Google Colaboratory to run the code below. Google Colab (Colaboratory) runs Python code in the browser, requires zero configuration, and provides free access to GPUs (Graphical Processing Units). It is built on top of Jupyter Notebook.

import tensorflow as tf
from tensorflow.keras import layers

AUTOTUNE = tf.data.AUTOTUNE

# train_ds and val_ds are tf.data.Dataset objects loaded earlier
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

num_classes = 5
print("A sequential model is built")
model = tf.keras.Sequential([
   layers.experimental.preprocessing.Rescaling(1./255), # layers.Rescaling in recent TF
   layers.Conv2D(32, 3, activation='relu'),
   layers.MaxPooling2D(),
   layers.Conv2D(32, 3, activation='relu'),
   layers.MaxPooling2D(),
   layers.Conv2D(32, 3, activation='relu'),
   layers.MaxPooling2D(),
   layers.Flatten(),
   layers.Dense(128, activation='relu'),
   layers.Dense(num_classes)
])
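Because the final Dense(num_classes) layer outputs raw logits (no softmax), the model is typically compiled with SparseCategoricalCrossentropy(from_logits=True). A minimal compile-and-fit sketch on random stand-in data; the 32x32 input size and single conv block are placeholders to keep it small, not the tutorial's settings:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

num_classes = 5
model = tf.keras.Sequential([
    layers.Rescaling(1./255),          # non-experimental name in recent TF
    layers.Conv2D(32, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(num_classes)          # raw logits
])

model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])

# Random stand-in images and integer labels.
x = np.random.randint(0, 255, (8, 32, 32, 3)).astype('float32')
y = np.random.randint(0, num_classes, size=(8,))
history = model.fit(x, y, epochs=1, verbose=0)
```

With real data, `model.fit(train_ds, validation_data=val_ds, epochs=...)` takes the cached, prefetched datasets directly.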

Code credit: https://www.tensorflow.org/tutorials/load_data/images

Output

A sequential model is built

Explanation

  • Buffered prefetching is used so that data can be yielded from disk without I/O becoming blocking.
  • This is an important step while loading data.
  • The '.cache()' method keeps the images in memory after they are loaded from disk during the first epoch.
  • This ensures that the dataset doesn't become a bottleneck while training the model.
  • If the dataset is too large to fit in memory, the same method can be used to create a performant on-disk cache.
  • The '.prefetch()' method overlaps data preprocessing and model execution during training.
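The effect described above can be sketched with a toy pipeline whose map function simulates slow disk reads (the sleep durations and dataset size are arbitrary stand-ins):

```python
import time
import tensorflow as tf

def slow_load(x):
    time.sleep(0.01)   # simulate per-element disk I/O
    return x

ds = tf.data.Dataset.range(50).map(
    lambda x: tf.py_function(slow_load, [x], tf.int64))

def epoch_time(d):
    start = time.time()
    for _ in d:
        time.sleep(0.005)   # simulate a training step per element
    return time.time() - start

first = epoch_time(ds)                      # every epoch pays the I/O cost
cached = ds.cache().prefetch(buffer_size=tf.data.AUTOTUNE)
_ = epoch_time(cached)                      # first epoch fills the cache
second = epoch_time(cached)                 # later epochs read from memory
print(f"plain: {first:.2f}s, cached+prefetched: {second:.2f}s")
```

After the cache is filled, the simulated I/O disappears from later epochs, so the second timing is noticeably smaller than the first.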
raja
Published on 11-Feb-2021 06:34:36