

Data augmentation can be used to reduce overfitting by generating additional training data from the existing examples. In Keras, this is done by creating a sequential model of preprocessing layers such as ‘RandomFlip’, ‘RandomRotation’, and ‘RandomZoom’.


We will use the Keras Sequential API, which builds a model from a plain stack of layers, where every layer has exactly one input tensor and one output tensor.
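As a minimal sketch of the Sequential API (the layer sizes and input shape below are illustrative, not part of the flower-classification model):

```python
import tensorflow as tf
from tensorflow.keras import layers

# A plain stack of layers: each layer has exactly one input tensor
# and one output tensor, flowing top to bottom.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),    # single input tensor
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10),                  # single output tensor
])
model.summary()
```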

A neural network that contains at least one convolutional layer is known as a convolutional neural network. We can use a convolutional neural network to build the learning model.

An image classifier is created using a keras.Sequential model, and data is loaded using preprocessing.image_dataset_from_directory, which reads images efficiently off disk. Overfitting is identified, and techniques such as data augmentation and dropout are applied to mitigate it. The dataset contains about 3,700 photos of flowers, organized into 5 sub-directories, one per class. The classes are:

daisy, dandelion, roses, sunflowers, and tulips.
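To illustrate the one-sub-directory-per-class layout, here is a hedged sketch that builds a tiny synthetic dataset (random JPEGs with two stand-in classes, not the real flower photos) and loads it with the same loader the tutorial uses, exposed as tf.keras.utils.image_dataset_from_directory in current TensorFlow:

```python
import pathlib
import tempfile
import tensorflow as tf

# Build a tiny synthetic dataset: one sub-directory per class,
# mirroring the flower_photos layout (daisy/, dandelion/, ...).
root = pathlib.Path(tempfile.mkdtemp())
for cls in ["daisy", "tulips"]:  # stand-ins for the 5 flower classes
    (root / cls).mkdir()
    for i in range(4):
        pixels = tf.cast(
            tf.random.uniform((180, 180, 3), maxval=256, dtype=tf.int32),
            tf.uint8,
        )
        tf.io.write_file(str(root / cls / f"{i}.jpg"), tf.io.encode_jpeg(pixels))

# Load the images off disk; class names come from the sub-directory names.
train_ds = tf.keras.utils.image_dataset_from_directory(
    root,
    image_size=(180, 180),
    batch_size=2,
)
print(train_ds.class_names)  # ['daisy', 'tulips']
```

With the real dataset, the same call on the flower_photos directory yields the five class names listed above.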

We are using Google Colaboratory to run the code below. Google Colab (Colaboratory) runs Python code in the browser, requires zero configuration, and provides free access to GPUs (Graphical Processing Units). It is built on top of Jupyter Notebook.

When the number of training examples is small, the model learns noise and unwanted details from the training examples. This negatively impacts the performance of the model on new examples.

Due to overfitting, the model will not be able to generalize well on the new dataset. There are many ways in which overfitting can be avoided. We will use data augmentation to overcome overfitting. Data augmentation generates additional training data from the existing examples by augmenting them with the help of random transformations that would yield believable-looking images.

This helps expose the model to more aspects of the data, thereby helping the model generalize better. Following is an example:

```python
from tensorflow import keras
from tensorflow.keras import layers

print("Using data augmentation to eliminate overfitting")
data_augmentation = keras.Sequential(
   [
      layers.experimental.preprocessing.RandomFlip("horizontal", input_shape=(img_height, img_width, 3)),
      layers.experimental.preprocessing.RandomRotation(0.1),
      layers.experimental.preprocessing.RandomZoom(0.1),
   ]
)
```

Code credit − https://www.tensorflow.org/tutorials/images/classification

Using data augmentation to eliminate overfitting

Data augmentation can be implemented using layers from the tf.keras.layers.experimental.preprocessing module (in newer TensorFlow versions, these layers are available directly under tf.keras.layers).

These layers are included inside the model like other layers, and run on the GPU.

This is done to eliminate or reduce overfitting.
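A hedged sketch of placing the augmentation block inside a model: the layer names below are the current tf.keras.layers equivalents of the experimental.preprocessing layers used above, and the small convolutional head is illustrative rather than the tutorial's exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

img_height = img_width = 180
num_classes = 5  # daisy, dandelion, roses, sunflowers, tulips

# Augmentation layers (RandomFlip etc. live directly under tf.keras.layers
# in current TensorFlow; older code uses layers.experimental.preprocessing).
data_augmentation = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# The augmentation block sits at the front of the model, so it runs
# on-device like any other layer and is only active during training.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(img_height, img_width, 3)),
    data_augmentation,
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(16, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes),  # one logit per flower class
])

logits = model(tf.zeros((1, img_height, img_width, 3)))
print(logits.shape)  # (1, 5)
```

At inference time the random layers pass inputs through unchanged, so the same model can be used for prediction without removing them.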
