How can Tensorflow be used to build a normalization layer for the abalone dataset?


A normalization layer can be built using the ‘Normalization’ layer present in the ‘preprocessing’ module. This layer is adapted to the features of the abalone dataset. In addition to this, dense layers are added to improve the training capacity of the model. The normalization layer pre-computes the mean and variance of every column; these mean and variance values are then used to normalize the data.
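As a minimal sketch of what ‘adapt’ does, the layer's output matches manual standardization with the per-column mean and variance. The toy feature matrix below is a hypothetical stand-in, not abalone data:

```python
import numpy as np
import tensorflow as tf

# Toy data standing in for abalone features (hypothetical values).
data = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]], dtype=np.float32)

# 'adapt' learns each column's mean and variance from the supplied data.
norm = tf.keras.layers.Normalization()
norm.adapt(data)

# The layer's output matches manual standardization: (x - mean) / sqrt(variance).
manual = (data - data.mean(axis=0)) / data.std(axis=0)
print(np.allclose(norm(data).numpy(), manual, atol=1e-5))  # True
```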

We will be using the abalone dataset, which contains a set of measurements of abalone, a type of sea snail. The goal is to predict the age from the other measurements.
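The dataset is typically loaded as a CSV with the age as the regression target. The two rows below are hypothetical illustrative values in the usual abalone column layout; in practice the file would be fetched with ‘pd.read_csv’ from wherever it is hosted:

```python
import io
import pandas as pd

# A few illustrative rows (hypothetical values) in the abalone CSV layout.
csv = io.StringIO(
    "Length,Diameter,Height,Whole weight,Shucked weight,Viscera weight,Shell weight,Age\n"
    "0.435,0.335,0.110,0.334,0.1355,0.0775,0.0965,7\n"
    "0.585,0.450,0.125,0.874,0.3545,0.2075,0.2250,6\n"
)
abalone = pd.read_csv(csv)

# Split off the regression target (Age) from the measurement features.
abalone_features = abalone.copy()
abalone_labels = abalone_features.pop("Age")
print(abalone_features.shape)  # (2, 7)
```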

We are using Google Colaboratory to run the below code. Google Colab, or Colaboratory, runs Python code in the browser, requires zero configuration, and provides free access to GPUs (Graphical Processing Units). Colaboratory has been built on top of Jupyter Notebook.

import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing

print("A normalization layer is being built")
normalize = preprocessing.Normalization()
print("A dense layer is being added")
norm_abalone_model = tf.keras.Sequential([
   normalize,       # standardizes each input column using pre-computed statistics
   layers.Dense(64),
   layers.Dense(1)  # single output for the age regression target
])

Output

A normalization layer is being built
A dense layer is being added

Explanation

• The inputs to the model are normalized.
• Normalization is incorporated by adding a layer from the 'experimental.preprocessing' module.
• This layer pre-computes the mean and variance of every column.
• These mean and variance values are used to normalize the data.
• The statistics are computed by calling the layer's 'adapt' method on the data.
• Only training data should be passed to the 'adapt' method of preprocessing layers.
• The adapted normalization layer is then used to build the model.
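The steps above can be sketched end to end. The random feature matrix and labels below are hypothetical stand-ins for the abalone training data, and the compile settings (mean squared error, Adam) are one reasonable choice for this regression task rather than a prescribed configuration:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the abalone data: 7 measurement columns, 1 target.
abalone_features = np.random.rand(100, 7).astype(np.float32)
abalone_labels = np.random.rand(100).astype(np.float32)

# Adapt the layer to the training features only, then reuse it in the model.
normalize = tf.keras.layers.Normalization()
normalize.adapt(abalone_features)

norm_abalone_model = tf.keras.Sequential([
    normalize,
    tf.keras.layers.Dense(64),
    tf.keras.layers.Dense(1)
])
norm_abalone_model.compile(loss=tf.keras.losses.MeanSquaredError(),
                           optimizer=tf.keras.optimizers.Adam())
history = norm_abalone_model.fit(abalone_features, abalone_labels,
                                 epochs=2, verbose=0)
print(len(history.history["loss"]))  # 2
```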
Published on 11-Feb-2021 06:46:01