How can Keras be used in the training, evaluation and inference of the model?


TensorFlow is an open-source machine learning framework provided by Google. It is used with Python to implement algorithms, deep learning applications and much more, both in research and in production. It includes optimizations that help perform complicated mathematical operations quickly.

The ‘tensorflow’ package can be installed on Windows using the command below −

pip install tensorflow

Keras was developed as part of the research effort for project ONEIROS (Open-ended Neuro-Electronic Intelligent Robot Operating System). Keras is a high-level deep learning API written in Python, with a productive interface that helps solve machine learning problems.

It is highly scalable and comes with cross-platform capabilities: Keras can run on a TPU or on clusters of GPUs, and Keras models can be exported to run in a web browser or on a mobile phone.
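
For example, a trained Keras model can be converted with the TensorFlow Lite converter so that it can run on a mobile device. The snippet below is only a minimal sketch − the tiny placeholder model and the file name model.tflite are illustrative, not part of the example later in this article −

import tensorflow as tf
from tensorflow import keras

# A small placeholder model, used only to illustrate the conversion.
model = keras.Sequential([keras.layers.Dense(10, input_shape=(784,))])

# Convert the Keras model to the TensorFlow Lite format for mobile deployment.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the converted model to disk so it can be shipped with an app.
with open("model.tflite", "wb") as f:
   f.write(tflite_model)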

Keras ships as part of the TensorFlow package. It can be accessed using the lines of code below.

import tensorflow
from tensorflow import keras

The Keras functional API helps create models that are more flexible than those built with the sequential API. It can handle models with non-linear topology, shared layers, and multiple inputs and outputs. A deep learning model is usually a directed acyclic graph (DAG) of layers, and the functional API is a way to build that graph of layers.
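
Below is a minimal sketch of such a graph of layers, with two inputs, a shared layer and two outputs. The input shapes, layer sizes and names used here are purely illustrative −

from tensorflow import keras
from tensorflow.keras import layers

# Two separate inputs, for example a ticket title and a ticket body.
title_input = keras.Input(shape=(64,), name="title")
body_input = keras.Input(shape=(64,), name="body")

# One Dense layer shared by both inputs, so both branches reuse the same weights.
shared_dense = layers.Dense(32, activation="relu")
title_features = shared_dense(title_input)
body_features = shared_dense(body_input)

# Merging the two branches gives the graph a non-linear topology.
merged = layers.concatenate([title_features, body_features])

# Two outputs: a priority score and a department classifier.
priority = layers.Dense(1, name="priority")(merged)
department = layers.Dense(4, name="department")(merged)

model = keras.Model(inputs=[title_input, body_input], outputs=[priority, department])
model.summary()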

We are using Google Colaboratory to run the code below. Google Colab, or Colaboratory, runs Python code in the browser, requires zero configuration, and provides free access to GPUs (Graphics Processing Units). Colaboratory is built on top of Jupyter Notebook. Following is the code snippet −

Example

print("Load the MNIST data")
print("Split data into training and test data")
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
print("Reshape the data for better training")
x_train = x_train.reshape(60000, 784).astype("float32") / 255
x_test = x_test.reshape(10000, 784).astype("float32") / 255
print("Compile the model")
model.compile(
   loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
   optimizer=keras.optimizers.RMSprop(),
   metrics=["accuracy"],
)
print("Fit the data to the model")
history = model.fit(x_train, y_train, batch_size=64, epochs=2, validation_split=0.2)
test_scores = model.evaluate(x_test, y_test, verbose=2)
print("The loss associated with model:", test_scores[0])
print("The accuracy of the model:", test_scores[1])

Code credit − https://www.tensorflow.org/guide/keras/functional

Output

Load the MNIST data
Split data into training and test data
Reshape the data for better training
Compile the model
Fit the data to the model
Epoch 1/2
750/750 [==============================] - 3s 3ms/step - loss: 0.5768 - accuracy: 0.8394 -
val_loss: 0.2015 - val_accuracy: 0.9405
Epoch 2/2
750/750 [==============================] - 2s 3ms/step - loss: 0.1720 - accuracy: 0.9495 -
val_loss: 0.1462 - val_accuracy: 0.9580
313/313 - 0s - loss: 0.1433 - accuracy: 0.9584
The loss associated with model: 0.14328785240650177
The accuracy of the model: 0.9584000110626221

Explanation

  • The input data (MNIST data) is loaded into the environment.

  • The data is split into training and test sets.

  • The data is reshaped: each 28x28 image is flattened into a 784-element vector and the pixel values are scaled to the range [0, 1].

  • The model is built and compiled.

  • It is then fit to the training data.

  • The accuracy and loss associated with training are displayed on the console.
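
The question also covers inference. Once the model is trained, predictions can be generated with model.predict. The snippet below is a minimal sketch that continues from the example above and classifies the first five test images −

import numpy as np

# predict returns raw logits here, because the last Dense layer has no softmax.
logits = model.predict(x_test[:5])

# Take the most likely digit for each image.
predicted_digits = np.argmax(logits, axis=1)
print("Predicted digits:", predicted_digits)
print("Actual digits:", y_test[:5])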
