How can MNIST data be used with TensorFlow for recognizing handwritten digits?


TensorFlow is a machine learning framework provided by Google. It is an open-source framework used in conjunction with Python to implement algorithms, deep learning applications, and much more. It is used both in research and in production.

The ‘tensorflow’ package can be installed on Windows using the following line of code:

pip install tensorflow

A tensor is the fundamental data structure used in TensorFlow. Tensors travel along the edges of a flow diagram known as the ‘data flow graph’. A tensor is essentially a multidimensional array or list.
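As a quick illustration (assuming TensorFlow is installed), tensors of different ranks can be created with ‘tf.constant’:

```python
import tensorflow as tf

# Tensors generalize scalars, vectors, and matrices to n dimensions
scalar = tf.constant(3.0)                # rank 0
vector = tf.constant([1.0, 2.0, 3.0])    # rank 1
matrix = tf.constant([[1.0, 2.0],
                      [3.0, 4.0]])       # rank 2

print(scalar.shape)  # ()
print(vector.shape)  # (3,)
print(matrix.shape)  # (2, 2)
```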

We are using Google Colaboratory to run the code below. Google Colab, or Colaboratory, runs Python code in the browser, requires zero configuration, and provides free access to GPUs (Graphical Processing Units). Colaboratory has been built on top of Jupyter Notebook.

The MNIST dataset contains handwritten digits, of which 60,000 are used for training the model and 10,000 are used to test the trained model. The digits have been size-normalized and centered to fit a fixed-size image.
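A minimal sketch of the train/test split (the first call downloads the dataset, roughly 11 MB):

```python
import tensorflow as tf

# load_data() returns (train images, train labels), (test images, test labels)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

print(x_train.shape)  # (60000, 28, 28) - 60,000 training images, 28x28 pixels each
print(x_test.shape)   # (10000, 28, 28) - 10,000 test images
print(y_train[:5])    # the first five labels, each a digit from 0 to 9
```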

Following is the code:


import tensorflow as tf

# Load the MNIST dataset and scale pixel values to the [0, 1] range
mnist = tf.keras.datasets.mnist
print("Data is being loaded")
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Build a simple feed-forward network: flatten the image, apply one
# hidden layer with dropout, and output 10 logits (one per digit)
model = tf.keras.models.Sequential([
   tf.keras.layers.Flatten(input_shape=(28, 28)),
   tf.keras.layers.Dense(128, activation='relu'),
   tf.keras.layers.Dropout(0.2),
   tf.keras.layers.Dense(10)
])

# The untrained model outputs raw logits for one training sample
predictions = model(x_train[:1]).numpy()
print("The predictions are : ")
print(predictions)

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
print("The loss function has been defined")
loss_fn(y_train[:1], predictions).numpy()

# Compile the model, then train it for 9 epochs
model.compile(optimizer='adam', loss=loss_fn, metrics=['accuracy'])
model.fit(x_train, y_train, epochs=9)
print("The data is being fit to the model")

model.evaluate(x_test, y_test, verbose=2)
print("The results are being evaluated")

# Wrap the trained model in a softmax layer to convert logits to probabilities
probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])
print("The predictions are : ")
print(probability_model(x_test[:8]))

Code credit −


Output
Data is being loaded
The predictions are :
[[-0.77715474 -0.21606012 -0.04190525 -0.22804758 0.03612506 0.5986039
0.6838669 -0.40150493 0.55429333 0.55918723]]
The loss function has been defined
Epoch 1/9
1875/1875 [==============================] - 4s 2ms/step - loss: 0.4934 - accuracy: 0.8564
Epoch 2/9
1875/1875 [==============================] - 4s 2ms/step - loss: 0.1511 - accuracy: 0.9566
Epoch 3/9
1875/1875 [==============================] - 4s 2ms/step - loss: 0.1046 - accuracy: 0.9690
Epoch 4/9
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0861 - accuracy: 0.9733
Epoch 5/9
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0712 - accuracy: 0.9779
Epoch 6/9
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0621 - accuracy: 0.9798
Epoch 7/9
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0544 - accuracy: 0.9822
Epoch 8/9
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0505 - accuracy: 0.9837
Epoch 9/9
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0442 - accuracy: 0.9853
The data is being fit to the model
313/313 - 0s - loss: 0.0691 - accuracy: 0.9792
The results are being evaluated
The predictions are :
<tf.Tensor: shape=(8, 10), dtype=float32, numpy=
array([[5.02094943e-09, 9.46477272e-11, 2.06162738e-07, 4.83285694e-05,
4.39524014e-13, 1.28035786e-08, 2.14148154e-16, 9.99951243e-01,
7.93324517e-09, 2.28497953e-07],
[1.91046940e-12, 2.99942280e-06, 9.99997020e-01, 1.57965779e-10,
8.59648663e-21, 6.85132751e-09, 3.96876629e-11, 1.56446678e-15,
7.72609667e-12, 1.23343747e-16],
[2.50448129e-09, 9.99879003e-01, 4.76238492e-05, 2.03783550e-08,
8.07857646e-07, 9.36941333e-08, 1.20849165e-06, 5.47862328e-05,
1.65028414e-05, 1.13786543e-10],
[9.99895811e-01, 5.68212422e-10, 5.65115661e-05, 1.29087857e-06,
1.05537436e-06, 1.18774800e-07, 4.08086999e-05, 3.00701231e-06,
3.09539394e-09, 1.48017170e-06],
[2.68701024e-06, 8.04860432e-12, 2.38641114e-05, 1.81536635e-07,
9.98934567e-01, 8.67565220e-08, 6.51489245e-07, 2.03823347e-05,
2.35504160e-07, 1.01726933e-03],
[3.68854253e-11, 9.99950528e-01, 4.57934220e-07, 5.73739001e-10,
1.50253769e-07, 1.97563521e-10, 2.10510054e-09, 4.76488349e-05,
1.17310219e-06, 1.29714104e-11],
[5.41477474e-10, 2.30981789e-09, 2.14095284e-08, 1.05171161e-07,
9.97043908e-01, 5.71491137e-06, 4.81293283e-10, 4.48020655e-05,
6.65718471e-05, 2.83887982e-03],
[1.04743374e-10, 1.53262540e-08, 5.02297407e-05, 2.83393019e-04,
2.03316798e-04, 2.68098956e-05, 9.15681864e-10, 1.22959409e-05,
7.81168455e-06, 9.99416113e-01]], dtype=float32)>


  • The required packages are imported and aliased.

  • The MNIST dataset is downloaded from the source.

  • The dataset is split into training and testing data.

  • A sequential model is built using the ‘keras’ package.

  • Initial predictions are made on a sample of the training data.

  • The loss function is defined using the ‘SparseCategoricalCrossentropy’ method from the ‘keras’ package.

  • The model is compiled and then fit to the data.

  • The trained model is evaluated using the test dataset.

  • The predictions are displayed on the console.
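To turn a row of probabilities into a predicted digit, take the index of the largest entry. Using the first row of the output above (values abbreviated here), the predicted digit is 7:

```python
import numpy as np

# First probability row from the output above, abbreviated; the
# predicted digit is the index of the largest probability
probs = np.array([5.0e-09, 9.5e-11, 2.1e-07, 4.8e-05, 4.4e-13,
                  1.3e-08, 2.1e-16, 9.9995e-01, 7.9e-09, 2.3e-07])
print(np.argmax(probs))  # 7
```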

Updated on 19-Jan-2021 13:29:22