How can TensorFlow be used to train a model on the Iliad dataset using Python?


TensorFlow is an open-source machine learning framework provided by Google. It is used in conjunction with Python to implement algorithms, deep learning applications, and much more, both in research and in production. It offers optimization techniques that help perform complicated mathematical operations quickly, operating on multi-dimensional arrays, known as ‘tensors’, that interoperate with NumPy. The framework also supports building and training deep neural networks.

The ‘tensorflow’ package can be installed on Windows using the following line of code −

pip install tensorflow

A tensor is the core data structure used in TensorFlow. It is essentially a multidimensional array or list. TensorFlow computations are represented as a ‘data flow graph’, in which the nodes are operations and the tensors flow along the edges connecting them.
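As a quick illustration, the snippet below builds a small two-dimensional tensor and inspects its shape, data type, and NumPy values (the numbers are arbitrary) −

import tensorflow as tf

# A tensor is a multidimensional array; here, a 2x3 matrix of floats.
x = tf.constant([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])

print(x.shape)    # (2, 3)
print(x.dtype)    # <dtype: 'float32'>
print(x.numpy())  # the underlying values as a NumPy array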

We will be using the Iliad dataset, which contains the text of three English translations of Homer’s Iliad, by William Cowper, Edward (Earl of Derby), and Samuel Butler. The model is trained to identify the translator when given a single line of text. The text files used have been preprocessed: the document header and footer, line numbers, and chapter titles have been removed.
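To give a sense of how this dataset is assembled, here is a minimal sketch that downloads the three translation files and labels every line with an integer identifying its translator (the directory URL and file names follow the tutorial linked below) −

import tensorflow as tf

DIRECTORY_URL = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/'
FILE_NAMES = ['cowper.txt', 'derby.txt', 'butler.txt']

labeled_data_sets = []
for i, file_name in enumerate(FILE_NAMES):
   text_dir = tf.keras.utils.get_file(file_name, origin=DIRECTORY_URL + file_name)
   lines = tf.data.TextLineDataset(text_dir)
   # Pair each line with the index of its translator (0, 1 or 2).
   labeled = lines.map(lambda line, idx=i: (line, tf.cast(idx, tf.int64)))
   labeled_data_sets.append(labeled)

# Concatenate the three labeled datasets into a single one.
all_labeled_data = labeled_data_sets[0]
for ds in labeled_data_sets[1:]:
   all_labeled_data = all_labeled_data.concatenate(ds)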

We are using Google Colaboratory to run the code below. Google Colab (Colaboratory) runs Python code in the browser, requires zero configuration, and provides free access to GPUs (Graphical Processing Units). It is built on top of Jupyter Notebook.

Example

Following is the code snippet −

import tensorflow as tf
from tensorflow.keras import losses

# 'train_data', 'validation_data', 'configure_dataset', 'create_model' and
# 'vocab_size' come from the earlier steps of the tutorial linked below.
vocab_size += 2  # add two to account for the padding and OOV (out-of-vocabulary) tokens

print("Configure the dataset for better performance")
train_data = configure_dataset(train_data)
validation_data = configure_dataset(validation_data)

print("Train the model")
model = create_model(vocab_size=vocab_size, num_labels=3)
model.compile(
   optimizer='adam',
   loss=losses.SparseCategoricalCrossentropy(from_logits=True),
   metrics=['accuracy'])
print("Fit the training data to the model")
history = model.fit(train_data, validation_data=validation_data, epochs=3)

print("Finding the accuracy and loss associated with training")
loss, accuracy = model.evaluate(validation_data)

print("The loss is : ", loss)
print("The accuracy is : {:2.2%}".format(accuracy))

Code credit − https://www.tensorflow.org/tutorials/load_data/text

Output

Configure the dataset for better performance
Train the model
Fit the training data to the model
Epoch 1/3
697/697 [==============================] - 35s 17ms/step - loss: 0.6891 - accuracy: 0.6736 - val_loss: 0.3718 - val_accuracy: 0.8404
Epoch 2/3
697/697 [==============================] - 8s 11ms/step - loss: 0.3149 - accuracy: 0.8713 - val_loss: 0.3621 - val_accuracy: 0.8422
Epoch 3/3
697/697 [==============================] - 8s 11ms/step - loss: 0.2165 - accuracy: 0.9162 - val_loss: 0.4002 - val_accuracy: 0.8404
Finding the accuracy and loss associated with training
79/79 [==============================] - 1s 2ms/step - loss: 0.4002 - accuracy: 0.8404
The loss is : 0.40021833777427673
The accuracy is : 84.04%

Explanation

  • The model is trained on the preprocessed, vectorized dataset.

  • The model is then compiled and fit to the training data.

  • The loss and accuracy associated with the model are evaluated using the ‘evaluate’ method.

  • This data is displayed on the console.
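To try the trained model on raw lines of text, the tutorial composes the fitted text-vectorization layer with the model so it accepts plain strings. A minimal sketch, assuming ‘vectorize_layer’ is the TextVectorization layer created earlier in the tutorial and ‘model’ is the trained model from above −

export_model = tf.keras.Sequential([vectorize_layer, model])

# 'sample_lines' is an illustrative input, not a line from the dataset.
sample_lines = ["He spoke, and the goddess was not disobedient."]
scores = export_model.predict(sample_lines)
print("Predicted translator label:", tf.math.argmax(scores, axis=1).numpy())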
