How can TensorFlow be used to define a loss function, an optimizer, train the model and evaluate it on the IMDB dataset in Python?
TensorFlow is an open-source machine learning framework developed by Google. It is used with Python to implement algorithms, deep learning applications and much more, both in research and in production.
The IMDB dataset contains 50,000 movie reviews and is commonly used for Natural Language Processing tasks, particularly sentiment analysis. This tutorial demonstrates how to define a loss function and an optimizer, train a model, and evaluate it using TensorFlow.
Installation
The 'tensorflow' package can be installed using the following command:
pip install tensorflow
Model Configuration
Before training, the model needs to be compiled with a loss function, an optimizer, and metrics. The snippet below assumes a Keras model object named model has already been defined:
import tensorflow as tf
from tensorflow.keras import losses

# Compile the model with a loss function, optimizer, and metrics
model.compile(
    loss=losses.BinaryCrossentropy(from_logits=True),
    optimizer='adam',
    metrics=[tf.metrics.BinaryAccuracy(threshold=0.0)]
)
Components Explained
- Loss Function: BinaryCrossentropy is used for binary classification tasks like sentiment analysis.
- Optimizer: adam is an efficient gradient descent algorithm.
- Metrics: BinaryAccuracy measures the percentage of predictions that match the binary labels.
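The two settings used above can be illustrated with plain Python. With from_logits=True, BinaryCrossentropy applies a sigmoid to the model's raw outputs before computing the cross-entropy, and a threshold of 0.0 means a logit counts as class 1 whenever it is positive. A minimal sketch of that math (not TensorFlow's actual implementation, which is numerically more careful):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def binary_crossentropy_from_logits(labels, logits):
    """Mean binary cross-entropy over raw (pre-sigmoid) model outputs."""
    total = 0.0
    for y, z in zip(labels, logits):
        p = sigmoid(z)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(labels)

def binary_accuracy(labels, logits, threshold=0.0):
    """Fraction of logits falling on the correct side of the threshold."""
    correct = sum(1 for y, z in zip(labels, logits)
                  if (z > threshold) == (y == 1))
    return correct / len(labels)

labels = [1, 0, 1, 0]
logits = [2.0, -1.5, 0.3, 0.8]   # raw model outputs, not probabilities
print(binary_crossentropy_from_logits(labels, logits))
print(binary_accuracy(labels, logits))  # 3 of 4 logits are on the right side: 0.75
```

This is why the Keras snippet above can feed raw model outputs straight into the loss and metric without a final sigmoid layer.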
Training the Model
Once compiled, train the model using the training and validation datasets (train_ds and val_ds are assumed to have been prepared earlier):
# Set the number of training epochs
epochs = 10

# Train the model
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs
)
Model Evaluation
After training, evaluate the model's performance on the test dataset:
# Evaluate model on test data
loss, accuracy = model.evaluate(test_ds)
print("Loss is :", loss)
print("Accuracy is :", accuracy)
Training Output
The training process shows progress for each epoch with loss and accuracy metrics:
Epoch 1/10
625/625 [==============================] - 12s 19ms/step - loss: 0.6818 - binary_accuracy: 0.6130 - val_loss: 0.6135 - val_binary_accuracy: 0.7750
Epoch 2/10
625/625 [==============================] - 4s 7ms/step - loss: 0.5785 - binary_accuracy: 0.7853 - val_loss: 0.4971 - val_binary_accuracy: 0.8230
Epoch 3/10
625/625 [==============================] - 4s 7ms/step - loss: 0.4651 - binary_accuracy: 0.8372 - val_loss: 0.4193 - val_binary_accuracy: 0.8470
Epoch 4/10
625/625 [==============================] - 4s 7ms/step - loss: 0.3901 - binary_accuracy: 0.8635 - val_loss: 0.3732 - val_binary_accuracy: 0.8612
Epoch 5/10
625/625 [==============================] - 4s 7ms/step - loss: 0.3435 - binary_accuracy: 0.8771 - val_loss: 0.3444 - val_binary_accuracy: 0.8688
Epoch 6/10
625/625 [==============================] - 4s 7ms/step - loss: 0.3106 - binary_accuracy: 0.8877 - val_loss: 0.3255 - val_binary_accuracy: 0.8730
Epoch 7/10
625/625 [==============================] - 5s 7ms/step - loss: 0.2855 - binary_accuracy: 0.8970 - val_loss: 0.3119 - val_binary_accuracy: 0.8732
Epoch 8/10
625/625 [==============================] - 5s 7ms/step - loss: 0.2652 - binary_accuracy: 0.9048 - val_loss: 0.3027 - val_binary_accuracy: 0.8772
Epoch 9/10
625/625 [==============================] - 5s 7ms/step - loss: 0.2481 - binary_accuracy: 0.9125 - val_loss: 0.2959 - val_binary_accuracy: 0.8782
Epoch 10/10
625/625 [==============================] - 5s 7ms/step - loss: 0.2328 - binary_accuracy: 0.9161 - val_loss: 0.2913 - val_binary_accuracy: 0.8792
782/782 [==============================] - 10s 12ms/step - loss: 0.3099 - binary_accuracy: 0.8741
Loss is : 0.3099007308483124
Accuracy is : 0.8741199970245361
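Keras also exposes these per-epoch numbers programmatically via history.history, a dict of lists keyed by metric name. As a sketch, copying the training values from the log above into plain lists shows how such a summary could be computed:

```python
# Per-epoch training loss and accuracy copied from the log above
train_loss = [0.6818, 0.5785, 0.4651, 0.3901, 0.3435,
              0.3106, 0.2855, 0.2652, 0.2481, 0.2328]
train_acc = [0.6130, 0.7853, 0.8372, 0.8635, 0.8771,
             0.8877, 0.8970, 0.9048, 0.9125, 0.9161]

# Summarize the improvement over the 10 epochs
print(f"loss: {train_loss[0]:.2f} -> {train_loss[-1]:.2f}")
print(f"accuracy: {train_acc[0]:.1%} -> {train_acc[-1]:.1%}")

# Confirm the training loss decreased on every epoch
assert all(a > b for a, b in zip(train_loss, train_loss[1:]))
```

With the real History object, the same lists would come from history.history['loss'] and history.history['binary_accuracy'].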
Key Observations
Training loss decreases from 0.68 to 0.23 over 10 epochs
Training accuracy improves from 61% to 91%
Final test accuracy reaches 87.4%
Validation metrics help monitor overfitting during training
Conclusion
TensorFlow's compile(), fit(), and evaluate() methods provide a complete workflow for training neural networks. The model achieved 87.4% accuracy on the IMDB sentiment classification task, demonstrating effective learning from the training data.
