How can a DNN (deep neural network) model be built on Auto MPG dataset using TensorFlow?
TensorFlow is a machine learning framework provided by Google. It is an open-source framework used in conjunction with Python to implement algorithms, deep learning applications, and much more. It is used both in research and in production.
A tensor is the basic data structure in TensorFlow: a multidimensional array (or list) that holds the data flowing along the edges of a computation graph known as the 'data flow graph'.
The dataset we use is called the 'Auto MPG' dataset. It contains fuel-efficiency data for 1970s and 1980s automobiles, with attributes such as weight, horsepower, and displacement. Our goal is to predict the fuel efficiency (miles per gallon, MPG) of a vehicle using a deep neural network (DNN).
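As a quick illustration, tensors of different ranks can be created with tf.constant −

```python
import tensorflow as tf

# Tensors of increasing rank: scalar, vector, matrix
scalar = tf.constant(3.0)
vector = tf.constant([1.0, 2.0, 3.0])
matrix = tf.constant([[1.0, 2.0], [3.0, 4.0]])

print(scalar.shape)  # ()
print(vector.shape)  # (3,)
print(matrix.shape)  # (2, 2)
```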
Building the DNN Model
First, let's import the necessary libraries and prepare our data −
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Load the Auto MPG dataset
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',
                'Acceleration', 'Model Year', 'Origin']
dataset = pd.read_csv(url, names=column_names, na_values='?',
                      comment='\t', sep=' ', skipinitialspace=True)

# Clean the data: drop rows with missing values and decode Origin
dataset = dataset.dropna()
dataset['Origin'] = dataset['Origin'].map({1: 'USA', 2: 'Europe', 3: 'Japan'})

# Split into training (80%) and test (20%) sets
train_dataset = dataset.sample(frac=0.8, random_state=0)
test_dataset = dataset.drop(train_dataset.index)

# Separate the features from the MPG label
train_features = train_dataset.copy()
test_features = test_dataset.copy()
train_labels = train_features.pop('MPG')
test_labels = test_features.pop('MPG')

print("Dataset shape:", dataset.shape)
print("Training features shape:", train_features.shape)
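Note that the Origin column is now a string category. The model in this article uses only the Horsepower feature, but if you wanted to feed all of the features to the network, the categorical column would first need to be one-hot encoded. A sketch using a small hypothetical frame in place of the real dataset −

```python
import pandas as pd

# A small hypothetical frame standing in for the cleaned dataset
df = pd.DataFrame({'Origin': ['USA', 'Europe', 'Japan', 'USA']})

# One-hot encode the categorical column into separate indicator columns
encoded = pd.get_dummies(df, columns=['Origin'], prefix='', prefix_sep='')
print(sorted(encoded.columns))  # ['Europe', 'Japan', 'USA']
```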
Creating the DNN Model
Now we'll build a deep neural network with multiple hidden layers −
# Create a DNN model with two hidden layers; the single input is Horsepower
def build_dnn_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu', input_shape=[1]),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(1)
    ])
    model.compile(loss='mean_absolute_error',
                  optimizer=tf.keras.optimizers.Adam(0.001))
    return model
# Build the model using only Horsepower feature
dnn_horsepower_model = build_dnn_model()
print("DNN Model Summary:")
dnn_horsepower_model.summary()
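One caveat: this model consumes raw horsepower values (roughly 50−250), and unscaled inputs can slow convergence. The usual Keras remedy is a tf.keras.layers.Normalization layer adapted on the training data; the z-score scaling it performs can be sketched in NumPy with hypothetical values −

```python
import numpy as np

# Hypothetical horsepower values on their raw scale
hp = np.array([75.0, 100.0, 150.0, 225.0])

# Z-score standardization: subtract the mean, divide by the std.
# This is the scaling tf.keras.layers.Normalization applies after adapt().
hp_scaled = (hp - hp.mean()) / hp.std()

print(abs(round(hp_scaled.mean(), 6)))  # 0.0
print(round(hp_scaled.std(), 6))        # 1.0
```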
Training the Model
Let's train the DNN model and track the training progress −
# Function to plot training history
def plot_loss(history):
    plt.figure(figsize=(10, 6))
    plt.plot(history.history['loss'], label='Training Loss')
    plt.plot(history.history['val_loss'], label='Validation Loss')
    plt.xlabel('Epoch')
    plt.ylabel('Error [MPG]')
    plt.legend()
    plt.grid(True)
    plt.title('Model Training Progress')
    plt.show()
# Train on the Horsepower feature, holding out 20% for validation
print("Training DNN model...")
history = dnn_horsepower_model.fit(
    train_features['Horsepower'], train_labels,
    validation_split=0.2,
    verbose=0, epochs=100)
print("Training completed!")
print("Error with respect to every epoch:")
plot_loss(history)
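Rather than always running a fixed 100 epochs, training is often stopped once the validation loss stops improving, for example with the tf.keras.callbacks.EarlyStopping callback. The patience rule behind that callback can be sketched in plain Python (the should_stop helper below is a hypothetical illustration, not the Keras implementation) −

```python
# Hypothetical sketch of an early-stopping patience rule:
# stop once validation loss has not improved for `patience` epochs
def should_stop(val_losses, patience=3):
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    return min(val_losses[-patience:]) >= best_before

# Hypothetical per-epoch validation losses
print(should_stop([3.2, 2.9, 2.8, 2.8, 2.9, 3.0]))  # True
print(should_stop([5.0, 4.0, 3.0, 2.0, 1.0]))       # False
```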
Making Predictions
Now let's make predictions and visualize the results −
# Function to plot predictions against the training data
def plot_horsepower(x, y):
    plt.figure(figsize=(10, 6))
    plt.scatter(train_features['Horsepower'], train_labels, alpha=0.5, label='Training Data')
    plt.plot(x, y, color='red', linewidth=2, label='DNN Predictions')
    plt.xlabel('Horsepower')
    plt.ylabel('MPG')
    plt.legend()
    plt.grid(True)
    plt.title('DNN Model Predictions vs Training Data')
    plt.show()
# Predict MPG across the 0-250 horsepower range
x = tf.linspace(0.0, 250.0, 251)
# Reshape to (batch, 1) so the input matches the model's expected shape
y = dnn_horsepower_model.predict(tf.reshape(x, (-1, 1)))
plot_horsepower(x, y)
# Evaluate the model on the held-out test set
test_results = {}
test_results['dnn_horsepower_model'] = dnn_horsepower_model.evaluate(
    test_features['Horsepower'], test_labels, verbose=0)
print(f"Test Loss: {test_results['dnn_horsepower_model']:.2f} MPG")
Model Performance
The DNN model learns complex patterns in the data through its multiple hidden layers. Here's what happens during training:
- The model has two hidden layers of 64 neurons each, using ReLU activation.
- It is compiled with the Adam optimizer and mean absolute error as the loss function.
- 20% of the training data is held out for validation during training.
- The model learns to map horsepower values to fuel efficiency (MPG) predictions.
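The test loss reported by evaluate() is the mean absolute error in MPG, i.e. how far off the predictions are on average. With hypothetical labels and predictions, the computation looks like this −

```python
import numpy as np

# Hypothetical true MPG values and model predictions
y_true = np.array([18.0, 24.0, 31.0])
y_pred = np.array([20.0, 22.0, 30.0])

# Mean absolute error: average of |true - predicted|
mae = np.mean(np.abs(y_true - y_pred))
print(round(mae, 2))  # 1.67
```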
Conclusion
This DNN model learns the relationship between horsepower and fuel efficiency through its multiple hidden layers. The deep architecture allows the model to capture non-linear patterns in the Auto MPG dataset, and it yields reasonable MPG predictions even from this single input feature; training on all of the features would typically improve accuracy further.
