Deep Neural Network with Forward and Back Propagation


Introduction

Artificial intelligence and machine learning have been transformed by Deep Neural Networks (DNNs), which have enabled remarkable progress across a variety of domains. In this article, we'll look at the ideas of forward and backward propagation and how they relate to training deep neural networks. Python libraries like TensorFlow have greatly simplified the implementation of these networks, making them more accessible to researchers and practitioners.

Approach 1 : TensorFlow

In this approach, we use the TensorFlow library to implement a deep neural network with forward and backpropagation. We define the architecture of the neural network using the Keras API, compile the model with an optimizer and loss function, and train the model by fitting it to the training data. Finally, we make predictions on new data using the trained model. TensorFlow provides a high-level interface that simplifies the implementation process and offers a variety of optimization algorithms and evaluation metrics.
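For instance, the string shortcuts used in the example below ('adam', 'binary_crossentropy', 'accuracy') can also be passed as explicit Keras objects, which makes their hyperparameters visible. A minimal sketch, assuming a model built as in the example (the learning rate of 0.01 is only an illustrative value, not one taken from the example):

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.01),
    loss=keras.losses.BinaryCrossentropy(),
    metrics=[keras.metrics.BinaryAccuracy()]
)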

Algorithm

Step 1 : Define the architecture of the deep neural network using the Keras API.

Step 2 : Compile the model by specifying the optimizer, loss function, and optional metrics.

Step 3 : Train the model by fitting it to the training data, specifying the number of epochs and batch size.

Step 4 : Evaluate the trained model on the test data to assess its performance (see the sketch after the example output below).

Step 5 : Make predictions on new data using the trained model.
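Note that the example below relies on Keras's default batch size of 32 (larger than this toy dataset, so each epoch runs as a single batch); it could be set explicitly, for example model.fit(m, n, epochs=10000, batch_size=4, verbose=0).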

Example

import numpy as num1
import tensorflow as tflow
from tensorflow import keras

num1.random.seed(42)
tflow.random.set_seed(42)

# Define the training data (the target is the XOR of the two inputs)
m = num1.array([[0, 0], [0, 1], [1, 0], [1, 1]])
n = num1.array([[0], [1], [1], [0]])

# Define the model architecture
model = keras.Sequential([
    keras.layers.Dense(units=2, activation='sigmoid', input_shape=(2,)),
    keras.layers.Dense(units=1, activation='sigmoid')
])

# Compile the model with an optimizer, loss function, and metric
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train the model
model.fit(m, n, epochs=10000, verbose=0)

# Predict on new data
predictions = model.predict(m)
predictions = num1.round(predictions)
print('Predictions:')
print(predictions)

Output

1/1 [==============================] - 0s 84ms/step
Predictions:
[[0.]
 [1.]
 [1.]
 [0.]]
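Step 4 of the algorithm (evaluation) is not shown in the example above. Since this toy problem has no separate test set, a minimal sketch would simply evaluate the trained model on the same XOR data:

loss, accuracy = model.evaluate(m, n, verbose=0)
print('Loss:', loss)
print('Accuracy:', accuracy)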

Approach 2 : Manual Implementation with NumPy

In this approach, we manually implement a deep neural network using only the NumPy library. We define the sigmoid activation function and its derivative, and initialize the weights and biases randomly. Then, we iterate through a fixed number of epochs, performing forward propagation and backpropagation to update the weights and biases. Finally, we make predictions with the trained parameters. This approach gives a clear understanding of the underlying concepts but requires careful implementation of the mathematical equations.
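Concretely, the example below implements the following equations, where $\sigma$ is the sigmoid, $\eta$ the learning rate, $X$ the inputs, $y$ the targets, and $\odot$ denotes element-wise multiplication:

$$h = \sigma(X W_1 + b_1), \qquad \hat{y} = \sigma(h W_2 + b_2)$$

$$\delta_{out} = (y - \hat{y}) \odot \hat{y}(1 - \hat{y}), \qquad \delta_{hid} = (\delta_{out} W_2^{\top}) \odot h(1 - h)$$

$$W_2 \leftarrow W_2 + \eta\, h^{\top} \delta_{out}, \qquad W_1 \leftarrow W_1 + \eta\, X^{\top} \delta_{hid}$$

The biases are updated with the column sums of the corresponding deltas.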

Algorithm

Step 1 : Define the sigmoid activation function.

Step 2 : Define the derivative of the sigmoid function.

Step 3 : Initialize the weights and biases.

Step 4 : Set the learning rate and number of epochs.

Step 5 : Train the model using forward propagation and back propagation.

Step 6 : Predict on new data using the trained model.

Example

import numpy as num1

# Define the sigmoid activation function
def sigmoid(x):
    return 1 / (1 + num1.exp(-x))

# Define the derivative of the sigmoid function
# (expects the sigmoid output as its argument)
def sigmoid_derivative(x):
    return x * (1 - x)

# Define the training data (the target is the NOR of the two inputs)
X = num1.array([[0, 1], [0, 0], [1, 0], [1, 1]])
y = num1.array([[0], [1], [0], [0]])

num1.random.seed(42)

# Initialize the weights and biases
input_neurons = 2
hidden_neurons = 2
output_neurons = 1

weights_input_hidden = num1.random.uniform(size=(input_neurons, hidden_neurons))
biases_input_hidden = num1.random.uniform(size=(1, hidden_neurons))

weights_hidden_output = num1.random.uniform(size=(hidden_neurons, output_neurons))
biases_hidden_output = num1.random.uniform(size=(1, output_neurons))

# Set the learning rate and number of epochs
learning_rate = 0.1
epochs = 10000

# Train the model
for epoch in range(epochs):
    # Forward propagation
    hidden_layer_input = num1.dot(X, weights_input_hidden) + biases_input_hidden
    hidden_layer_output = sigmoid(hidden_layer_input)

    output_layer_input = num1.dot(hidden_layer_output, weights_hidden_output) + biases_hidden_output
    output_layer_output = sigmoid(output_layer_input)

    # Backpropagation
    error = y - output_layer_output
    d_output = error * sigmoid_derivative(output_layer_output)

    hidden_error = num1.dot(d_output, weights_hidden_output.T)
    d_hidden = hidden_error * sigmoid_derivative(hidden_layer_output)

    # Update the weights and biases
    weights_hidden_output += num1.dot(hidden_layer_output.T, d_output) * learning_rate
    biases_hidden_output += num1.sum(d_output, axis=0, keepdims=True) * learning_rate

    weights_input_hidden += num1.dot(X.T, d_hidden) * learning_rate
    biases_input_hidden += num1.sum(d_hidden, axis=0, keepdims=True) * learning_rate

# Predict with the trained weights and biases
hidden_layer_input = num1.dot(X, weights_input_hidden) + biases_input_hidden
hidden_layer_output = sigmoid(hidden_layer_input)

output_layer_input = num1.dot(hidden_layer_output, weights_hidden_output) + biases_hidden_output
output_layer_output = sigmoid(output_layer_input)

predictions = num1.round(output_layer_output)
print('Predictions:')
print(predictions)

Output

Predictions:
[[0.]
 [1.]
 [0.]
 [0.]]
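The forward pass used for prediction is repeated verbatim from the training loop; a small helper function avoids that duplication. A minimal sketch, assuming the trained weights and biases from the example above:

def predict(inputs):
    # Forward pass through the trained two-layer network
    hidden = sigmoid(num1.dot(inputs, weights_input_hidden) + biases_input_hidden)
    output = sigmoid(num1.dot(hidden, weights_hidden_output) + biases_hidden_output)
    return num1.round(output)

print(predict(num1.array([[1, 1]])))  # prints [[0.]], matching the last row of the output above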

Conclusion

Deep neural networks with forward and back propagation have revolutionized the field of machine learning. Ongoing research and development in the field continues to focus on improving the efficiency and interpretability of deep neural networks, ensuring their continued success and relevance in solving real-world problems.
