Artificial Neural Network for XNOR Logic Gate with 2-bit Binary Input


Introduction

Artificial Neural Networks (ANNs) are powerful computational models inspired by the neural structure of the human brain. They have found broad application in areas such as pattern recognition, image processing, and decision-making systems. In this article, we explore the implementation of an Artificial Neural Network for the XNOR logic gate with 2-bit binary input. We will cover the concept of the XNOR gate, the structure of an artificial neural network, and the training procedure used to achieve accurate results.

XNOR Gate

The XNOR (exclusive NOR) gate is a fundamental logic gate that produces a high output only when both inputs are the same. In other words, if both inputs are equal, the output is high (1); if the inputs differ, the output is low (0).

The truth table for the XNOR gate with 2-bit binary inputs is as follows:

Input A   Input B   Output
   0         0        1
   0         1        0
   1         0        0
   1         1        1
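Before training a network for it, note how small the target function is. The following minimal sketch (purely illustrative; the helper name xnor is introduced here and is not part of the network code) reproduces the truth table directly by treating XNOR as an equality test on the two bits:

def xnor(a, b):
    # XNOR is high exactly when the two bits are equal,
    # i.e. the complement of XOR: 1 - (a ^ b) for 0/1 inputs
    return int(a == b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xnor(a, b))

The neural network below learns this same mapping from examples instead of having it hand-coded.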

Implementation of ANN for XNOR Logic Gate

Algorithm

Step 1: Import the numpy library as np.

Step 2: Define the sigmoid activation function:

Implement a sigmoid function that takes an input x and returns the sigmoid activation value.

Step 3: Define the derivative of the sigmoid function:

Implement a sigmoid_derivative function that takes an input x and returns the derivative of the sigmoid activation. In the code below this function is applied to values that have already passed through the sigmoid, so it simply computes x * (1 - x).
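A detail worth keeping in mind: the sigmoid derivative satisfies sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)), so when the function receives an already-activated value s it reduces to s * (1 - s). The short standalone check below (illustrative only; it repeats the two helper definitions so it can run on its own) confirms this against a numerical central difference:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(s):
    # expects s = sigmoid(x), the already-activated value
    return s * (1 - s)

x = 0.5
analytic = sigmoid_derivative(sigmoid(x))                 # sigmoid(x) * (1 - sigmoid(x))
numeric = (sigmoid(x + 1e-6) - sigmoid(x - 1e-6)) / 2e-6  # central-difference estimate
print(analytic, numeric)                                  # both are roughly 0.235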

Step 4: Define the input and output data:

  • Create a numpy array of inputs representing the input data for the XNOR logic gate.

  • Create a numpy array of outputs representing the corresponding desired outputs.

Step 5: Initialize the weights and set the random seed:

  • Use np.random.rand() to initialize the weights weights_1 and weights_2. Here weights_1 has shape (2, 4), connecting the 2 inputs to 4 hidden units, and weights_2 has shape (4, 1), connecting the hidden units to the single output.

  • Set the random seed using np.random.seed() to obtain reproducible results during training.

Step 6: Set the learning rate and number of epochs:

  • Assign a learning rate value, indicating the step size for weight updates.

  • Set the number of epochs, defining the number of training iterations.

Step 7: Train the neural network:

  • Use a for loop to iterate for the desired number of epochs.

  • Calculate the weighted sum of inputs and weights for the hidden layer.

  • Apply the sigmoid activation function to obtain the hidden layer output.

  • Calculate the weighted sum of hidden layer outputs and weights for the output layer.

  • Apply the sigmoid activation function to obtain the predicted output.

  • Calculate the error between the predicted output and the desired output (the matrix form of this forward pass is sketched just after this list).
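In compact matrix notation (X, W1, W2, H and y are shorthand introduced here for the input array, the two weight matrices, the hidden activations and the target outputs; they are not names from the code), the forward pass performed in each epoch is:

$$H = \sigma(X W_1), \qquad \hat{y} = \sigma(H W_2), \qquad E = y - \hat{y}$$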

Step 8: Perform backpropagation:

  • Calculate the derivative of the output layer activation.

  • Update the output layer weights using this derivative and the hidden layer output.

  • Calculate the hidden layer error from the output layer error and weights.

  • Calculate the derivative of the hidden layer activation.

  • Update the hidden layer weights using this derivative and the input data (the corresponding update equations are sketched just after this list).
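Using the same shorthand, with eta denoting the learning rate, the updates applied in each epoch correspond to the following (note that, matching sigmoid_derivative in the code, the derivative sigma' is evaluated on the already-activated values):

$$\delta_{out} = E \odot \sigma'(\hat{y}), \qquad \delta_{hid} = (\delta_{out} W_2^{\top}) \odot \sigma'(H)$$

$$W_2 \leftarrow W_2 + \eta\, H^{\top} \delta_{out}, \qquad W_1 \leftarrow W_1 + \eta\, X^{\top} \delta_{hid}$$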

Step 9: Test the neural network:

  • Use the trained weights to predict the output for each input in the inputs array.

  • Iterate over the inputs and print the predicted output, using np.round() to round the values.

Example

import numpy as np

# Sigmoid activation function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Derivative of the sigmoid, expressed in terms of the already-activated value x
def sigmoid_derivative(x):
    return x * (1 - x)

# 2-bit binary inputs for the XNOR gate
inputs = np.array([[0, 0],
                   [0, 1],
                   [1, 0],
                   [1, 1]])

# Desired XNOR outputs
outputs = np.array([[1],
                    [0],
                    [0],
                    [1]])

# Fix the random seed for reproducible weight initialization
np.random.seed(42)

# Weights: 2 inputs -> 4 hidden units -> 1 output
weights_1 = np.random.rand(2, 4)
weights_2 = np.random.rand(4, 1)

# Training hyperparameters
learning_rate = 0.1
epochs = 10000

# Training loop
for epoch in range(epochs):
    # Forward pass
    hidden_layer_input = np.dot(inputs, weights_1)
    hidden_layer_output = sigmoid(hidden_layer_input)

    output_layer_input = np.dot(hidden_layer_output, weights_2)
    output = sigmoid(output_layer_input)

    # Error and backpropagation
    error = outputs - output
    d_output = error * sigmoid_derivative(output)

    hidden_error = d_output.dot(weights_2.T)
    d_hidden = hidden_error * sigmoid_derivative(hidden_layer_output)

    # Weight updates
    weights_2 += hidden_layer_output.T.dot(d_output) * learning_rate
    weights_1 += inputs.T.dot(d_hidden) * learning_rate

# Testing: predict with the trained weights
hidden_layer = sigmoid(np.dot(inputs, weights_1))
predicted_output = sigmoid(np.dot(hidden_layer, weights_2))

for i in range(len(inputs)):
    print(f"Input: {inputs[i]}, Predicted Output: {np.round(predicted_output[i][0])}")

Output

Input: [0 0], Predicted Output: 1.0 
Input: [0 1], Predicted Output: 0.0 
Input: [1 0], Predicted Output: 0.0 
Input: [1 1], Predicted Output: 1.0 
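As a quick sanity check (assuming the listing above has just been executed, so outputs and predicted_output are still in scope), the rounded predictions can be compared against the truth table programmatically:

# Verify the rounded predictions against the desired XNOR outputs
assert np.array_equal(np.round(predicted_output), outputs)
print("All four XNOR cases predicted correctly.")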

Conclusion

The implementation of an Artificial Neural Network for the XNOR logic gate with 2-bit binary input demonstrates the power and flexibility of neural networks in solving logical problems. The combination of ANNs and XNOR gates has notable implications for computer architecture, digital circuit design, and other areas requiring complex logical computations. With further advances in neural network research and optimization algorithms, we can expect even more sophisticated implementations of logic gates using artificial neural networks in the future.
