Artificial Neural Network for XOR Logic Gate with 2-bit Binary Input


Introduction

Artificial Neural Networks (ANNs) have emerged as effective tools in the field of machine learning, allowing us to solve complex problems that were once considered challenging for conventional computational methods. Among these problems is the XOR logic gate, a fundamental example that highlights the nonlinearity of certain logical operations. An XOR gate takes two binary inputs and produces an output that is true only when the inputs differ. In this article, we will explore how to implement an artificial neural network specifically designed to solve the XOR problem with 2-bit binary inputs.

Understanding XOR Logic Gate

The XOR (Exclusive OR) logic gate operates on two binary inputs, producing a true output if the inputs differ and a false output if they are the same.
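
As a quick sanity check, the same behaviour can be reproduced directly with NumPy's logical_xor function. The snippet below only illustrates the gate itself and is not part of the network implementation developed later in this article; the full truth table follows.

#evaluate XOR for every 2-bit input combination
import numpy as np

a = np.array([0, 0, 1, 1])
b = np.array([0, 1, 0, 1])

#XOR is true only where the two inputs differ
print(np.logical_xor(a, b).astype(int))   #prints [0 1 1 0]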

The Truth Table for a 2-bit XOR Gate is as follows:

Input A   Input B   Output
0         0         0
0         1         1
1         0         1
1         1         0

As you can see from the truth table, the XOR gate's output cannot be represented by a single linear equation. This nonlinearity makes it a challenging problem for traditional computational methods, but artificial neural networks excel at solving such nonlinear problems.
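
To make this limitation concrete, the sketch below trains a single sigmoid unit (no hidden layer) on the XOR data, using the same error-times-derivative update style as the full example later in this article. Because XOR is not linearly separable, the unit cannot learn the pattern. This snippet, including its learning rate and iteration count, is an illustrative assumption and not part of the original implementation.

#a single sigmoid unit with no hidden layer cannot learn XOR
import numpy as np

def sm(x):
    return 1 / (1 + np.exp(-x))

#XOR inputs and expected outputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

np.random.seed(0)
w = 2 * np.random.random((2, 1)) - 1   #weights of the single output unit
b = np.zeros((1, 1))                   #bias term

for _ in range(20000):
    out = sm(X.dot(w) + b)             #forward pass through one sigmoid unit
    err = y - out
    delta = err * out * (1 - out)      #error multiplied by the sigmoid derivative
    w += 0.1 * X.T.dot(delta)          #small learning rate for stable updates
    b += 0.1 * delta.sum(axis=0, keepdims=True)

#the unit never learns the 0/1 XOR pattern; its outputs typically hover near 0.5
print(out.round(2))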

Implementation of ANN for XOR Logic Gate

Algorithm

Step 1 : Initialize the inputs and the corresponding outputs for the XOR logic gate.

Step 2 : Define the sigmoid activation function sm(x) and its derivative sm_derivative(x). Note that sm_derivative is written in terms of the sigmoid's output, since the derivative of the sigmoid can be expressed as sm(x) * (1 - sm(x)).

Step 3 : Initialize the weights randomly with a mean of 0. The weights are matrices that connect the neurons in the different layers.

Step 4 : Set the number of training iterations (epochs).

Step 5 : Start the training loop that runs once per epoch.

Step 6 : Perform forward propagation

  • Set l0 as the input layer.

  • Compute l1 by applying the sigmoid activation function to the dot product of l0 and w0.

  • Compute l2 by applying the sigmoid activation function to the dot product of l1 and w1.

Step 7 : Perform backpropagation

  • Compute the error between the predicted output and the actual output.

  • Compute layer2_delta by multiplying layer2_error with the derivative of the sigmoid function applied to l2.

  • Compute the error in the hidden layer by taking the dot product of layer2_delta and the transpose of w1.

  • Compute layer1_delta by multiplying layer1_error with the derivative of the sigmoid function applied to l1.

Step 8 : Update the weights

  • Update w1 by adding the dot product of the transpose of l1 and layer2_delta.

  • Update w0 by adding the dot product of the transpose of l0 and layer1_delta.

  • Test the neural network by feeding the test inputs to the trained network.

Step 9 : Compute predicted_output by performing forward propagation on the test inputs using the trained weights. Apply the sigmoid function at each layer and round the values to get the final predicted outputs.

Step 10 : Print the predicted output for each input by iterating over the test inputs and their corresponding predicted outputs.

Example

#import the required module
import numpy as np

#define a sigmoid function
def sm(x):
    return 1 / (1 + np.exp(-x))

#derivative of the sigmoid, written in terms of the sigmoid's output
def sm_derivative(x):
    return x * (1 - x)

#XOR training data: all 2-bit input combinations and their expected outputs
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
outputs = np.array([[0], [1], [1], [0]])

#seed the random number generator so the results are reproducible
np.random.seed(42)


#initialize the weights randomly with mean 0
#w0 maps the 2 inputs to 4 hidden neurons, w1 maps the hidden layer to the single output
w0 = 2 * np.random.random((2, 4)) - 1
w1 = 2 * np.random.random((4, 1)) - 1


#number of training iterations
epochs = 10000


#training loop
for epoch in range(epochs):
    #forward propagation
    l0 = inputs
    l1 = sm(np.dot(l0, w0))
    l2 = sm(np.dot(l1, w1))
    

    #backpropagation
    layer2_error = outputs - l2
    layer2_delta = layer2_error * sm_derivative(l2)
    layer1_error = layer2_delta.dot(w1.T)
    layer1_delta = layer1_error * sm_derivative(l1)
    
    
    #update the weights
    w1 += l1.T.dot(layer2_delta)
    w0 += l0.T.dot(layer1_delta)


#test the trained network on every input combination
ti = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
pr = sm(np.dot(sm(np.dot(ti, w0)), w1))
pr = np.round(pr)


#print the predicted output for each test input
for i in range(len(ti)):
    print(f"Input: {ti[i]}, Estimated Output: {pr[i]}")

Output

Input: [0 0], Estimated Output: [0.] 
Input: [0 1], Estimated Output: [1.] 
Input: [1 0], Estimated Output: [1.] 
Input: [1 1], Estimated Output: [0.] 
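
If you want to see how confident the trained network is before rounding, you can also print the raw sigmoid outputs. The small check below is a suggested addition (not part of the original example) and reuses the ti, w0, and w1 variables from the code above.

#raw (unrounded) network outputs -- values close to 0 or 1 indicate a confident prediction
raw = sm(np.dot(sm(np.dot(ti, w0)), w1))
for inp, val in zip(ti, raw):
    print(f"Input: {inp}, Raw output: {val[0]:.4f}")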

Conclusion

XOR neural networks provide a foundation for understanding nonlinear problems and have applications beyond binary logic gates; the same ideas carry over to tasks such as image recognition and natural language processing. However, their performance depends heavily on the quality and diversity of the training data. In addition, the complexity of the problem and the available computational resources must be considered when designing and training such networks. As research and advances in neural network architectures continue, we can expect even more sophisticated models to tackle increasingly complex problems in the future.
