Perceptron Algorithm for XOR Logic Gate with 2-bit Binary Input


In the world of artificial intelligence, neural networks have emerged as a powerful tool for solving complex problems. One of their fundamental elements is the perceptron, a simple algorithm that forms the building block of more sophisticated neural network architectures. In this article, we take a closer look at how the XOR logic gate can be approached with the perceptron algorithm using 2-bit binary inputs.

Perceptron Algorithm for XOR logic gate

Before we dive deep into our exploration, let's familiarize ourselves with one of computer science's classic challenges: understanding and replicating an XOR logic gate. Known as Exclusive OR, this logical function returns true (1) if exactly one of its two inputs is true, and false (0) otherwise. Because its outputs are not linearly separable, no single straight decision boundary can split the true cases from the false ones, so traditional binary classifiers such as a single-layer perceptron struggle to handle XOR-like problems accurately.
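As a quick sanity check of the gate's behaviour, the truth table can be printed with Python's built-in bitwise XOR operator (^); this short snippet only illustrates the expected outputs and is not the perceptron implementation itself.

# Printing the XOR truth table using Python's bitwise XOR operator
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} XOR {b} = {a ^ b}")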

Enter Perceptron Algorithm

Designed by Frank Rosenblatt in 1958, the perceptron algorithm revolutionized early AI research. It mimics biological neurons found in human brains while capitalizing on mathematical principles to make accurate predictions or decisions based on input patterns.

XOR Logic Gate Implementation in Steps

To implement XOR logic using a two-layer perceptron architecture, we need to follow these steps:

Step 1: Define the input binary values (0 and 1) for all possible combinations. For the XOR gate, the four input pairs are (0, 0), (0, 1), (1, 0), and (1, 1).

Step 2: Assign initial random weights and a bias value. As a starting point for training the perceptron, it is customary to assign random weights between -1 and +1.

Step 3: Train the perceptron by adjusting the weights. We iterate through the inputs, compute the predicted output as the weighted sum passed through an activation function (typically a threshold-based step function or a sigmoid curve), compare that prediction against the truth table, and adjust the weights and bias in proportion to the error.

Step 4: Evaluate training results. After several passes over the training data, with weight adjustments driven by the difference between predicted and expected outputs, check model performance by comparing the resulting predictions against the actual XOR truth table. A minimal sketch of this training loop is shown below.
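The following is a minimal sketch of the training loop described in Steps 2 to 4, using the classic perceptron weight-update rule; the learning rate, number of epochs, and random seed here are illustrative assumptions rather than part of the final example later in this article. Because XOR is not linearly separable, this single-layer loop can never reach a perfect fit: at least one of the four cases stays misclassified no matter how long it trains.

# A minimal sketch of Steps 2-4 with the classic perceptron update rule
import numpy as np

# Step 2: random initial weights in [-1, 1] and a random bias
rng = np.random.default_rng(0)
weights = rng.uniform(-1, 1, size=2)
bias = rng.uniform(-1, 1)
learning_rate = 0.1

# XOR inputs and target outputs
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
targets = np.array([0, 1, 1, 0])

# Step 3: iterate over the data and adjust weights by the prediction error
for epoch in range(20):
    for x, t in zip(inputs, targets):
        prediction = 1 if np.dot(x, weights) + bias >= 0 else 0
        error = t - prediction
        weights = weights + learning_rate * error * x
        bias = bias + learning_rate * error

# Step 4: evaluate the adjusted weights against the XOR truth table
for x, t in zip(inputs, targets):
    prediction = 1 if np.dot(x, weights) + bias >= 0 else 0
    print(x, "predicted:", prediction, "expected:", t)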

Working of the Algorithm

At its core, a perceptron consists of three components: input weights (w1 and w2), bias value (b), and an activation function.

  • Input Weights - These play the role of synaptic connections between neurons and determine how much influence each input has during information processing.

  • Bias Value - The bias shifts the decision threshold, letting the perceptron fire (or stay silent) even when the weighted inputs alone would not cross zero, which helps when the inputs are imbalanced.

  • Activation Function - This determines whether the neuron fires or remains dormant, based on the weighted sum of the inputs plus the bias, as the short sketch below illustrates.
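Putting the three components together, a single perceptron's prediction is simply the weighted sum of its inputs plus the bias, passed through the activation function; the tiny sketch below only restates that formula with placeholder names (x1, x2, w1, w2, b).

# A single perceptron's prediction: weighted sum plus bias, passed through a step activation
def perceptron_output(x1, x2, w1, w2, b):
    weighted_sum = w1 * x1 + w2 * x2 + b
    return 1 if weighted_sum >= 0 else 0

print(perceptron_output(1, 0, 7, -3, -0.3))   # prints 1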

Python code - Perceptron algorithm for XOR logic gate with 2-bit binary input

The perceptron algorithm is demonstrated below in Python by applying a single perceptron, with fixed weights and bias, to the inputs of the XOR logic gate.

Algorithm

Step 1: Import the numpy module and define the step function stepfun with one parameter, x.

Step 2: Initialize the weights for the two inputs and set the bias value.

Step 3: Calculate the weighted sum of the inputs and the weights, and add the bias.

Step 4: Apply the step function to the weighted sum.

Step 5: Declare the inputs for the XOR gate and calculate the predicted output for each input.

Example

# Importing the numpy module
import numpy as np

# Defining the step function stepfun with one argument, x
def stepfun(x):
    return 1 if x >= 0 else 0

# The main class is defined
class MainAlgorithm:
    def __init__(self):
        self.weights = np.array([7, -3])   # Initializing the weights of the two inputs
        self.bias = -0.3                   # Bias value

    def predict(self, input_X):
        weighted_sum = np.dot(input_X, self.weights) + self.bias   # Calculating the weighted sum
        output = stepfun(weighted_sum)                             # Applying the step function
        return output

perceptron = MainAlgorithm()

# The given inputs and target outputs for the XOR logic gate
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
target_outputs_xor = np.array([0, 1, 1, 0])

# Looping over each input together with its desired output using zip
for input1, desired_output in zip(inputs, target_outputs_xor):
    predicted_output = perceptron.predict(input1)
    # Printing the input, the predicted output, and the desired output
    print(f"Input: {input1} \nPredicted Output: {predicted_output} \nDesired Output: {desired_output}")

Output

Input: [0 0]
Predicted Output: 0
Desired Output: 0
Input: [0 1]
Predicted Output: 0
Desired Output: 1
Input: [1 0]
Predicted Output: 1
Desired Output: 1
Input: [1 1]
Predicted Output: 1
Desired Output: 0
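As the output shows, the single perceptron with these fixed weights gets two of the four cases wrong, which is exactly the limitation described earlier: XOR is not linearly separable, so no single set of weights and bias can reproduce the full truth table. A two-layer arrangement, as mentioned in the implementation steps, can represent XOR. The sketch below hand-picks weights for two hidden neurons (one acting like OR, the other like NAND) and one output neuron that combines them; these weight values are illustrative assumptions rather than trained results.

# A two-layer perceptron sketch for XOR with hand-picked weights
def step(x):
    return 1 if x >= 0 else 0

def two_layer_xor(x1, x2):
    h1 = step(1 * x1 + 1 * x2 - 0.5)     # hidden neuron 1: behaves like OR
    h2 = step(-1 * x1 - 1 * x2 + 1.5)    # hidden neuron 2: behaves like NAND
    return step(1 * h1 + 1 * h2 - 1.5)   # output neuron: AND of OR and NAND = XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(f"Input: ({a}, {b})  Output: {two_layer_xor(a, b)}")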

Conclusion

The application of the perceptron algorithm to an XOR logic gate with two-bit binary input has taken us on a short journey through neural networks, and understanding how XOR gates behave within such frameworks offers valuable insight into a model's decision-making capabilities. By leveraging Rosenblatt's pioneering work from over half a century ago, today's researchers continue to build on this foundation towards more advanced machine learning algorithms that can tackle complex problems.

