Perceptron Algorithm for AND Logic Gate with 2-bit Binary Input


Introduction

The Perceptron Algorithm, a cornerstone of artificial intelligence and machine learning, forms the basis for more complex neural network architectures. In this article, we explore how the Perceptron Algorithm can be applied to implement the AND logic gate with 2-bit binary inputs. The AND gate, a fundamental binary logic gate, produces an output of 1 only when both inputs are 1; otherwise, the output is 0. Through a step-by-step explanation of the Perceptron Algorithm and a Python implementation, we show how the algorithm can be trained to reproduce the behavior of the AND gate.

Understanding the AND Logic Gate

Before diving into the Perceptron Algorithm, let's briefly examine the AND logic gate. The AND gate is a fundamental binary logic gate that takes two binary inputs (0 or 1) and produces an output of 1 only if both inputs are 1. Otherwise, the output is 0. The truth table for the AND gate is as follows:

Input A    Input B    Output
0          0          0
0          1          0
1          0          0
1          1          1
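As a quick sanity check, the truth table can be reproduced directly in Python with the built-in bitwise AND operator. This snippet is purely illustrative and independent of the perceptron implementation below.

# Verify the AND truth table using Python's bitwise AND operator
for A in (0, 1):
    for B in (0, 1):
        print(f"Input: ({A}, {B})  Output: {A & B}")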

Implementation of AND Logic Gate

Training the Perceptron: To train the perceptron to act as an AND gate, we need to adjust the weights and bias using a training dataset. For the AND gate, we have four possible input combinations: (0,0), (0,1), (1,0), and (1,1). Our desired outputs are 0, 0, 0, and 1, respectively.

Algorithm

Step 1: Initialize the weights (w1, w2) and bias (b) with small random values.

Step 2: Feed in the input (A, B) and calculate the weighted sum w1*A + w2*B + b.

Step 3: Apply the activation (step) function to the weighted sum.

Step 4: Compare the output with the desired output and calculate the error.

Step 5: Adjust the weights and bias using the Perceptron learning rule (a single update is traced in the sketch below).
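To make Steps 2 through 5 concrete, here is a minimal trace of one training step. The starting values w1 = w2 = 0.2, b = 0 are hypothetical, chosen so that a weight change actually occurs; they are not the values used in the full example that follows.

# A single perceptron update, using hypothetical starting values
# (w1 = w2 = 0.2, b = 0) so that the error is nonzero.
w1, w2, b = 0.2, 0.2, 0.0
learning_rate = 0.1

A, B, target = 0, 1, 0                  # AND(0, 1) should be 0
weighted_sum = w1 * A + w2 * B + b      # 0.0 + 0.2 + 0.0 = 0.2
output = 1 if weighted_sum >= 0 else 0  # step fires: output = 1 (wrong)
error = target - output                 # 0 - 1 = -1

w1 += learning_rate * error * A         # unchanged, since A = 0
w2 += learning_rate * error * B         # 0.2 - 0.1 = 0.1
b += learning_rate * error              # 0.0 - 0.1 = -0.1
print(w1, w2, b)                        # 0.2 0.1 -0.1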

Example

# Initial weights and bias (these particular values happen to
# already satisfy the AND gate, so training converges immediately)
w1, w2, b = 0.5, 0.5, -1

# Heaviside step activation: fire (1) when the weighted sum is >= 0
def activate(x):
    return 1 if x >= 0 else 0

def train_perceptron(inputs, desired_outputs, learning_rate, epochs):
    global w1, w2, b
    for epoch in range(epochs):
        total_error = 0
        for i in range(len(inputs)):
            A, B = inputs[i]
            target_output = desired_outputs[i]
            # Forward pass: weighted sum followed by the step function
            output = activate(w1 * A + w2 * B + b)
            error = target_output - output
            # Perceptron learning rule
            w1 += learning_rate * error * A
            w2 += learning_rate * error * B
            b += learning_rate * error
            total_error += abs(error)
        # Stop early once every training example is classified correctly
        if total_error == 0:
            break


# Training data: the four input combinations and their AND outputs
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
desired_outputs = [0, 0, 0, 1]
learning_rate = 0.1
epochs = 100

train_perceptron(inputs, desired_outputs, learning_rate, epochs)

# Verify the trained perceptron against all four inputs
for i in range(len(inputs)):
    A, B = inputs[i]
    output = activate(w1 * A + w2 * B + b)
    print(f"Input: ({A}, {B})  Output: {output}")

Output

Input: (0, 0)  Output: 0
Input: (0, 1)  Output: 0
Input: (1, 0)  Output: 0
Input: (1, 1)  Output: 1
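Note that the chosen initial values (w1 = 0.5, w2 = 0.5, b = -1) already classify all four input pairs correctly, so the training loop exits in the first epoch without changing any parameters. To inspect the learned values for yourself, you can print them after training; this one-liner is just an illustrative addition to the script above.

# Inspect the parameters after training
print(f"w1 = {w1}, w2 = {w2}, b = {b}")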

The Perceptron Algorithm

The Perceptron Algorithm is a simple yet foundational supervised learning algorithm used for binary classification tasks. It is particularly well-suited to linearly separable datasets, where the two classes can be divided by a straight line.
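The AND gate is linearly separable: for instance, the line A + B = 1.5 cleanly separates the single positive case (1,1) from the other three. The line and threshold here are illustrative choices, not values the algorithm is guaranteed to learn.

# Illustrative separating line for AND: classify 1 when A + B >= 1.5
for A in (0, 1):
    for B in (0, 1):
        predicted = 1 if A + B >= 1.5 else 0
        print(f"({A}, {B}) -> {predicted}")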

For the AND logic gate, which takes two binary inputs and produces a binary output according to a predefined truth table, the Perceptron Algorithm can effectively learn to make correct predictions.

The algorithm begins by initializing the weights and bias to small random values. The weights are multiplied by the input features, and the bias is added to compute the weighted sum. The activation function, typically a step function, determines whether the output neuron fires or not.
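In code, this forward pass amounts to a weighted sum plus a bias, passed through a step function. The following is a minimal sketch; the function and variable names are illustrative, not part of the example script above.

# Minimal perceptron forward pass: weighted sum, bias, step activation
def predict(x1, x2, w1, w2, b):
    weighted_sum = w1 * x1 + w2 * x2 + b
    return 1 if weighted_sum >= 0 else 0

print(predict(1, 1, 0.5, 0.5, -1))  # 1: the weighted sum 0.0 reaches the threshold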

During training, the perceptron iteratively updates its weights and bias based on the prediction errors, using a learning rate as a scaling factor for the updates. The process continues until the algorithm converges, i.e., until the weights and bias have adjusted to correctly map the input-output pairs.
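The update rule itself is compact: each weight moves in proportion to the error times its own input, and the bias moves in proportion to the error alone. The helper below is an illustrative restatement of the updates performed inside the training loop earlier; its name and signature are assumptions for the sketch.

# Perceptron learning rule for one example: w_i <- w_i + lr * error * x_i
def update(w1, w2, b, x1, x2, target, lr):
    output = 1 if w1 * x1 + w2 * x2 + b >= 0 else 0
    error = target - output
    return (w1 + lr * error * x1,
            w2 + lr * error * x2,
            b + lr * error)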

Despite its simplicity, the Perceptron Algorithm paved the way for more sophisticated neural network architectures, laying the groundwork for modern deep learning techniques.

The Perceptron Algorithm is based on the concept of a simplified artificial neuron called a perceptron. The perceptron takes multiple inputs, each multiplied by its respective weight, and produces a binary output depending on whether the weighted sum of the inputs crosses a certain threshold.

For the AND logic gate implementation, we consider two binary inputs, A and B, and initialize weights (w1, w2) and a bias (b) for the perceptron. The bias acts as an offset, comparable to the intercept term in linear regression. The activation function used here is the Heaviside step function, which returns 1 if the input is greater than or equal to zero; otherwise, it returns 0.

Conclusion

The Perceptron Algorithm provides a simple yet effective way to implement binary classification tasks, such as the AND logic gate. The concept of the perceptron serves as a foundational building block for the more complex neural networks used in modern machine learning applications. Understanding the Perceptron Algorithm is essential for grasping the fundamentals of artificial intelligence and its many applications in today's world.
