Perceptron Algorithm for NOR Logic Gate with 2-bit Binary Input


Introduction

Within the domain of artificial intelligence and machine learning, the Perceptron Algorithm is a fundamental building block for neural networks. The NOR gate is a versatile component: it is functionally complete, so it can be used to construct more complex logic circuits and realize any other logical operation. In this article, we explore how the Perceptron Algorithm can be used to implement the NOR logic gate with 2-bit binary inputs. By understanding the theory behind the Perceptron Algorithm and its application to building NOR gates, we can unlock the potential for designing more complex neural network architectures.

Understanding NOR Logic Gate

The NOR logic gate is a fundamental building block of digital logic circuits. It produces an output of "true" (1) only when both input bits are "false" (0). Otherwise, the output is "false" (0).

The truth table for the NOR gate with 2-bit binary inputs is as follows:

Input A | Input B | Output
   0    |    0    |   1
   0    |    1    |   0
   1    |    0    |   0
   1    |    1    |   0
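As a quick sanity check, the same table can be reproduced directly with Boolean logic in Python (a minimal sketch, independent of the perceptron implementation shown later):

# Evaluate NOR directly for every 2-bit input combination
for a in (0, 1):
    for b in (0, 1):
        nor = int(not (a or b))   # 1 only when both inputs are 0
        print(f"Input A: {a}, Input B: {b}, Output: {nor}")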

Implementation of NOR Logic Gate

To implement a NOR logic gate using the Perceptron Algorithm, we have to find suitable weights and a bias (threshold). Training continues until the predicted output matches the target output for every possible input combination. This iterative process ensures that the perceptron learns to reproduce the behavior of the NOR logic gate.
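Before training, it is worth convincing ourselves that such weights exist. Here is a minimal sketch (the values -1, -1 and bias 0.5 are just one illustrative hand-picked choice, not necessarily what training will converge to) showing that a single perceptron can indeed realize NOR:

import numpy as np

# Hand-picked parameters (illustrative only): negative weights and a small positive bias
weights = np.array([-1.0, -1.0])
bias = 0.5

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    weighted_sum = weights[0] * a + weights[1] * b + bias
    output = 1 if weighted_sum >= 0 else 0   # step activation
    print(f"Input: [{a} {b}], Output: {output}")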

Algorithm Steps

Step 1: Import the required module (NumPy).

Step 2: Initialize the weights and bias randomly or with small values.

Step 3: For each input sample, compute the weighted sum of the inputs and add the bias.

Step 4: Pass the weighted sum through the step activation function to obtain the predicted output.

Step 5: Compare the prediction with the target label and update the weights and bias accordingly.

Step 6: Repeat the process until the required accuracy or convergence is achieved.

Example

import numpy as np


# Step activation: output 1 if the weighted sum is non-negative, otherwise 0
def step_function(x):
    return 1 if x >= 0 else 0


class PerceptronNOR:
    def __init__(self, input_size):
        self.weights = np.random.rand(input_size)
        self.bias = np.random.rand()

    def predict(self, inputs):
        summation = np.dot(inputs, self.weights) + self.bias
        return step_function(summation)

    def train(self, inputs, target_output, learning_rate=0.1, epochs=100):
        for epoch in range(epochs):
            total_error = 0
            for input_data, target in zip(inputs, target_output):
                prediction = self.predict(input_data)
                error = target - prediction
                total_error += abs(error)
                # Perceptron learning rule: adjust weights and bias by the scaled error
                self.weights += learning_rate * error * input_data
                self.bias += learning_rate * error
            # Stop early once every training sample is classified correctly
            if total_error == 0:
                break


# All four 2-bit input combinations
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])


# NOR truth table: output is 1 only when both inputs are 0
target_output = np.array([1, 0, 0, 0])


nor_gate = PerceptronNOR(input_size=2)
nor_gate.train(inputs, target_output)


print("Testing Perceptron NOR gate:")
for input_data in inputs:
    output = nor_gate.predict(input_data)
    print(f"I/P: {input_data}, Output: {output}")

Output

Testing Perceptron NOR gate:
Input: [0 0], Output: 1
Input: [0 1], Output: 0
Input: [1 0], Output: 0
Input: [1 1], Output: 0

The Perceptron Algorithm

The Perceptron Algorithm is a simple yet foundational supervised learning algorithm used for binary classification tasks. Proposed by Frank Rosenblatt in 1957, it mimics the working of a single neuron in a biological brain. The algorithm is particularly well suited to linearly separable datasets, where the two classes can be clearly separated by a straight line.

The algorithm begins by initializing the weights and bias to small random values. The weights are multiplied by the input features, and the bias is added to compute the output. The activation function, typically a step function, determines whether the output neuron fires or not.

During training, the perceptron iteratively updates its weights and bias based on the prediction errors, using a learning rate as a scaling factor for the updates. The process continues until the algorithm converges, i.e., the weights and bias have been adjusted to correctly map the input-output pairs.
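As a sketch of a single update under this rule (the numbers are hypothetical and only for illustration; the train method in the example above applies the same update to every sample in every epoch):

import numpy as np

learning_rate = 0.1
weights = np.array([0.4, 0.7])    # hypothetical current weights
bias = 0.2                        # hypothetical current bias
x = np.array([1, 0])              # one training sample
target = 0                        # NOR(1, 0) = 0

# Forward pass: weighted sum plus bias, then step activation
prediction = 1 if np.dot(x, weights) + bias >= 0 else 0   # 0.6 >= 0, so the prediction is 1
error = target - prediction                               # -1: the neuron fired when it should not have

# Perceptron learning rule: w <- w + lr * error * x,  b <- b + lr * error
weights = weights + learning_rate * error * x             # becomes [0.3, 0.7]
bias = bias + learning_rate * error                       # becomes 0.1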

Despite its simplicity, the Perceptron Algorithm paved the way for more sophisticated neural network architectures, laying the groundwork for modern deep learning techniques.

The Perceptron Algorithm is a supervised learning algorithm used for binary classification problems. It is based on a simplified artificial neuron called a perceptron. The perceptron takes multiple inputs, each multiplied by its respective weight, and produces a binary output based on whether the weighted sum of the inputs crosses a certain threshold.
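Note that this threshold formulation is equivalent to the bias formulation used in the code above: firing when the weighted sum reaches a threshold theta is the same as firing when the weighted sum plus a bias b = -theta is non-negative. A small sketch (the weights and theta = 0.5 are arbitrary illustrative values):

import numpy as np

weights = np.array([0.8, 0.3])    # arbitrary illustrative weights
x = np.array([1, 1])
theta = 0.5                       # threshold form: fire if w.x >= theta
bias = -theta                     # equivalent bias form: fire if w.x + bias >= 0

fires_threshold_form = np.dot(x, weights) >= theta
fires_bias_form = np.dot(x, weights) + bias >= 0
print(fires_threshold_form == fires_bias_form)   # True: both forms agree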

Conclusion

The Perceptron Algorithm provides an effective framework for implementing logic gates, such as the NOR gate, using artificial neural networks. By training a perceptron with appropriate weights and a threshold, we can create a model that accurately replicates the behavior of the NOR logic gate. Understanding the Perceptron Algorithm and its application to logic gates opens the door to more complex neural network architectures capable of performing sophisticated computations. By combining multiple perceptrons and layers, we can build deep neural networks capable of solving complex problems. In conclusion, the Perceptron Algorithm enables us to harness the power of NOR logic gates with 2-bit binary inputs. Its potential lies not only in logic gate implementation but also in developing more sophisticated neural network models for a wide range of applications in machine learning and AI.
