What is a Perceptron? What are its limitations? How can these limitations be overcome in Machine Learning?


The simplest example of a neural network is the ‘perceptron’, invented by Frank Rosenblatt in 1957. The perceptron is a classification algorithm similar to logistic regression. This is because, like logistic regression, a perceptron has a weight vector, w, and its output is computed by applying a function, ‘f’, to the dot product of the weights and the input.

The main difference is that in a perceptron, ‘f’ is a simple step function, whereas in logistic regression, ‘f’ is the logistic (sigmoid) function, and a decision rule is applied to its output. A perceptron can also be understood as the simplest example of a one-layer feedforward neural network.
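To make the contrast concrete, here is a minimal sketch in Python (using NumPy); the weight, bias, and input values are purely illustrative:

```python
import numpy as np

def perceptron_output(w, b, x):
    """Perceptron: a step function applied to the dot product w.x + b."""
    return 1 if np.dot(w, x) + b >= 0 else 0

def logistic_output(w, b, x):
    """Logistic regression: a sigmoid applied to the same dot product,
    giving a probability instead of a hard 0/1 decision."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

w = np.array([0.5, -0.5])            # illustrative weights
x = np.array([1.0, 0.0])
print(perceptron_output(w, 0.1, x))  # hard class label: 1
print(logistic_output(w, 0.1, x))    # probability, about 0.65
```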

The perceptron was initially considered a promising form of network, but it was later found to have a serious limitation: a perceptron can only separate classes that are linearly separable.

Minsky and Papert famously proved in 1969 that a single perceptron cannot even learn a simple logical function like ‘XOR’. However, other types of neural networks can overcome this problem.
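A short sketch of the classic perceptron learning rule illustrates the limitation: trained on AND (linearly separable) the weights converge, while on XOR no separating line exists, so training never settles on correct predictions. The training loop and hyperparameters below are illustrative assumptions, not from the original source:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Classic perceptron learning rule: nudge the weights by the
    error on each misclassified example."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if np.dot(w, xi) + b >= 0 else 0
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
and_y = np.array([0, 0, 0, 1])   # linearly separable: learnable
xor_y = np.array([0, 1, 1, 0])   # not linearly separable: never learned

for name, y in [("AND", and_y), ("XOR", xor_y)]:
    w, b = train_perceptron(X, y)
    preds = [1 if np.dot(w, xi) + b >= 0 else 0 for xi in X]
    print(name, "predictions:", preds, "targets:", list(y))
```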

A multilayer perceptron, which has multiple interconnected perceptrons organized in sequential layers, can overcome this limitation. It consists of an input layer, one or more hidden layers, and an output layer.

Every unit in a layer is connected to all units of the next layer. The input data is fed into the input layer, and each layer applies an activation function to produce its output.

The output of one layer is passed as input to the next layer, and so on until the last layer; hence the name ‘feedforward’ network.
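As a sketch of such a forward pass, the following assumes a tiny 2-3-1 network with a tanh hidden layer and a sigmoid output; the layer sizes, activations, and random weights are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes: 2 inputs -> 3 hidden units -> 1 output.
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Feedforward pass: each layer's activation becomes the next
    layer's input, until the output layer is reached."""
    h = np.tanh(W1 @ x + b1)      # hidden layer with tanh activation
    return sigmoid(W2 @ h + b2)   # output layer with sigmoid activation

print(forward(np.array([1.0, 0.0])))  # a value between 0 and 1
```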

Such networks can be trained using the gradient descent algorithm together with backpropagation, which computes the gradients of the loss with respect to the weights, layer by layer.
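Here is a minimal sketch of this, training a one-hidden-layer network on XOR with hand-derived backpropagation gradients; the architecture, learning rate, and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# XOR data: the problem a single perceptron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 tanh units is enough to represent XOR.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)       # hidden activations
    out = sigmoid(h @ W2 + b2)     # predictions

    # Backward pass: gradients of mean cross-entropy loss.
    d_out = (out - y) / len(X)                 # dL/d(pre-sigmoid)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)        # back through tanh
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)

    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(out.round(3).ravel())  # typically close to [0, 1, 1, 0]
```

The hidden layer gives the network the non-linear decision boundary that a single perceptron lacks, which is exactly how the XOR limitation is overcome.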
