Types of Learning Rules in ANN


Artificial neural networks (ANNs) are computing systems inspired by biological neural networks, the most prominent example being the human brain. These networks become functional through training that follows some kind of learning rule.

A learning rule in an ANN is a set of instructions or a mathematical formula that reinforces a pattern, thereby improving the performance of the network. Six such learning rules are widely used for training neural networks.

Hebbian Learning Rule

Developed by Donald Hebb in 1949, Hebbian learning is an unsupervised learning rule that adjusts the weight between two neurons in proportion to the product of their activations.

According to this rule, the weight between two neurons is increased if they activate together and decreased if they work in opposite directions. If there is no correlation between the signals, the weight remains unchanged. As for the sign of the weight change, it is positive when both nodes have the same sign (both positive or both negative), and negative when the two nodes have opposite signs.

Formula

Δw_i = α * x_i * y

where,

  • Δw_i is the change in weight

  • α is the learning rate

  • x_i is the ith input and,

  • y is the output
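
To make the update concrete, here is a minimal NumPy sketch of a single Hebbian step; the example values, variable names, and the linear activation are assumptions made for illustration, not part of the original rule.

import numpy as np

alpha = 0.1                      # learning rate (assumed value)
x = np.array([1.0, -1.0, 0.5])   # example input vector
w = np.array([0.2, -0.1, 0.4])   # example initial weights

y = np.dot(w, x)                 # neuron output (linear activation assumed)
w += alpha * x * y               # Hebbian update: Δw_i = α * x_i * y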

Perceptron Learning Rule

Developed by Frank Rosenblatt, the perceptron learning rule is an error-correction rule used in a single-layer feed-forward network. Unlike Hebbian learning, it is a supervised learning rule.

This rule works by finding the difference between the actual and desired outputs and adjusting the weights accordingly. Naturally, the rule requires a set of input vectors together with their desired outputs so that the error can be computed.

Formula

w = w + η(y − ŷ)x

  • w is the weight

  • η is the learning rate

  • x is the input vector

  • y is the actual label of the input vector

  • ŷ is the predicted label of the input vector
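
As a rough illustration, the sketch below applies the perceptron update to a toy, linearly separable problem (logical OR). The dataset, learning rate, and the choice of a 0/1 step activation with a bias input are assumptions made for this example.

import numpy as np

def perceptron_update(w, x, y, eta=0.1):
    # One step of the rule: w = w + eta * (y - y_hat) * x
    y_hat = 1 if np.dot(w, x) >= 0 else 0        # step activation, 0/1 labels assumed
    return w + eta * (y - y_hat) * x

# Toy dataset (assumed): logical OR, with a constant 1 appended as a bias input
data = [([0, 0, 1], 0), ([0, 1, 1], 1), ([1, 0, 1], 1), ([1, 1, 1], 1)]
w = np.zeros(3)
for _ in range(10):                              # a few passes over the data
    for x, y in data:
        w = perceptron_update(w, np.array(x, dtype=float), y)
print(w)                                         # weights that separate the two classes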

Correlation Learning Rule

Based on a principle similar to the Hebbian rule, the correlation rule also increases or decreases the weights based on the phases of the two neurons.

If the neurons are in opposite phases, the weight change should be negative, and if they are in the same phase, it should be positive. The only thing that distinguishes this rule from Hebbian learning is that it is supervised in nature: the update uses a teacher-supplied target value instead of the actual output.

Formula

Δw_ij = α * x_i * t_j

  • Δw_ij is the change in weight

  • α is the learning rate

  • x_i is the ith input and,

  • t_j is the target value
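
Since the correlation rule only swaps the actual output for a teacher-supplied target, a sketch of one update is very short; the numeric values below are assumptions chosen for illustration.

import numpy as np

alpha = 0.1                        # learning rate (assumed value)
x = np.array([1.0, -0.5, 0.2])     # example input vector
t = 1.0                            # target value supplied by the teacher
w = np.zeros(3)                    # initial weights

w += alpha * x * t                 # correlation update: Δw_i = α * x_i * t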

Competitive Learning Rule

The competitive learning rule is unsupervised in nature and, as the name suggests, is based on the principle of competition amongst the nodes. That is why it is also known as the ‘winner-takes-all’ rule.

In this rule, all the output nodes compete to represent the input pattern, and the best one, that is, the node with the highest activation (whose weights best match the input), is the winner. The winning node is assigned the value 1, the losing nodes remain at 0, and only the winner's weights are updated. Naturally, only one neuron is active at a time.

Formula

∆w_ij = η * (x_i - w_ij)

where,

  • ∆w_ij is the changes in weight between the ith input neuron and jth output neuron

  • η is the learning rate

  • x_i is the input vector

  • w_ij is the weight between the ith input neuron and jth output neuron
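
The following sketch shows one winner-takes-all step: the output neuron whose weights best match the input wins, and only its weights move toward the input. The layer sizes, learning rate, and random initialisation are assumptions made for the example.

import numpy as np

eta = 0.5                                   # learning rate (assumed value)
rng = np.random.default_rng(0)
W = rng.random((3, 4))                      # 3 output neurons x 4 inputs (assumed sizes)
x = np.array([0.9, 0.1, 0.3, 0.7])          # example input pattern

winner = np.argmax(W @ x)                   # neuron with the highest activation wins
W[winner] += eta * (x - W[winner])          # Δw_ij = η * (x_i - w_ij), winner only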

Delta Learning Rule

Developed by Bernard Widrow and Marcian Hoff, the Delta learning rule is a supervised learning rule that requires a continuous, differentiable activation function.

The main aim of this rule is to minimize the error over the training patterns, which is why it is also known as the least mean square (LMS) method. The principle used here is gradient descent, and the change in a weight is proportional to the product of the error and the input.

Formula

Δw_ij = η * (d_j - y_j) * f'(h_j) * x_i

where,

  • ∆w_ij is the changes in weight between the ith input neuron and jth output neuron

  • η is the learning rate

  • d_j is the target output

  • y_j is the actual output

  • f'(h_j) is the derivative of the activation function

  • x_i is the ith input
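
Putting the pieces together, here is a minimal sketch of one delta-rule step for a single sigmoid neuron; the input, target, learning rate, and the choice of sigmoid are assumptions made for the example.

import numpy as np

def sigmoid(h):
    return 1.0 / (1.0 + np.exp(-h))

eta = 0.5                                # learning rate (assumed value)
x = np.array([0.2, 0.8, -0.4])           # example input
w = np.zeros(3)                          # initial weights
d = 1.0                                  # desired (target) output

h = np.dot(w, x)                         # net input h_j
y = sigmoid(h)                           # actual output y_j
f_prime = y * (1.0 - y)                  # derivative of the sigmoid at h_j
w += eta * (d - y) * f_prime * x         # Δw_i = η * (d - y) * f'(h) * x_i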

Out Star Learning Rule

Developed by Grossberg, Out Star learning is a supervised learning rule that works with nodes arranged in a layer of the network.

According to this rule, the weights connected to a given node should be made equal to the desired (target) outputs along those particular links. There is a single output layer of neurons, and each output neuron represents a cluster of input vectors. This algorithm is mostly used for pattern recognition and clustering-related tasks.

Formula

Δw_ij = η * (x_i - w_j) * f(w_j)

where,

  • Δw_ij is the change in weight

  • η is the learning rate

  • x_i is the input pattern

  • w_j is the weight vector

  • f(w_j) is the activation function
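
For completeness, the sketch below applies the formula above literally to a single node, so its weight vector drifts toward the target pattern; the sigmoid used as f and all numeric values are assumptions made for illustration.

import numpy as np

def f(w):
    return 1.0 / (1.0 + np.exp(-w))      # activation applied to the weights (sigmoid assumed)

eta = 0.2                                # learning rate (assumed value)
x = np.array([0.6, 0.1, 0.9])            # target pattern for the node's outgoing links
w = np.zeros(3)                          # weight vector of one node

w += eta * (x - w) * f(w)                # Δw = η * (x - w) * f(w), per the rule above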

Conclusion

Each learning rule in artificial neural networks has its own advantages and disadvantages, and the rule you should use depends on the type of task and data at hand. Common applications of ANN learning rules include image recognition, pattern matching, predictive modeling, and natural language processing. Other learning approaches worth exploring include backpropagation, Q-learning, Hopfield networks, and evolutionary algorithms.
