How Does a Neural Network Learn Using Back Propagation?


A neural network is a series of algorithms that endeavors to identify underlying relationships in a set of data through a process that mimics the way the human brain works. In this sense, neural networks refer to systems of neurons, either organic or artificial.

Neural networks are analytic techniques modeled after the (hypothesized) process of learning in the cognitive system and the neurological functions of the brain. They are capable of predicting new observations (on specific variables) from other observations after executing a process of so-called learning from existing data.

Back propagation proceeds through the following steps −

  • The network receives a training instance and, using the current weights in the network, computes the output or outputs.

  • Back propagation calculates the error by taking the difference between the computed output and the expected (actual) output.

  • The error is fed back through the network and the weights are adjusted to minimize it; hence the name back propagation, because the errors are transmitted back through the network. (A minimal sketch of these three steps follows this list.)
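As a minimal sketch of the three steps above, the Python snippet below runs one training instance through a single sigmoid output unit and nudges its weights. The network size, input values, and learning rate are illustrative assumptions, not part of the original description.

```python
import numpy as np

# A minimal sketch of one back propagation step on a tiny network
# with a single sigmoid output unit (all sizes and values assumed).
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 1))        # current weights: 2 inputs -> 1 output
x = np.array([[0.5, -1.0]])        # one training instance (1 x 2)
target = np.array([[1.0]])         # expected (actual) result

# Step 1: forward pass -- compute the output using the current weights.
output = 1.0 / (1.0 + np.exp(-(x @ W)))

# Step 2: the error is the difference between the computed output
# and the expected (actual) result.
error = target - output

# Step 3: feed the error back and adjust the weights to reduce it.
learning_rate = 0.1
step = x.T @ (error * output * (1.0 - output))   # delta-rule direction
W += learning_rate * step
```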

The back propagation algorithm calculates the overall error of the network by comparing the values produced on each training instance to the actual value. It then alters the weights of the output layer to reduce, but not eliminate, the error. However, the algorithm has not finished yet.
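For concreteness, the overall error is often taken as the mean squared difference between the computed and actual values across the training instances; the choice of mean squared error here is an assumption for illustration.

```python
import numpy as np

# Overall error across all training instances, measured as mean
# squared error (an illustrative choice of error measure).
def total_error(predictions, targets):
    return np.mean((targets - predictions) ** 2)
```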

It then assigns blame to nodes in the earlier layers of the network and adjusts the weights connecting those nodes, further reducing the overall error. The specific mechanism for assigning blame is not essential here. Suffice it to say that back propagation uses a numerical procedure that requires taking partial derivatives of the activation function.
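For example, with the widely used sigmoid activation (an assumed choice; the text does not fix the activation function), the partial derivative can be written in terms of the function's own output, which is what the numerical procedure exploits.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# The sigmoid's derivative is expressible from its own output:
# sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z)),
# so values from the forward pass can be reused when
# propagating blame backward through the network.
def sigmoid_derivative(z):
    s = sigmoid(z)
    return s * (1.0 - s)
```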

This method for altering the weights is known as the generalized delta rule. There are two important parameters associated with the generalized delta rule. The first is momentum, which refers to the tendency of the weights within each unit to keep moving in the “direction” they are heading in.

That is, each weight remembers whether it has been getting larger or smaller, and momentum attempts to keep it going in the same direction. A network with high momentum responds slowly to new training instances that want to reverse the weights. If momentum is low, then the weights are allowed to oscillate more freely.
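A sketch of a momentum-style update in the spirit of the generalized delta rule (the function, parameter names, and default values are illustrative assumptions): each weight change blends the current error-reducing step with a fraction of the previous change, so the weights tend to keep moving the way they were already heading.

```python
def update_with_momentum(W, step, prev_delta, learning_rate=0.1, momentum=0.9):
    # 'step' is the current error-reducing direction (as in the earlier
    # sketch); 'prev_delta' is the weight change from the previous update.
    # High momentum keeps the weights heading the same way; low momentum
    # lets them oscillate more freely.
    delta = learning_rate * step + momentum * prev_delta
    return W + delta, delta
```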

The learning rate controls how quickly the weights change. The best approach is to start with a large learning rate and reduce it slowly as the network is trained. Initially, the weights are random, so large oscillations are useful for getting into the vicinity of the best weights. However, as the network approaches the optimal solution, the learning rate should decrease so that the network can fine-tune toward the optimal weights.
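One simple way to realize this start-large, shrink-slowly strategy is an exponential decay schedule; the decay form and constants below are assumptions for illustration.

```python
# Start with a large learning rate so the random initial weights can
# oscillate toward the right region, then shrink it each epoch so the
# network can fine-tune near the optimal weights.
def learning_rate_schedule(epoch, initial_rate=0.5, decay=0.95):
    return initial_rate * (decay ** epoch)
```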

Ginni

Updated on: 15-Feb-2022
