# How does backpropagation work?

Backpropagation refers to the whole procedure of computing the gradient of the network's error with respect to the network's modifiable weights, so that the gradient can be used in an optimizer such as stochastic gradient descent. Technically, backpropagation calculates the gradient of the error of the network with respect to those weights by applying the chain rule layer by layer.
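As a minimal sketch of "gradient of the error with respect to a weight", consider a single sigmoid neuron with squared error; the chain rule gives the gradient in closed form, and a finite-difference check confirms it (the weight, bias, and input values below are hypothetical):

```python
import math

# Single sigmoid neuron: y_hat = sigmoid(w*x + b), error E = 0.5 * (y_hat - y)^2
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad_w(w, b, x, y):
    # Chain rule: dE/dw = (y_hat - y) * y_hat * (1 - y_hat) * x
    y_hat = sigmoid(w * x + b)
    return (y_hat - y) * y_hat * (1.0 - y_hat) * x

def numeric_grad_w(w, b, x, y, eps=1e-6):
    # Finite-difference estimate of dE/dw, used to verify the chain-rule result
    def err(wv):
        y_hat = sigmoid(wv * x + b)
        return 0.5 * (y_hat - y) ** 2
    return (err(w + eps) - err(w - eps)) / (2 * eps)

analytic = grad_w(0.5, 0.1, 2.0, 1.0)
numeric = numeric_grad_w(0.5, 0.1, 2.0, 1.0)
print(abs(analytic - numeric) < 1e-6)  # the two gradients agree
```

The same chain-rule decomposition extends to multi-layer networks, where the gradient for an inner weight reuses the gradients already computed for the layers above it.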

Backpropagation is an iterative, recursive, and efficient method for computing the weight updates that improve the network until it can perform the task for which it is being trained. Backpropagation requires the derivatives of the activation functions to be known at network design time.

Backpropagation is generally used in neural network training, where it computes the gradient of the loss function with respect to the weights of the network. It works with multi-layer neural networks and learns internal representations of the input-output mapping.

It is a standard method of training artificial neural networks, calculating the gradient of the loss function with respect to all the weights in the network. The backpropagation algorithm trains a neural network efficiently through repeated application of the chain rule. After each forward pass, backpropagation performs a backward pass through the network, adjusting the parameters of the model.

This gradient is used in a simple stochastic gradient descent algorithm to find weights that minimize the error. The error propagates backward from the output nodes to the inner nodes.
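The gradient descent update mentioned here is simply `w <- w - lr * dE/dw`. A minimal sketch on a one-dimensional toy error surface (the learning rate and starting weight are hypothetical values chosen for illustration):

```python
# Toy problem: minimize E(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w = 0.0    # initial weight
lr = 0.1   # learning rate (hypothetical value)

for _ in range(100):
    grad = 2.0 * (w - 3.0)  # gradient of the error at the current weight
    w -= lr * grad          # step against the gradient

print(round(w, 4))  # w converges toward the minimizer, 3.0
```

In a real network, `grad` for each weight is exactly what the backward pass supplies, and stochastic gradient descent applies this update using the gradient from one training example (or a small batch) at a time.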

Backpropagation learns by iteratively processing a collection of training tuples, comparing the network's prediction for each tuple with the known target value. The target value may be the known class label of the training tuple (for classification problems) or a continuous value (for prediction).

For each training tuple, the weights are modified so as to minimize the mean squared error between the network's prediction and the actual target value. These modifications are made in the "backwards" direction, that is, from the output layer, through each hidden layer, down to the first hidden layer (hence the name backpropagation). Although convergence is not guaranteed, in general the weights will eventually converge and the learning process stops.
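The backward sweep described above, from the output layer down through the hidden layer, can be sketched with a small two-layer network trained to reduce mean squared error. This is an illustrative implementation, not a reference one; the architecture, learning rate, and XOR-style dataset are all assumptions chosen to keep the example small:

```python
import numpy as np

np.random.seed(0)

# Tiny XOR-style dataset (hypothetical, for illustration)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units, one output unit
W1 = np.random.randn(2, 4); b1 = np.zeros(4)
W2 = np.random.randn(4, 1); b2 = np.zeros(1)
lr = 0.5  # learning rate (hypothetical value)

def mse():
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    return float(np.mean((out - y) ** 2))

before = mse()
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: output layer first, then the hidden layer
    d_out = (out - y) * out * (1 - out)   # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error propagated back to the hidden layer

    # Weight updates, made in the "backwards" direction
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
after = mse()

print(after < before)  # the mean squared error has decreased
```

Note how `d_h` reuses `d_out`: each layer's error signal is built from the one above it, which is what makes the backward pass efficient.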

## Types of Backpropagation

There are two types of backpropagation, as follows −

Static backpropagation − In this type of backpropagation, a static output is generated from the mapping of a static input. It can solve static classification problems such as optical character recognition.

Recurrent backpropagation − In recurrent backpropagation, activations are fed forward until a certain fixed or threshold value is reached. After that point, the error is computed and propagated backward.
