What are the different kinds of gradient descent algorithms in Machine Learning?
The idea behind using gradient descent is to minimize the loss in various machine learning algorithms. Mathematically speaking, it finds a local minimum of a function.
To implement this, a set of parameters is defined with respect to which the loss needs to be minimized. The parameters are assigned initial coefficients, the error or loss is calculated, and then the weights are updated so that the error decreases. Instead of plain parameters, weak learners such as decision trees can also be used.
In that case, once the loss is calculated, gradient descent is performed and a tree is added to the model stepwise so that the loss keeps decreasing (this is the idea behind gradient boosting).
Some examples include finding the coefficient parameters in linear regression or ensuring that optimal weights are used in a machine learning model.
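As a concrete illustration, the following minimal Python sketch applies the gradient descent update rule to a single-variable function f(w) = (w − 3)²; the learning rate and iteration count are illustrative choices, not prescribed values.

```python
# A minimal sketch: gradient descent on f(w) = (w - 3)**2, whose
# minimum is at w = 3. Learning rate and iteration count are
# illustrative assumptions.
def f_grad(w):
    return 2 * (w - 3)      # derivative of (w - 3)**2

w = 0.0                     # initial parameter value
lr = 0.1                    # learning rate
for _ in range(100):
    w -= lr * f_grad(w)     # step against the gradient to reduce f

print(round(w, 4))          # converges towards 3.0
```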
There are different types of gradient descent algorithms, some of which are discussed below.
Batch Gradient Descent
This is a type of gradient descent algorithm that processes the entire training dataset in every iteration of its run.
If the number of training examples is huge, batch gradient descent becomes computationally expensive, so it is not preferred for large datasets. In such cases, stochastic gradient descent or mini-batch gradient descent is preferred instead.
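A minimal sketch of batch gradient descent for simple linear regression is shown below; the synthetic data, learning rate, and iteration count are assumptions made for illustration.

```python
import numpy as np

# An illustrative sketch of batch gradient descent for simple linear
# regression: every iteration computes the gradient of the mean squared
# error over ALL m training examples.
rng = np.random.default_rng(0)
m = 200
x = rng.uniform(0, 1, size=m)
y = 3 * x + 2 + rng.normal(0, 0.1, size=m)   # synthetic data, true w=3, b=2

w, b, lr = 0.0, 0.0, 0.5
for _ in range(500):
    error = (w * x + b) - y                  # residuals over the whole dataset
    w -= lr * 2 * np.mean(error * x)         # d(MSE)/dw
    b -= lr * 2 * np.mean(error)             # d(MSE)/db

print(round(w, 2), round(b, 2))              # approaches 3 and 2
```

Every update above touches all m examples, which is exactly what makes batch gradient descent expensive when m is very large.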
Stochastic Gradient Descent
This algorithm processes a single training example in every iteration, so the parameters are updated after each example.
Each iteration is quicker than in batch gradient descent because only one example is processed. However, if the number of training samples in the dataset is large, the overall overhead is still high, because the number of iterations needed, and hence the total time taken, is also high.
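The sketch below illustrates this on the same kind of synthetic linear-regression data, updating the parameters after every single example; the learning rate and number of passes are illustrative assumptions.

```python
import numpy as np

# An illustrative sketch of stochastic gradient descent: the parameters
# are updated after every single training example, so one pass over m
# examples performs m updates.
rng = np.random.default_rng(0)
m = 200
x = rng.uniform(0, 1, size=m)
y = 3 * x + 2 + rng.normal(0, 0.1, size=m)   # synthetic data, true w=3, b=2

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(20):                      # 20 passes over the data
    for i in rng.permutation(m):             # visit examples in random order
        error = (w * x[i] + b) - y[i]        # gradient from ONE example
        w -= lr * 2 * error * x[i]
        b -= lr * 2 * error

print(round(w, 2), round(b, 2))              # close to 3 and 2
```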
Mini-Batch Gradient Descent
This algorithm often works better than both batch gradient descent and stochastic gradient descent. Here, ‘b’ examples are processed in every iteration, where b < m and ‘m’ is the total number of training examples in the dataset. If the number of training examples is high, the data is processed in batches, with every batch containing ‘b’ training examples per iteration.
Mini-batch gradient descent handles large training sets well and reaches a good solution in a reduced number of iterations.
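A sketch of the mini-batch variant is given below, with an assumed batch size of b = 32 and m = 200 synthetic examples; these values are for illustration only.

```python
import numpy as np

# An illustrative sketch of mini-batch gradient descent: each update uses
# a batch of b examples, where b < m (here b = 32 out of m = 200).
rng = np.random.default_rng(0)
m, batch = 200, 32
x = rng.uniform(0, 1, size=m)
y = 3 * x + 2 + rng.normal(0, 0.1, size=m)    # synthetic data, true w=3, b=2

w, b, lr = 0.0, 0.0, 0.3
for epoch in range(50):
    order = rng.permutation(m)                # reshuffle every pass
    for start in range(0, m, batch):
        idx = order[start:start + batch]      # one mini-batch of up to 32 rows
        error = (w * x[idx] + b) - y[idx]
        w -= lr * 2 * np.mean(error * x[idx]) # gradient over the batch only
        b -= lr * 2 * np.mean(error)

print(round(w, 2), round(b, 2))               # approaches 3 and 2
```

Choosing ‘b’ between 1 and m trades off the stable gradients of batch gradient descent against the cheap per-update cost of stochastic gradient descent.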