# Difference Between Greedy Method and Dynamic Programming


In this post, we will understand the differences between the greedy algorithm and dynamic programming methods.

## Greedy algorithm

It is an algorithmic paradigm that builds up a solution piece by piece, step by step. At each step, the next piece is chosen so that it gives the most obvious and immediate benefit.

• Problems in which choosing locally optimal values leads to the globally optimal solution are the ones well suited to the greedy approach.
• There is no surety that a greedy algorithm would lead to an optimal solution.
• An optimal choice is made at every stage of the problem, i.e., the locally optimal solution.
• It is efficient in terms of memory usage since there is no question of having to go back or change previous solutions/values.
• In general, they are quick in comparison to dynamic programming techniques.
• Example: Dijkstra's shortest path algorithm, which takes O((E + V) log V) time with a binary heap.
• The solution in a greedy algorithm is computed in a forward manner, never revisiting or changing previous values/solutions.
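The points above can be sketched with a classic illustration (not from the original post): making change by always taking the largest coin that still fits. The function name and coin sets are illustrative assumptions.

```python
# Greedy sketch: pick the largest usable coin at each step; never backtrack.
def greedy_change(amount, coins):
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result

# With canonical denominations the greedy choice happens to be optimal:
# greedy_change(63, [25, 10, 5, 1]) -> [25, 25, 10, 1, 1, 1]
# But there is no surety in general: for coins [1, 3, 4] and amount 6,
# greedy returns [4, 1, 1] (3 coins) while the optimum is [3, 3] (2 coins).
```

Note how the loop only moves forward: once a coin is taken, it is never given back, which is exactly why the method is memory-efficient but not always optimal.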

## Dynamic Programming

It is an optimization technique that stores the results of sub-problems so that they don't need to be re-computed when needed in the future; they can simply be looked up in the pre-computed set. This often reduces the time complexity from exponential to polynomial.

• For example, a plain recursive solution can be turned into a dynamic programming solution by memoizing the results of its sub-problems.
• In this, the decision at every step is made by considering the current problem at hand together with the solutions to previously solved sub-problems, which are combined to compute the optimal value/solution.
• Dynamic programming is guaranteed to produce an optimal solution, provided the problem has optimal substructure.
• Here, the solution chosen is a globally optimal one. A recurrence relation combines previously calculated state values that have been stored along the way.
• The dynamic programming table is required for memoization, which increases the memory complexity.
• It is comparatively slower.
• Example: Bellman Ford algorithm that takes O(VE) time.
• Dynamic programming determines the solution using a bottom-up or top-down approach, building from smaller sub-problems whose solutions are optimal.
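The bottom-up approach above can be sketched with the minimum-coin-change problem (an illustrative example, not from the original post): a table `dp[a]` holds the fewest coins needed for each amount `a`, filled from smaller amounts upward so that each sub-problem is solved exactly once.

```python
# Bottom-up DP sketch: dp[a] = fewest coins summing to amount a.
def min_coins(amount, coins):
    INF = float("inf")
    dp = [0] + [INF] * amount              # dp[0] = 0: amount 0 needs no coins
    for a in range(1, amount + 1):
        for coin in coins:
            # Extend the optimal solution of the smaller sub-problem a - coin.
            if coin <= a and dp[a - coin] + 1 < dp[a]:
                dp[a] = dp[a - coin] + 1
    return dp[amount] if dp[amount] != INF else -1

# min_coins(6, [1, 3, 4]) -> 2 (two 3-coins); a greedy largest-coin-first
# strategy would use three coins (4 + 1 + 1) here, showing why the DP
# table, though costlier in memory and time, guarantees the global optimum.
```

The table is the extra memory the bullet points mention, and the double loop is the extra time; in exchange, every entry is provably optimal.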