In computational complexity theory, the potential method is a technique for analyzing the amortized time and space complexity of a data structure, that is, its performance over a sequence of operations, in a way that smooths out the cost of infrequent but expensive operations.
In the potential method, a function Φ is chosen that maps states of the data structure to non-negative numbers. If S is a state of the data structure, Φ(S) represents work that has been accounted for in the amortized analysis but not yet performed. Thus Φ(S) may be thought of as the potential energy stored in that state. Before the data structure is initialized, the potential is defined to be zero. Alternatively, Φ(S) may be thought of as the amount of disorder in state S, or its distance from an ideal state.
A potential function Φ (read "Phi") on the states of a data structure satisfies the following properties −

- Φ(S) = 0 when S is the initial state of the data structure.
- Φ(S) ≥ 0 for every state S that arises during the computation.
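As a concrete illustration (the binary counter is a standard textbook example, not taken from the text above), consider incrementing a binary counter, where a single increment may cascade through many carries. A potential function that satisfies both properties is Φ(S) = the number of 1 bits in the counter: it is zero for the all-zero initial state and never negative.

```python
def increment(bits):
    """Increment a binary counter stored as a list of 0/1 bits,
    least significant bit first. Returns the actual cost: the
    number of bit flips performed."""
    cost = 0
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0          # flip a trailing 1 back to 0 (a carry)
        cost += 1
        i += 1
    if i == len(bits):
        bits.append(1)       # counter grows by one bit
    else:
        bits[i] = 1
    cost += 1                # flipping the final 0 to 1
    return cost

def phi(bits):
    # Potential: the number of 1 bits. Each 1 bit is prepaid work
    # that covers the cost of clearing it during a later carry.
    return sum(bits)

counter = [0]                # initial state: phi(counter) == 0
for _ in range(10):
    increment(counter)
```

An increment that flips k trailing 1s has actual cost k + 1, but it also destroys k units of potential and creates one, so the expensive cascades are paid for in advance by the cheap increments that set those bits.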
Intuitively, the potential function keeps track of the precharged time at any point in the computation: it measures the saved-up time available to pay for expensive operations, much like the bank balance in the banker's method. Interestingly, however, it depends only on the current state of the data structure, regardless of the history of the computation that led to that state.
We then define the amortized time of an operation as
c + Φ(a') − Φ(a),
where c is the actual cost of the operation and a and a' are the states of the data structure before and after the operation, respectively. The amortized time is thus the actual time plus the change in potential. Ideally, Φ should be defined so that the amortized time of each operation is small; the change in potential should then be positive for low-cost operations and negative for high-cost operations.
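The definition can be checked mechanically. The sketch below (an illustrative example, not from the text: a doubling dynamic array with the common choice Φ(S) = 2·size − capacity) computes c + Φ(a') − Φ(a) for every append and verifies that the amortized cost stays bounded by a small constant even when an append triggers an expensive copy.

```python
class DynArray:
    """Append-only dynamic array that doubles its backing store
    when full; append() returns its actual cost in element writes."""

    def __init__(self):
        self.capacity = 0
        self.buf = []
        self.size = 0

    def append(self, x):
        cost = 1                              # writing the new element
        if self.size == self.capacity:        # full: reallocate
            self.capacity = max(1, 2 * self.capacity)
            cost += self.size                 # copying the old elements
            self.buf = self.buf + [None] * (self.capacity - len(self.buf))
        self.buf[self.size] = x
        self.size += 1
        return cost

def phi(a):
    # Potential: 2*size - capacity. Zero for the empty array and
    # never negative, since the array is always at least half full.
    return 2 * a.size - a.capacity

a = DynArray()
for i in range(100):
    before = phi(a)
    c = a.append(i)
    assert phi(a) >= 0
    # amortized time = c + phi(a') - phi(a): at most 3 per append,
    # even for the appends whose actual cost c is proportional to size
    assert c + phi(a) - before <= 3
```

A cheap append (no resize) has c = 1 and raises the potential by 2, giving amortized cost 3; a resizing append has c = size + 1 but releases about size units of potential, so its amortized cost is also at most 3, exactly as the formula intends.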