Find the probability of a state at a given time in a Markov chain - Set 1 in Python

A Markov chain is a random process where the probability of moving to the next state depends only on the current state. We can represent it as a directed graph where nodes are states and edges have transition probabilities. To find the probability of reaching state F at time T starting from state S, we use dynamic programming.
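To make the idea concrete, here is a minimal sketch with a hypothetical two-state chain (states and probabilities invented for illustration): one transition step just redistributes the current probability mass along the outgoing edges.

```python
# Hypothetical two-state chain: state 0 and state 1.
# P[i][j] = probability of moving from state i to state j in one step.
P = [[0.9, 0.1],
     [0.5, 0.5]]

# Start with 100% probability of being in state 0 at time 0.
dist = [1.0, 0.0]

# One transition: the new probability of state j is the sum over all
# states i of dist[i] * P[i][j].
dist = [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]
print(dist)  # [0.9, 0.1]
```

Repeating this update T times gives the distribution at time T, which is exactly what the dynamic programming table below computes state by state.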

Problem Statement

Given a Markov chain with N states, we need to find the probability of reaching state F at time T if we start from state S at time 0. Each transition takes one unit of time, and the sum of outgoing probabilities from any state equals 1.

Algorithm

We'll use a 2D table where table[state][time] represents the probability of being in a particular state at a given time:

  • Create a matrix table of size (N+1) × (T+1) filled with 0.0

  • Set table[S][0] = 1.0 (100% probability of starting at state S)

  • For each time step from 1 to T:

    • For each state j from 1 to N:

      • For each incoming edge k to state j:

        • Add edge_probability × table[source_state][time − 1] to table[j][time]

  • Return table[F][T]

Example

Let's implement this algorithm to find the probability:

def get_probability(G, N, F, S, T):
    # Create 2D table for dynamic programming
    table = [[0.0 for j in range(T+1)] for i in range(N+1)]
    
    # Initial probability: 100% chance of being at starting state S at time 0
    table[S][0] = 1.0
    
    # Fill the table for each time step
    for i in range(1, T+1):
        for j in range(1, N + 1):
            # For each incoming edge into state j
            for k in G[j]:
                # k[0] is the source state, k[1] is the transition probability
                table[j][i] += k[1] * table[k[0]][i - 1]
    
    return table[F][T]

# Define the Markov chain as incoming-edge lists:
# graph[i] holds (source_state, probability) tuples for transitions into state i
graph = []
graph.append([])  # State 0 (unused)
graph.append([(2, 0.09)])  # State 1
graph.append([(1, 0.23), (6, 0.62)])  # State 2
graph.append([(2, 0.06)])  # State 3
graph.append([(1, 0.77), (3, 0.63)])  # State 4
graph.append([(4, 0.65), (6, 0.38)])  # State 5
graph.append([(2, 0.85), (3, 0.37), (4, 0.35), (5, 1.0)])  # State 6

N = 6  # Number of states
S, F, T = 4, 2, 100  # Start state, Final state, Time

probability = get_probability(graph, N, F, S, T)
print(f"Probability of reaching state {F} at time {T}: {probability}")
Output

Probability of reaching state 2 at time 100: 0.28499144801478526

How It Works

The algorithm uses dynamic programming where each cell table[state][time] stores the probability of being in that state at that specific time. We build this table incrementally:

  1. Initialization: Set probability 1.0 for starting state at time 0

  2. Transition: For each time step, calculate probabilities by summing contributions from all possible previous states

  3. Result: The final answer is the probability at the target state and target time
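The table update above is equivalent to holding a probability distribution over all states and applying one transition per time step. The sketch below cross-checks the result from that angle; note the matrix P here lists *outgoing* edges, derived by hand from the incoming-edge lists in the article's graph (so the layout, not the numbers, is my assumption).

```python
# Cross-check: evolve a distribution vector with an outgoing-edge matrix.
# P[i][j] = probability of moving from state i to state j (each row sums to 1).
# Rows below are transcribed from the article's incoming-edge graph.
N, S, F, T = 6, 4, 2, 100

P = [[0.0] * (N + 1) for _ in range(N + 1)]
P[1][2], P[1][4] = 0.23, 0.77
P[2][1], P[2][3], P[2][6] = 0.09, 0.06, 0.85
P[3][4], P[3][6] = 0.63, 0.37
P[4][5], P[4][6] = 0.65, 0.35
P[5][6] = 1.0
P[6][2], P[6][5] = 0.62, 0.38

dist = [0.0] * (N + 1)
dist[S] = 1.0  # certainty of starting at state S at time 0
for _ in range(T):
    # One time step: redistribute all probability mass along outgoing edges
    dist = [sum(dist[i] * P[i][j] for i in range(N + 1))
            for j in range(N + 1)]

print(dist[F])  # should match the DP table result above
```

Because each row of P sums to 1, the distribution stays normalized at every step, which is a useful sanity check when transcribing the graph.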

Time Complexity

The time complexity is O(T × N × E) where T is the time steps, N is the number of states, and E is the average number of incoming edges per state. Space complexity is O(T × N) for the DP table.
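Since each column of the table depends only on the previous column, the O(T × N) space can be cut to O(N) by keeping just two 1-D arrays. A sketch of that variant (function name is my own):

```python
# O(N)-space variant of the DP: each time step reads only the previous
# column, so two 1-D arrays replace the full (N+1) x (T+1) table.
def get_probability_low_memory(G, N, F, S, T):
    prev = [0.0] * (N + 1)
    prev[S] = 1.0  # start at state S at time 0
    for _ in range(T):
        curr = [0.0] * (N + 1)
        for j in range(1, N + 1):
            # G[j] lists (source_state, probability) incoming edges, as above
            for src, p in G[j]:
                curr[j] += p * prev[src]
        prev = curr
    return prev[F]

# Same graph and parameters as the article's example
graph = [[], [(2, 0.09)], [(1, 0.23), (6, 0.62)], [(2, 0.06)],
         [(1, 0.77), (3, 0.63)], [(4, 0.65), (6, 0.38)],
         [(2, 0.85), (3, 0.37), (4, 0.35), (5, 1.0)]]
prob = get_probability_low_memory(graph, 6, 2, 4, 100)
print(prob)
```

This trades the ability to inspect intermediate times for a memory footprint independent of T, which matters when T is large.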

Conclusion

This dynamic programming approach efficiently calculates Markov chain state probabilities by building a transition table. The method works for any time horizon and handles complex state transition networks with multiple incoming edges per state.

Updated on: 2026-03-25T09:55:29+05:30
