Visualizing O(n) using Python


Introduction

Understanding the efficiency of algorithms is crucial in computer science and programming, as it helps us create software that is both optimized and fast. Time complexity is a key notion in this context: it measures how the runtime of an algorithm changes as the input size grows. The commonly used time complexity class O(n) denotes a linear relationship between input size and execution time.

Definition

In computer science, algorithmic complexity is an assessment of the resources, such as time and space, that an algorithm requires as a function of its input size. In other words, it helps us understand how quickly an algorithm will run given the size of its input. The standard notation for expressing algorithmic complexity is Big O notation; O(n), for instance, denotes linear complexity.

Syntax

for i in range(n):
    # do something

This `for` loop executes the operation (or group of operations) in its body once for each value of `i` from 0 to `n-1`, with 'n' representing the number of iterations.

In O(n) time complexity, the execution time grows proportionally as we increase the input size 'n': as 'n' increases, so do the number of iterations and the time the loop takes to complete. This directly proportional relationship between input size and execution time is what characterizes linear time complexity.

The task or sequence of tasks inside the loop can be anything, as long as its cost does not itself depend on the input size 'n'. The key observation is that the loop performs 'n' iterations, which is what produces the linear time complexity.
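
As a quick way to see this proportionality in practice, the following sketch (the input sizes and repeat count here are arbitrary choices, not from the original article) times Python's built-in sum over ranges of doubling length; each timing should come out roughly double the previous one:

import timeit

# Micro-benchmark sketch: doubling n should roughly double the runtime.
for n in (100_000, 200_000, 400_000):
    elapsed = timeit.timeit(lambda: sum(range(n)), number=100)
    print(f"n={n}: {elapsed:.4f}s")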

Algorithm

  • Step 1: Initialize a sum variable to 0.

  • Step 2: Traverse each element in the provided list.

  • Step 3: Add the element to the running sum.

  • Step 4: Return the sum after the loop finishes.

  • Step 5: End (see the Python sketch below).
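
As a minimal sketch, the steps above translate directly into Python (the function name `sum_list` is illustrative, not from the original article):

def sum_list(nums):
    total = 0              # Step 1: initialize the sum
    for value in nums:     # Step 2: traverse each element
        total += value     # Step 3: add the element to the running sum
    return total           # Step 4: return the sum after the loop

print(sum_list([1, 2, 3, 4]))  # 10

The function performs one constant-time addition per element, so it runs in O(n) time for a list of length n.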

Approach

  • Approach 1: Plotting Time vs. Input Size

  • Approach 2: Plotting Operations vs. Input Size

Approach 1: Plotting Time vs. Input Size

Example

import time
import matplotlib.pyplot as plt

def algo_time(n):
    # Sum the integers 0..n-1; one addition per iteration, so O(n) overall.
    total = 0  # named 'total' to avoid shadowing the built-in sum()
    for i in range(n):
        total += i
    return total

input_sizes = []
execution_times = []

# Time the algorithm for n = 1000, 2000, ..., 10000.
for i in range(1000, 11000, 1000):
    start_time = time.time()
    algo_time(i)
    end_time = time.time()
    input_sizes.append(i)
    execution_times.append(end_time - start_time)

plt.plot(input_sizes, execution_times)
plt.xlabel('Input Size')
plt.ylabel('Execution Time (s)')
plt.show()

Output

This code measures the running time of the `algo_time()` function for a range of input sizes. Two lists, 'input_sizes' and 'execution_times', store the input sizes we wish to test along with their respective execution times.

A 'for' loop iterates through a range of input sizes: it runs from 1000 up to, but not including, 11000, in steps of 1000. In other words, we test the algorithm with 'n' values from 1000 to 10000 in increments of 1000.

Inside the loop, we measure the execution time of `algo_time()` for each input size. We record the clock with `time.time()` just before invoking the function and again immediately after it returns; the difference between the two readings is the execution time.
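
As a side note, `time.time()` reads the wall clock, whose resolution can be coarse for very fast calls; for benchmarking, `time.perf_counter()` is usually preferred. A drop-in adjustment (an optional refinement, not part of the original code) would be:

start_time = time.perf_counter()  # high-resolution timer intended for benchmarking
algo_time(i)
end_time = time.perf_counter()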

For each input size, we append the value of 'n' and its measured execution time to the 'input_sizes' and 'execution_times' lists, respectively.

Upon completion of the loop, we possess the data needed to produce a plot. 'plt.plot(input_sizes, execution_times)' generates a basic line plot from the data we gathered: the x-axis displays the 'input_sizes' values, representing the different input sizes, and the y-axis displays the corresponding 'execution_times'.

'plt.xlabel()' and 'plt.ylabel()' then label the axes to indicate what each one represents, and calling 'plt.show()' displays the graph.

By running this code, we can visualize through the plotted graph how the execution time rises with larger input sizes ('n'). For an algorithm with O(n) time complexity, we expect an approximately straight-line relationship between input size and execution time.
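
In practice, a single timing per input size is noisy (garbage collection, other processes, and timer resolution all interfere), so the plotted line may wobble. A common refinement, sketched here with the standard `timeit` module (the helper name and repeat count are arbitrary choices, not from the original article), is to repeat each measurement and keep the minimum:

import timeit

def best_time(n, repeats=5):
    # Run algo_time(n) several times and keep the fastest run,
    # which is the least contaminated by background noise.
    return min(timeit.repeat(lambda: algo_time(n), number=1, repeat=repeats))

Inside the loop, 'execution_times.append(best_time(i))' could then replace the manual start/stop timing.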

Approach 2: Plotting Operations vs. Input Size

Example

import matplotlib.pyplot as plt

def algo_ops(n):
    # Count the basic operations performed while summing 0..n-1.
    ops = 0
    total = 0  # named 'total' to avoid shadowing the built-in sum()
    for i in range(n):
        total += i
        ops += 1  # one addition per iteration
    ops += 1  # count the return statement as one more operation
    return ops

input_sizes = []
operations = []

# Count operations for n = 1000, 2000, ..., 10000.
for i in range(1000, 11000, 1000):
    input_sizes.append(i)
    operations.append(algo_ops(i))

plt.plot(input_sizes, operations)
plt.xlabel('Input Size')
plt.ylabel('Number of Operations')
plt.show()

Output

This code analyzes the number of operations performed by the `algo_ops()` function for different input sizes. `algo_ops()` computes the sum of all integers from zero up to (but not including) the input 'n', while counting each operation it performs along the way.

We start by importing the 'matplotlib.pyplot' module, which allows us to create visualizations like graphs.

Next, we define the `algo_ops()` function, which takes an input number 'n'. Inside the function, we initialize two variables: 'ops' to count the number of operations and 'total' to store the cumulative sum of numbers.

Two lists, 'input_sizes' and 'operations', store the input sizes we wish to examine and the corresponding operation counts.

A 'for' loop iterates over a set of input sizes: it runs from 1000 up to, but not including, 11000, in steps of 1000. This means we evaluate the function for 'n' ranging from 1000 to 10000 in increments of 1000.

Within the loop, we call `algo_ops()` for each input size and record its result: we append the value of 'n' to 'input_sizes' and the returned operation count to 'operations'.

After the loop completes, we have accumulated the data needed to make a plot. The statement 'plt.plot(input_sizes, operations)' creates a basic line graph from the collected data. The values of 'input_sizes' are shown on the x-axis, representing the different input sizes, while the values of 'operations' are shown on the y-axis, indicating how many operations `algo_ops()` performed for each input size.

Lastly, we label the axes with 'plt.xlabel()' and 'plt.ylabel()' to indicate the meaning of each axis, and call 'plt.show()' to display the graph.

Once we execute the program, the graph shows how the number of operations rises linearly as the input size ('n') grows.
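
Since `algo_ops()` counts exactly one addition per loop iteration plus one operation for the return statement, the count is exactly n + 1, which is why the plotted line is perfectly straight. A quick sanity check (a hypothetical verification, not part of the original article):

for n in (10, 100, 1000):
    assert algo_ops(n) == n + 1  # one op per iteration, plus the return
print("algo_ops(n) == n + 1 for every tested n")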

Conclusion

In conclusion, mastering time complexity and visualization in Python using Matplotlib is a valuable skill for any programmer seeking to create efficient and optimal software solutions. Understanding how algorithms behave with varying input sizes equips us to tackle complex problems and build robust applications that deliver results in a timely and efficient manner.
