How to use Boto3 to paginate through the job runs of a job present in AWS Glue


In this article, we will see how to paginate through all the job runs of a job present in AWS Glue.


Problem Statement: Use the boto3 library in Python to paginate through the job runs of an AWS Glue job that is created in your account.

Approach/Algorithm to solve this problem

  • Step 1: Import boto3 and botocore exceptions to handle exceptions.

  • Step 2: max_items, page_size and starting_token are the optional parameters for this function, while job_name is required.

    • max_items denotes the total number of records to return. If the number of available records exceeds max_items, a NextToken is provided in the response to resume pagination.

    • page_size denotes the size of each page.

    • starting_token helps to paginate, and it uses NextToken from a previous response.

  • Step 3: Create an AWS session using the boto3 library. Make sure region_name is mentioned in the default profile. If it is not mentioned, explicitly pass the region_name while creating the session.

  • Step 4: Create an AWS client for glue.

  • Step 5: Create a paginator object that contains details of all the job runs of a job, using get_job_runs.

  • Step 6: Call the paginate function and pass the max_items, page_size and starting_token as PaginationConfig parameter.

  • Step 7: It returns the number of records based on max_items and page_size.

  • Step 8: Handle the generic exception if something goes wrong while paginating.
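Before touching AWS, the interplay of max_items, page_size and starting_token can be sketched with a small local simulation. This is an illustrative sketch only − the paginate helper and the jr_* record names below are made up, not part of boto3:

```python
# Local simulation (no AWS calls) of how max_items, page_size and
# starting_token interact when paginating a list of job runs.
def paginate(records, max_items=None, page_size=10, starting_token=0):
    """Yield pages of records; the final page carries a NextToken
    when more records remain beyond max_items."""
    start = int(starting_token or 0)
    # max_items caps the total records returned across all pages
    end = len(records) if max_items is None else min(len(records), start + max_items)
    pos = start
    while pos < end:
        page = {"JobRuns": records[pos:min(pos + page_size, end)]}
        pos += page_size
        if pos >= end and end < len(records):
            # More records exist past the cap: expose a resume point
            page["NextToken"] = str(end)
        yield page

runs = [f"jr_{i}" for i in range(7)]
# Cap at 5 records, 2 per page -> pages of 2, 2, 1; last page has NextToken "5"
pages = list(paginate(runs, max_items=5, page_size=2))
# Resume from the saved token to fetch the remaining records
rest = list(paginate(runs, starting_token=pages[-1]["NextToken"]))
```

The same pattern applies to the real paginator: pass the NextToken you saved from the last page back in as starting_token on the next call.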

Example Code

Use the following code to paginate through all the job runs of a job created in your account −

import boto3
from botocore.exceptions import ClientError

def paginate_through_jobruns(job_name, max_items=None, page_size=None, starting_token=None):
   try:
      # Create a session and a low-level Glue client
      session = boto3.session.Session()
      glue_client = session.client('glue')
      # Paginator for the get_job_runs operation
      paginator = glue_client.get_paginator('get_job_runs')
      response = paginator.paginate(JobName=job_name, PaginationConfig={
         'MaxItems': max_items,
         'PageSize': page_size,
         'StartingToken': starting_token})
      return response
   except ClientError as e:
      raise Exception("boto3 client error in paginate_through_jobruns: " + e.__str__())
   except Exception as e:
      raise Exception("Unexpected error in paginate_through_jobruns: " + e.__str__())

a = paginate_through_jobruns("glue_test_job", 1, 5)
for page in a:
   print(page)


{'JobRuns': [
{'Id': 'jr_435b66cfe451adf5fa7c7f914be3c87d199616f52bd13bdd91bb1269f02db705', 'Attempt': 0, 'JobName': 'glue_test_job', 'StartedOn': datetime.datetime(2021, 1, 25, 22, 19, 56, 52000, tzinfo=tzlocal()), 'LastModifiedOn': datetime.datetime(2021, 1, 25, 22, 21, 50, 603000, tzinfo=tzlocal()), 'CompletedOn': datetime.datetime(2021, 1, 25, 22, 21, 50, 603000, tzinfo=tzlocal()), 'JobRunState': 'SUCCEEDED', 'Arguments': {'--additional-python-modules': 'pandas==1.1.5', '--enable-glue-datacatalog': 'true', '--extra-files': 's3://glue/job/test', '--job-bookmark-option': 'job-bookmark-disable', 'step': '0'}, 'PredecessorRuns': [], 'AllocatedCapacity': 2, 'ExecutionTime': 107, 'Timeout': 2880, 'MaxCapacity': 2.0, 'WorkerType': 'G.1X', 'NumberOfWorkers': 2, 'LogGroupName': '/aws-glue/jobs', 'GlueVersion': '2.0'}],
'NextToken': 'eyJleHBpcmF0aW9uIjp7InNlY29uZHMiOjE2MTc0NTQ0NDgsIm5hbm9zIjo2OTUwMDAwMDB9LCJsYXN0RXZhbHVhdGVkS2V5Ijp7ImpvYklkIjp7InMiOiJqXzdlYzIzNTYwOWRkMGVmYjRhNTgyNDU2YWVlZmQ4NmFlMTgwYTAyNDQ3NWY2ODRkMzc4YWFiZDBmYTk1MGJmMDcifSwicnVuSWQiOnsicyI6ImpyXzJjNDFkMmJmMzY1NGZhZGFhYzkzMjU1ZTY0OTkxOTg2YTE0Yjk2MjIyMTRlNDc4ZGNkOWE0ZTY5N2M3MGZmY2YifSwic3RhcnRlZE9uIjp7Im4iOiIxNjExMjA3MjcwODIyIn19fQ==',
'ResponseMetadata': {'RequestId': '1874370e-***********-40d', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Fri, 02 Apr 2021 12:54:08 GMT', 'content-type': 'application/x-amz-json-1.1', 'content-length': '6509', 'connection': 'keep-alive', 'x-amzn-requestid': '1874370e-**************40d'}, 'RetryAttempts': 0}}
Published on 15-Apr-2021 13:16:15