How to get the list of all crawlers present in an AWS account using Boto3

In this article, we will see how a user can get the list of all crawlers present in an AWS account.


Problem Statement: Use the boto3 library in Python to get the list of all crawlers.

Approach/Algorithm to solve this problem

  • Step 1: Import boto3 and botocore exceptions to handle exceptions.

  • Step 2: The list_crawlers function does not require any parameters.

  • Step 3: Create an AWS session using boto3 lib. Make sure region_name is mentioned in the default profile. If it is not mentioned, then explicitly pass the region_name while creating the session.

  • Step 4: Create an AWS client for glue.

  • Step 5: Now call the list_crawlers function to fetch the crawler details.

  • Step 6: It returns the list of all crawlers present in the AWS Glue Data Catalog.

  • Step 7: Handle the generic exception if something goes wrong while fetching the list of crawlers.

Example Code

The following code fetches the list of all crawlers −

import boto3
from botocore.exceptions import ClientError

def list_of_crawlers():
   # Create a session; region_name is picked up from the default profile,
   # or pass it explicitly, e.g. boto3.session.Session(region_name='us-east-1')
   session = boto3.session.Session()
   glue_client = session.client('glue')
   try:
      # Fetch the crawlers present in the AWS Glue Data Catalog
      crawler_details = glue_client.list_crawlers()
      return crawler_details
   except ClientError as e:
      raise Exception("boto3 client error in list_of_crawlers: " + e.__str__())
   except Exception as e:
      raise Exception("Unexpected error in list_of_crawlers: " + e.__str__())

print(list_of_crawlers())


Output

{'CrawlerNames': ['crawler_for_s3_file_job', 'crawler_for_employee_data', 'crawler_for_security_data'], 'ResponseMetadata': {'RequestId': 'a498ba4a-7ba4-47d3-ad81-d86287829c1d', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Sat, 13 Feb 2021 14:04:03 GMT', 'content-type': 'application/x-amz-json-1.1', 'content-length': '830', 'connection': 'keep-alive', 'x-amzn-requestid': 'a498ba4a-7ba4-47d3-ad81-d86287829c1d'}, 'RetryAttempts': 0}}