How to use Boto3 to stop a crawler in AWS Glue Data Catalog


In this article, we will see how a user can stop a crawler registered in the AWS Glue Data Catalog.

Example

Problem Statement: Use the boto3 library in Python to stop a crawler.

Approach/Algorithm to solve this problem

  • Step 1: Import boto3 and botocore exceptions to handle exceptions.

  • Step 2: crawler_name is the required parameter in this function. It is the name of the crawler to stop.

  • Step 3: Create an AWS session using the boto3 library. Make sure region_name is mentioned in the default profile. If it is not mentioned, then explicitly pass the region_name while creating the session.

  • Step 4: Create an AWS client for glue.

  • Step 5: Now call the stop_crawler function, passing crawler_name as the Name parameter.

  • Step 6: It returns the response metadata and stops the crawler if it is running; otherwise it throws the exception – CrawlerNotRunningException.

  • Step 7: Handle the generic exception if something goes wrong while stopping the crawler.

Example Code

The following code stops a crawler −

import boto3
from botocore.exceptions import ClientError

def stop_a_crawler(crawler_name):
   session = boto3.session.Session()
   glue_client = session.client('glue')
   try:
      response = glue_client.stop_crawler(Name=crawler_name)
      return response
   except ClientError as e:
      raise Exception("boto3 client error in stop_a_crawler: " + e.__str__())
   except Exception as e:
      raise Exception("Unexpected error in stop_a_crawler: " + e.__str__())
print(stop_a_crawler("Data Dimension"))

Output

{'ResponseMetadata': {'RequestId': '73e50130-*****************8e', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Sun, 28 Mar 2021 07:26:55 GMT', 'content-type': 'application/x-amz-json-1.1', 'content-length': '2', 'connection': 'keep-alive', 'x-amzn-requestid': '73e50130-***************8e'}, 'RetryAttempts': 0}}
raja
Published on 15-Apr-2021 13:00:57