

How to use Boto3 library in Python to get the details of a crawler?
<p>Example: Get the details of a crawler, <strong>crawler_for_s3_file_job</strong>.</p><h2>Approach/Algorithm to solve this problem</h2><p><strong>Step 1</strong> − Import boto3 and botocore exceptions to handle exceptions.</p><p><strong>Step 2</strong> − <strong>crawler_names</strong> is the mandatory parameter. It is a list, so the user can pass multiple crawler names at a time to fetch their details.</p><p><strong>Step 3</strong> − Create an AWS session using the boto3 library. Make sure the <strong>region_name</strong> is mentioned in the default profile. If it is not, pass the <strong>region_name</strong> explicitly while creating the session.</p><p><strong>Step 4</strong> − Create an AWS client for glue.</p><p><strong>Step 5</strong> − Now call the <strong>batch_get_crawlers</strong> function and pass the <strong>crawler_names</strong>.</p><p><strong>Step 6</strong> − It returns the metadata of the crawlers.</p><p><strong>Step 7</strong> − Handle the generic exception if something goes wrong while fetching the details.</p><h2>Example</h2><p>Use the following code to fetch the details of a crawler −</p><pre class="prettyprint notranslate">
import boto3
from botocore.exceptions import ClientError

def get_crawler_details(crawler_names: list):
   # Uses the default profile; pass region_name here if it is not set there
   session = boto3.session.Session()
   glue_client = session.client('glue')
   try:
      # CrawlerNames expects a list of crawler names
      crawler_details = glue_client.batch_get_crawlers(CrawlerNames=crawler_names)
      return crawler_details
   except ClientError as e:
      raise Exception(
         "boto3 client error in get_crawler_details: " + e.__str__())
   except Exception as e:
      raise Exception(
         "Unexpected error in get_crawler_details: " + e.__str__())

print(get_crawler_details(['crawler_for_s3_file_job']))</pre>
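<p>The dictionary returned by <strong>batch_get_crawlers</strong> has a <strong>Crawlers</strong> key with the metadata of each crawler that was found and a <strong>CrawlersNotFound</strong> key listing names that did not match any crawler. A minimal sketch of summarising such a response is shown below; the sample dictionary is illustrative, not real AWS output, and the helper <strong>summarize_crawlers</strong> is a hypothetical name introduced here for the example.</p>

```python
# Illustrative sample shaped like a batch_get_crawlers response (not real AWS output)
sample_response = {
    'Crawlers': [
        {'Name': 'crawler_for_s3_file_job', 'State': 'READY'}
    ],
    'CrawlersNotFound': ['missing_crawler']
}

def summarize_crawlers(response: dict) -> dict:
    """Map each found crawler's name to its state, and list names not found."""
    return {
        'found': {c['Name']: c['State'] for c in response.get('Crawlers', [])},
        'not_found': response.get('CrawlersNotFound', [])
    }

print(summarize_crawlers(sample_response))
```

<p>Checking <strong>CrawlersNotFound</strong> this way is useful because <strong>batch_get_crawlers</strong> does not raise an error for names that do not exist; they are silently reported in that list instead.</p>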