How to use Boto3 library in Python to get the details of a crawler?


<p>Example: Get the details of a crawler, <strong>crawler_for_s3_file_job</strong>.</p><h2>Approach/Algorithm to solve this problem</h2><p><strong>Step 1</strong> &minus; Import boto3 and botocore exceptions to handle exceptions.</p><p><strong>Step 2</strong> &minus; <strong>crawler_names</strong> is the mandatory parameter. It is a list, so the user can send multiple crawler names at a time to fetch their details.</p><p><strong>Step 3</strong> &minus; Create an AWS session using the boto3 library. Make sure the <strong>region_name</strong> is mentioned in the default profile. If it is not mentioned, then explicitly pass the <strong>region_name</strong> while creating the session.</p><p><strong>Step 4</strong> &minus; Create an AWS client for glue.</p><p><strong>Step 5</strong> &minus; Now use the <strong>batch_get_crawlers</strong> function and pass the <strong>crawler_names</strong>.</p><p><strong>Step 6</strong> &minus; It returns the metadata of the crawlers.</p><p><strong>Step 7</strong> &minus; Handle the generic exception if something goes wrong while checking the crawlers.</p><h2>Example</h2><p>Use the following code to fetch the details of a crawler &minus;</p><pre class="prettyprint notranslate">import boto3
from botocore.exceptions import ClientError

def get_crawler_details(crawler_names: list):
   session = boto3.session.Session()
   glue_client = session.client('glue')
   try:
      crawler_details = glue_client.batch_get_crawlers(CrawlerNames=crawler_names)
      return crawler_details
   except ClientError as e:
      raise Exception("boto3 client error in get_crawler_details: " + e.__str__())
   except Exception as e:
      raise Exception("Unexpected error in get_crawler_details: " + e.__str__())

print(get_crawler_details(['crawler_for_s3_file_job']))</pre>
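<p>The response returned by <strong>batch_get_crawlers</strong> is a dictionary containing a <strong>Crawlers</strong> list (one metadata entry per crawler found) and a <strong>CrawlersNotFound</strong> list of names that did not match any crawler. As a minimal sketch of working with that response offline, the snippet below parses a hypothetical, trimmed-down sample response (a real response contains many more fields, such as <strong>Targets</strong>, <strong>Role</strong> and <strong>Schedule</strong>) rather than making a live AWS call:</p>

```python
# Hypothetical, trimmed-down example of a batch_get_crawlers-style response;
# the real AWS Glue response has the same top-level keys but richer entries.
sample_response = {
    'Crawlers': [
        {'Name': 'crawler_for_s3_file_job', 'State': 'READY', 'DatabaseName': 'test_db'},
    ],
    'CrawlersNotFound': ['no_such_crawler'],
}

def summarize_crawlers(response: dict) -> dict:
    # Map each found crawler's name to its state; collect missing names separately.
    return {
        'found': {c['Name']: c['State'] for c in response.get('Crawlers', [])},
        'missing': response.get('CrawlersNotFound', []),
    }

print(summarize_crawlers(sample_response))
```

<p>In a real workflow, you would pass the dictionary returned by <strong>get_crawler_details</strong> into a helper like this to check, for example, whether a crawler is in the <strong>READY</strong> state before starting it.</p>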
Updated on 22-Mar-2021 08:24:08