How to use Boto3 to paginate through multi-part upload objects of an S3 bucket present in AWS
Problem Statement: Use the boto3 library in Python to paginate through the multi-part upload objects of an S3 bucket that is created in your account.
Approach/Algorithm to solve this problem
Step 1: Import boto3 and botocore exceptions to handle exceptions.
Step 2: prefix_name, max_items, page_size and starting_token are optional parameters for this function, while bucket_name is a required parameter.
prefix_name is the specific sub-folder through which the user wants to paginate.
max_items denotes the total number of records to return. If the number of available records is greater than max_items, a NextToken is provided in the response to resume pagination (see the sketch after these steps).
page_size denotes the number of records to return in each page.
starting_token helps to resume pagination; it uses the Marker from a previous response.
Step 3: Create an AWS session using the boto3 library. Make sure region_name is mentioned in the default profile. If it is not mentioned, explicitly pass the region_name while creating the session.
Step 4: Create an AWS client for S3.
Step 5: Create a paginator object that contains details of the multi-part uploads of an S3 bucket using list_multipart_uploads.
Step 6: Call the paginate function and pass max_items, page_size and starting_token as the PaginationConfig parameter, bucket_name as the Bucket parameter and prefix_name as the Prefix parameter.
Step 7: It returns the records based on max_items and page_size.
Step 8: Handle the generic exception if something goes wrong while paginating.
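As a brief illustration of the NextToken/starting_token mechanism described in Step 2, here is a minimal sketch, assuming a placeholder bucket named 'example-bucket' (not from the original article). It iterates over the pages produced by the list_multipart_uploads paginator and then reads the iterator's resume_token, which can be passed back as StartingToken to continue pagination in a later call.
import boto3

# Minimal sketch: paginate multi-part uploads and capture a resume token.
# 'example-bucket' is a placeholder bucket name used only for illustration.
s3_client = boto3.session.Session().client('s3')
paginator = s3_client.get_paginator('list_multipart_uploads')

# First pass: return at most 2 records, 1 record per page
page_iterator = paginator.paginate(
   Bucket='example-bucket',
   PaginationConfig={'MaxItems': 2, 'PageSize': 1}
)
for page in page_iterator:
   print(page.get('Uploads', []))

# If more records remain, resume_token is populated after iteration;
# pass it as StartingToken to pick up where the previous call stopped.
token = page_iterator.resume_token
if token:
   resumed = paginator.paginate(
      Bucket='example-bucket',
      PaginationConfig={'StartingToken': token}
   )
   for page in resumed:
      print(page.get('Uploads', []))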
Example Code
Use the following code to paginate through the multi-part uploads of an S3 bucket created in the user's account −
import boto3
from botocore.exceptions import ClientError

def paginate_through_multipart_upload_s3_bucket(bucket_name, prefix_name=None, max_items=None, page_size=None, starting_token=None):
   # Create an AWS session; region_name is taken from the default profile
   session = boto3.session.Session()
   # Create an AWS client for S3
   s3_client = session.client('s3')
   try:
      # Create a paginator for the multi-part uploads of the bucket
      paginator = s3_client.get_paginator('list_multipart_uploads')
      response = paginator.paginate(Bucket=bucket_name, Prefix=prefix_name, PaginationConfig={
         'MaxItems': max_items,
         'PageSize': page_size,
         'StartingToken': starting_token}
      )
      return response
   except ClientError as e:
      raise Exception("boto3 client error in paginate_through_multipart_upload_s3_bucket: " + e.__str__())
   except Exception as e:
      raise Exception("Unexpected error in paginate_through_multipart_upload_s3_bucket: " + e.__str__())

a = paginate_through_multipart_upload_s3_bucket('s3-test-bucket', 'testfolder', 2, 5)
print(*a)
Output
{'ResponseMetadata': {'RequestId': 'YA9CGTAAX', 'HostId': '8dqJW******************', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amz-id-2': '8dqJW*********************', 'x-amz-request-id': 'YA9CGTAAX', 'date': 'Sat, 03 Apr 2021 08:16:05 GMT', 'content-type': 'application/xml', 'transfer-encoding': 'chunked', 'server': 'AmazonS3'}, 'RetryAttempts': 0}, 'Bucket': 's3-test-bucket', 'KeyMarker': '', 'UploadIdMarker': '', 'NextKeyMarker': '', 'Prefix': 'testfolder', 'NextUploadIdMarker': '', 'MaxUploads': 5, 'IsTruncated': False, 'Uploads': [
{'UploadId': 'YADF**************LK25',
'Key': 'testfolder/testfilemultiupload.csv',
'Initiated': datetime(2021, 1, 2),
'StorageClass': 'STANDARD',
'Owner': {
'DisplayName': 'AmazonServicesJob',
'Id': '********************'
}
}
], 'CommonPrefixes': None}
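If you simply want all matching uploads aggregated into a single response instead of iterating page by page, the paginator's build_full_result() helper can be used. The following is a small sketch under the same assumptions as the example above (the bucket and prefix names are illustrative); it collects the 'Uploads' entries from every page and, when the result is truncated by MaxItems, also returns a 'NextToken' that can later be supplied as starting_token.
import boto3

# Sketch: aggregate all pages into one result; names below are illustrative.
s3_client = boto3.session.Session().client('s3')
paginator = s3_client.get_paginator('list_multipart_uploads')

result = paginator.paginate(
   Bucket='s3-test-bucket',
   Prefix='testfolder',
   PaginationConfig={'MaxItems': 2, 'PageSize': 5}
).build_full_result()

# All uploads gathered from the iterated pages
for upload in result.get('Uploads', []):
   print(upload['Key'], upload['UploadId'])

# Present only when more records remain; reusable as starting_token
print(result.get('NextToken'))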