urllib.robotparser - Parser for robots.txt in Python


Website owners use the /robots.txt file to give instructions about their site to web robots; this convention is called the Robots Exclusion Protocol. The file is a simple text-based access control mechanism for programs that automatically access web resources, commonly known as spiders or crawlers. It lists a user agent identifier followed by the URL paths that agent may not access.

For example:

#robots.txt
Sitemap: https://example.com/sitemap.xml
User-agent: *
Disallow: /admin/
Disallow: /downloads/
Disallow: /media/
Disallow: /static/

This file is usually put in the top-level directory of your web server.

Python's urllib.robotparser module provides the RobotFileParser class, which answers questions about whether or not a particular user agent can fetch a URL on the website that published the robots.txt file.

The following methods are defined in the RobotFileParser class:

set_url(url)

This method sets the URL referring to a robots.txt file.

read()

This method reads the robots.txt URL and feeds it to the parser.

parse(lines)

This method parses the lines argument, an iterable of lines from a robots.txt file, and builds the rule set from it without fetching anything over the network.
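
For instance, if the contents of a robots.txt file have already been obtained by some other means (shown below as a literal string purely for illustration), they can be split into lines and handed to parse() directly:

from urllib import robotparser

# rules obtained by some other means, shown here as a literal string
robots_txt = """User-agent: *
Disallow: /admin/
Disallow: /downloads/"""

parser = robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())   # parse() expects an iterable of lines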

can_fetch(useragent, url)

This method returns True if the useragent is allowed to fetch the url according to the rules contained in the parsed robots.txt file.
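
A minimal sketch of querying can_fetch(); the rules and the agent name 'MyBot' below are made-up placeholders, not taken from a real site:

from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.parse(['User-agent: *', 'Disallow: /admin/'])     # sample rules
print(parser.can_fetch('MyBot', '/admin/secret.html'))   # False - /admin/ is disallowed
print(parser.can_fetch('MyBot', '/index.html'))          # True - no rule blocks this path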

mtime()

This method returns the time the robots.txt file was last fetched.

modified()

This method sets the time the robots.txt file was last fetched to the current time.
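
In a long-running crawler, mtime() and modified() can be combined to decide when to re-fetch the rules. A rough sketch, where the URL and the one-hour threshold are only illustrative:

import time
from urllib import robotparser

parser = robotparser.RobotFileParser('https://example.com/robots.txt')
parser.read()
parser.modified()                     # record when the rules were fetched

# later: re-read the file if the cached copy is older than an hour (arbitrary threshold)
if time.time() - parser.mtime() > 3600:
    parser.read()
    parser.modified()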

crawl_delay(useragent)

This method returns the value of the Crawl-delay parameter in robots.txt for the useragent in question, or None if there is no such parameter.

request_rate(useragent)

This method returns the contents of the Request-rate parameter as a named tuple RequestRate(requests, seconds), or None if there is no such parameter.
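
Both values come straight from the robots.txt directives, and both methods were added in Python 3.6. A small sketch with made-up rules:

from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.parse([
    'User-agent: *',
    'Crawl-delay: 5',
    'Request-rate: 10/60',           # 10 requests per 60 seconds
])
print(parser.crawl_delay('MyBot'))   # 5
print(parser.request_rate('MyBot'))  # RequestRate(requests=10, seconds=60)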

Example

from urllib import parse
from urllib import robotparser
AGENT_NAME = 'PyMOTW'
URL_BASE = 'https://example.com/'
parser = robotparser.RobotFileParser()
parser.set_url(parse.urljoin(URL_BASE, 'robots.txt'))   # https://example.com/robots.txt
parser.read()                                           # download and parse the file
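
With the rules loaded, the parser can be asked about specific URLs. Continuing the example above, the paths below come from the sample robots.txt at the top of this article; the actual results depend on the real file served at example.com:

URL_PATHS = ['/', '/admin/', '/downloads/']
for path in URL_PATHS:
    url = parse.urljoin(URL_BASE, path)
    print('{}: {}'.format(url, parser.can_fetch(AGENT_NAME, url)))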
