Web site owners use the /robots.txt file to give instructions about their site to web robots; this is called the Robots Exclusion Protocol. The file is a simple, text-based access control mechanism for programs that automatically access web resources, commonly called spiders, crawlers, or bots. It names a user agent and then lists the URL paths that agent may not access.
# robots.txt
Sitemap: https://example.com/sitemap.xml

User-agent: *
Disallow: /admin/
Disallow: /downloads/
Disallow: /media/
Disallow: /static/
This file is usually put in the top-level directory of your web server.
Python's urllib.robotparser module provides the RobotFileParser class, which answers questions about whether or not a particular user agent can fetch a URL on the web site that published the robots.txt file.
The following methods are defined in the RobotFileParser class:

set_url(url) - Sets the URL referring to a robots.txt file.

read() - Reads the robots.txt URL and feeds it to the parser.

parse(lines) - Parses the lines argument.

can_fetch(useragent, url) - Returns True if the useragent is allowed to fetch the url according to the rules contained in the parsed robots.txt file.

mtime() - Returns the time the robots.txt file was last fetched.

modified() - Sets the time the robots.txt file was last fetched to the current time.

crawl_delay(useragent) - Returns the value of the Crawl-delay parameter from robots.txt for the useragent in question.

request_rate(useragent) - Returns the contents of the Request-rate parameter as a named tuple RequestRate(requests, seconds).
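As a quick illustration, these methods can be exercised without any network access by handing parse() the file contents directly. The rules, agent name, and URL below are made up for this sketch:

```python
from urllib import robotparser

# Hypothetical rules for this sketch; parse() accepts the file
# contents as a list of lines, so no network request is made.
RULES = """\
User-agent: *
Disallow: /admin/
Crawl-delay: 10
Request-rate: 3/15
""".splitlines()

rp = robotparser.RobotFileParser()
rp.set_url('https://example.com/robots.txt')
rp.parse(RULES)

print(rp.can_fetch('PyMOTW', '/admin/'))      # False - a disallowed path
print(rp.can_fetch('PyMOTW', '/index.html'))  # True - an allowed path
print(rp.crawl_delay('PyMOTW'))               # 10
print(rp.request_rate('PyMOTW'))              # RequestRate(requests=3, seconds=15)
print(rp.mtime() > 0)                         # True - parse() records a fetch time
```

Note that crawl_delay() and request_rate() return None when the parsed file contains no matching directive for the agent.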
from urllib import parse
from urllib import robotparser

AGENT_NAME = 'PyMOTW'
URL_BASE = 'https://example.com/'
parser = robotparser.RobotFileParser()
parser.set_url(parse.urljoin(URL_BASE, 'robots.txt'))
parser.read()
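Because read() performs a real HTTP request, it may fail when the site is unreachable. As an offline sketch, the rules from the sample robots.txt shown earlier can be fed to parse() instead, and then can_fetch() answers questions about specific URLs (the paths below are illustrative):

```python
from urllib import parse
from urllib import robotparser

AGENT_NAME = 'PyMOTW'
URL_BASE = 'https://example.com/'

# Rules copied from the sample robots.txt above, supplied directly
# so that no network request is needed.
ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
Disallow: /downloads/
Disallow: /media/
Disallow: /static/
"""

rp = robotparser.RobotFileParser()
rp.set_url(parse.urljoin(URL_BASE, 'robots.txt'))
rp.parse(ROBOTS_TXT.splitlines())

# can_fetch() accepts absolute URLs as well as bare paths.
for path in ('/', '/admin/', '/downloads/report.pdf'):
    url = parse.urljoin(URL_BASE, path)
    print('{!s:>45}: {}'.format(url, rp.can_fetch(AGENT_NAME, url)))
```

The loop prints True for the site root and False for the paths under the Disallow rules.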