Python - URL Processing

On the Internet, resources are identified by URLs (Uniform Resource Locators). The urllib package, bundled with Python's standard library, provides several utilities for handling URLs. It has the following modules −

  • urllib.parse module is used for parsing a URL into its parts.

  • urllib.request module contains functions for opening and reading URLs.

  • urllib.error module carries definitions of the exceptions raised by urllib.request.

  • urllib.robotparser module parses robots.txt files.

The urllib.parse Module

This module provides a standard interface for obtaining the various parts of a URL string. It contains the following functions −


urlparse() function

This function parses a URL into six components, returning a 6-item named tuple. Each tuple item is a string corresponding to the following attributes −

Attribute   Index   Value
scheme      0       URL scheme specifier
netloc      1       Network location part
path        2       Hierarchical path
params      3       Parameters for last path element
query       4       Query component
fragment    5       Fragment identifier
username    -       User name
password    -       Password
hostname    -       Host name (lower case)
port        -       Port number as integer, if present
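
The decoded attributes in the lower half of the table can be illustrated with a URL that carries credentials and a port. A minimal sketch, where the host and credentials are placeholders:

```python
from urllib.parse import urlparse

# hypothetical URL with credentials and a port, for illustration only
parsed = urlparse("https://user:secret@Example.com:8080/docs/index.html")

print(parsed.username)   # user
print(parsed.password)   # secret
print(parsed.hostname)   # example.com (always lower-cased)
print(parsed.port)       # 8080, as an integer
```

Note that hostname is lower-cased and port is converted to an integer when these attributes are read.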


from urllib.parse import urlparse
# example.com is a placeholder host used for illustration
url = "https://example.com/employees/name/?salary>=25000"
parsed_url = urlparse(url)
print (type(parsed_url))
print ("Scheme:", parsed_url.scheme)
print ("netloc:", parsed_url.netloc)
print ("path:", parsed_url.path)
print ("params:", parsed_url.params)
print ("Query string:", parsed_url.query)
print ("Fragment:", parsed_url.fragment)

It will produce the following output

<class 'urllib.parse.ParseResult'>
Scheme: https
netloc: example.com
path: /employees/name/
params:
Query string: salary>=25000
Fragment:


parse_qs() function

This function parses a query string given as a string argument, and returns the data as a dictionary. The dictionary keys are the unique query variable names and the values are lists of values for each name.

To fetch the query parameters from the query string into a dictionary, pass the query attribute of the ParseResult object to the parse_qs() function as follows −

from urllib.parse import urlparse, parse_qs
# example.com is a placeholder host used for illustration
url = "https://example.com/employees?name=Anand&salary=25000"
parsed_url = urlparse(url)
dct = parse_qs(parsed_url.query)
print ("Query parameters:", dct)

It will produce the following output

Query parameters: {'name': ['Anand'], 'salary': ['25000']}


urlsplit() function

This is similar to urlparse(), but it does not split the params from the URL. It should generally be used instead of urlparse() if the more recent URL syntax allowing parameters to be applied to each segment of the path portion of the URL is wanted.
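
A minimal sketch of urlsplit(), using a placeholder host. Unlike urlparse(), it returns a 5-item SplitResult with no params field:

```python
from urllib.parse import urlsplit

# example.com is a placeholder host; urlsplit() returns five parts, not six
parts = urlsplit("https://example.com/employees/name/?salary=25000")

print(type(parts))    # <class 'urllib.parse.SplitResult'>
print(len(parts))     # 5 (scheme, netloc, path, query, fragment)
print(parts.scheme)   # https
print(parts.path)     # /employees/name/
print(parts.query)    # salary=25000
```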


urlunparse() function

This function is the opposite of urlparse(). It constructs a URL from a tuple as returned by urlparse(). The parts argument can be any six-item iterable. It returns an equivalent URL.


from urllib.parse import urlunparse

# example.com is a placeholder host used for illustration
lst = ['https', 'example.com', '/employees/name/', '', 'salary>=25000', '']
new_url = urlunparse(lst)
print ("URL:", new_url)

It will produce the following output

URL: https://example.com/employees/name/?salary>=25000

urlunsplit() function

This function combines the elements of a tuple as returned by urlsplit() into a complete URL as a string. The parts argument can be any five-item iterable.
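
A minimal sketch of urlunsplit(), with example.com again standing in as a placeholder host:

```python
from urllib.parse import urlunsplit

# five items: scheme, netloc, path, query, fragment
parts = ('https', 'example.com', '/employees/name/', 'salary=25000', '')
new_url = urlunsplit(parts)
print("URL:", new_url)   # URL: https://example.com/employees/name/?salary=25000
```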

The urllib.request Module

This module defines functions and classes which help in opening URLs.

urlopen() function

This function opens the given URL, which can be either a string or a Request object. The optional timeout parameter specifies a timeout in seconds for blocking operations. This actually only works for HTTP, HTTPS and FTP connections.

This function always returns an object which can work as a context manager and has the properties url, headers, and status.

For HTTP and HTTPS URLs, this function returns a slightly modified http.client.HTTPResponse object.
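
The context-manager usage and the response properties can be sketched without network access by opening a data: URL, whose payload is embedded in the URL itself; the same pattern applies to http(s) URLs:

```python
from urllib.request import urlopen

# a "data:" URL carries its payload inline, so no network access is needed;
# an http(s) response would additionally expose a numeric status attribute
with urlopen("data:text/plain,Hello%20World") as resp:
    print(resp.url)                       # the URL that was opened
    print(resp.headers["Content-Type"])   # text/plain
    print(resp.read().decode())           # Hello World
```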


The following code uses the urlopen() function to read binary data from an image file and write it to a local file. You can then open the image file on your computer with any image viewer.

from urllib.request import urlopen
# hypothetical image URL; replace with a real one
obj = urlopen("https://example.com/images/logo.jpg")
data = obj.read()
img = open("img.jpg", "wb")
img.write(data)
img.close()


The Request Object

The urllib.request module includes the Request class. This class is an abstraction of a URL request. The constructor requires one mandatory argument: a string containing a valid URL.


urllib.request.Request(url, data=None, headers={}, origin_req_host=None, unverifiable=False, method=None)


  • url − A string that is a valid URL.

  • data − An object specifying additional data to send to the server. This parameter can only be used with HTTP requests. Data may be bytes, file-like objects, or iterables of bytes-like objects.

  • headers − Should be a dictionary of headers and their associated values.

  • origin_req_host − Should be the request-host of the origin transaction.

  • method − A string that indicates the HTTP request method: GET, POST, PUT, DELETE or another HTTP verb. The default is GET, or POST when data is supplied.


from urllib.request import Request
# example.com is a placeholder host used for illustration
obj = Request("https://example.com")

This Request object can now be passed as an argument to the urlopen() function.

from urllib.request import Request, urlopen
# example.com is a placeholder host used for illustration
obj = Request("https://example.com")
resp = urlopen(obj)

The urlopen() function returns an HTTPResponse object. Calling its read() method fetches the resource at the given URL.

from urllib.request import Request, urlopen
# example.com is a placeholder host used for illustration
obj = Request("https://example.com")
resp = urlopen(obj)
data = resp.read()
print (data)

Sending Data

If you pass the data argument to the Request constructor, a POST request is sent to the server. The data must be an object in bytes format.


from urllib.request import Request, urlopen
from urllib.parse import urlencode

values = {'name': 'Madhu',
   'location': 'India',
   'language': 'Hindi' }
data = urlencode(values).encode('utf-8')
# example.com is a placeholder host used for illustration
obj = Request("https://example.com", data)

Sending Headers

The Request constructor also accepts a headers argument to push header information into the request. It should be a dictionary object.

user_agent = 'Mozilla/5.0'   # a hypothetical user-agent string
headers = {'User-Agent': user_agent}
obj = Request("https://example.com", data, headers)

The urllib.error Module

The following exceptions are defined in the urllib.error module −


URLError

URLError is raised when there is no network connection (no route to the specified server), or the specified server doesn't exist. In this case, the exception raised has a 'reason' attribute.

from urllib.request import Request, urlopen
import urllib.error as err

# a hypothetical URL; the server here responds with HTTP 403 Forbidden,
# and HTTPError is a subclass of URLError, so it is caught below
obj = Request("https://example.com/forbidden")
try:
   resp = urlopen(obj)
except err.URLError as e:
   print (e)

It will produce the following output

HTTP Error 403: Forbidden


HTTPError

Every time the server sends an HTTP response, it is associated with a numeric "status code". The code indicates why the server is unable to fulfil the request. The default handlers will handle some of these responses for you. For those it can't handle, the urlopen() function raises an HTTPError. Typical examples of HTTPErrors are '404' (page not found), '403' (request forbidden), and '401' (authentication required).

from urllib.request import Request, urlopen
import urllib.error as err

# a hypothetical URL that does not exist on the server
obj = Request("https://example.com/nosuchpage")
try:
   resp = urlopen(obj)
except err.HTTPError as e:
   print (e)
