Get Confirmed, Recovered, and Death Cases of COVID-19 Around the Globe Using Python

The COVID-19 pandemic has affected billions of lives worldwide. Many applications were built to track and report accurate counts of confirmed, recovered, and death cases, and fetching this data is a common task for developers building pandemic-related tools. In this article, we will explore three different methods to retrieve COVID-19 case statistics using Python.

Method 1: Using APIs

APIs (Application Programming Interfaces) enable software applications to interact with each other by defining protocols for data exchange and functionality access. Web APIs, often based on HTTP, allow developers to access data and services over the internet by making HTTP requests to specific endpoints.

For our example, we will use the Disease.sh API, which returns global COVID-19 statistics as JSON:

import requests
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

def fetch_data(url):
    response = requests.get(url)
    # Raise an exception for non-2xx responses before parsing
    response.raise_for_status()
    return response.json()

def process_data(data):
    df = pd.DataFrame(data, index=[0])
    return df

def analyze_cases(df):
    confirmed_cases = df['cases'].iloc[0]
    recovered_cases = df['recovered'].iloc[0]
    death_cases = df['deaths'].iloc[0]
    return confirmed_cases, recovered_cases, death_cases

def visualize_data(confirmed_cases, recovered_cases, death_cases):
    labels = ['Confirmed', 'Recovered', 'Deaths']
    values = [confirmed_cases, recovered_cases, death_cases]
    
    plt.figure(figsize=(8, 6))
    sns.barplot(x=labels, y=values)
    plt.xlabel("Cases")
    plt.ylabel("Count")
    plt.title("Global COVID-19 Cases")
    plt.show()

def main():
    url = "https://disease.sh/v3/covid-19/all"  
    data = fetch_data(url)
    df = process_data(data)
    confirmed_cases, recovered_cases, death_cases = analyze_cases(df)
    
    print("Global COVID-19 Cases:")
    print(f"Confirmed cases: {confirmed_cases:,}")
    print(f"Recovered cases: {recovered_cases:,}")
    print(f"Death cases: {death_cases:,}")
    
    visualize_data(confirmed_cases, recovered_cases, death_cases)

if __name__ == '__main__':
    main()

The output displays global COVID-19 statistics (the figures will vary with the date of the request):

Global COVID-19 Cases:
Confirmed cases: 690,148,376
Recovered cases: 662,646,473
Death cases: 6,890,206
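The JSON returned by the /v3/covid-19/all endpoint is a flat object whose cases, recovered, and deaths keys hold the global totals. The extraction logic can be exercised offline against a stubbed response body; the payload below uses made-up numbers purely for illustration, not live data:

```python
import json

# A stubbed response body mimicking the shape of disease.sh's
# /v3/covid-19/all endpoint (values here are illustrative only)
sample_body = '{"cases": 1000, "recovered": 900, "deaths": 50, "updated": 0}'

# requests' response.json() performs this same deserialization step
data = json.loads(sample_body)
confirmed, recovered, deaths = data['cases'], data['recovered'], data['deaths']
print(f"Confirmed: {confirmed:,}  Recovered: {recovered:,}  Deaths: {deaths:,}")
```

Testing against a stub like this lets you verify the parsing code without depending on network access or on the live values changing.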

Method 2: Using Web Scraping with BeautifulSoup

BeautifulSoup is a popular Python library for web scraping that helps parse HTML and XML documents. Combined with the requests library for HTTP interactions, it provides a powerful method to extract data from web pages.

import requests
from bs4 import BeautifulSoup
import matplotlib.pyplot as plt
import seaborn as sns

def fetch_data(url):
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'html.parser')
    return soup

def extract_cases(soup):
    # Worldometers renders three 'maincounter-number' blocks,
    # in this order: confirmed cases, deaths, recovered
    counters = soup.find_all('div', class_='maincounter-number')
    confirmed_cases = int(counters[0].span.text.replace(',', ''))
    death_cases = int(counters[1].span.text.replace(',', ''))
    recovered_cases = int(counters[2].span.text.replace(',', ''))
    return confirmed_cases, recovered_cases, death_cases

def visualize_data(confirmed_cases, recovered_cases, death_cases):
    labels = ['Confirmed', 'Recovered', 'Deaths']
    values = [confirmed_cases, recovered_cases, death_cases]
    
    plt.figure(figsize=(8, 6))
    sns.barplot(x=labels, y=values)
    plt.xlabel("Cases")
    plt.ylabel("Count")
    plt.title("Global COVID-19 Cases")
    plt.show()

def main():
    url = "https://www.worldometers.info/coronavirus/" 
    soup = fetch_data(url)
    confirmed_cases, recovered_cases, death_cases = extract_cases(soup)
    
    print("Global COVID-19 Cases:")
    print(f"Confirmed cases: {confirmed_cases:,}")
    print(f"Recovered cases: {recovered_cases:,}")
    print(f"Death cases: {death_cases:,}")
    
    visualize_data(confirmed_cases, recovered_cases, death_cases)

if __name__ == '__main__':
    main()

The output displays global COVID-19 statistics:

Global COVID-19 Cases:
Confirmed cases: 690,148,376
Recovered cases: 662,646,473
Death cases: 6,890,206
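Because the extraction depends on the page's markup, it is worth verifying the selector logic against a small HTML snippet before pointing it at the live site. The snippet below is a simplified mock of the counter markup described above; the real page's structure may differ or change over time, so treat the selectors as assumptions:

```python
from bs4 import BeautifulSoup

# A simplified mock of the 'maincounter-number' markup;
# order on the live page is: confirmed, deaths, recovered
html = """
<div class="maincounter-number"><span>690,148,376</span></div>
<div class="maincounter-number"><span>6,890,206</span></div>
<div class="maincounter-number"><span>662,646,473</span></div>
"""

soup = BeautifulSoup(html, 'html.parser')
counters = soup.find_all('div', class_='maincounter-number')
confirmed = int(counters[0].span.text.replace(',', ''))
deaths = int(counters[1].span.text.replace(',', ''))
recovered = int(counters[2].span.text.replace(',', ''))
print(confirmed, deaths, recovered)
```

If the live site reorders or renames these elements, the index-based extraction will silently return the wrong figures, so a quick sanity check like this is cheap insurance.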

Method 3: Using Web Scraping with Selenium

Selenium is a powerful library for automating web browsers, allowing programmatic control of browsers like Chrome, Firefox, Safari, and Edge. It works through WebDriver, which bridges the Selenium library and the browser. This method is useful for dynamic websites that load content via JavaScript.

from selenium import webdriver
from bs4 import BeautifulSoup
import matplotlib.pyplot as plt
import seaborn as sns

def get_html(url):
    # Selenium 4 downloads a matching ChromeDriver automatically
    driver = webdriver.Chrome()
    driver.get(url)
    html = driver.page_source
    driver.quit()
    return html

def extract_cases(soup):
    # Counters appear in the order: confirmed cases, deaths, recovered
    counters = soup.find_all('div', class_='maincounter-number')
    confirmed_cases = int(counters[0].span.text.replace(',', ''))
    death_cases = int(counters[1].span.text.replace(',', ''))
    recovered_cases = int(counters[2].span.text.replace(',', ''))
    return confirmed_cases, recovered_cases, death_cases

def visualize_data(confirmed_cases, recovered_cases, death_cases):
    labels = ['Confirmed', 'Recovered', 'Deaths']
    values = [confirmed_cases, recovered_cases, death_cases]
    
    plt.figure(figsize=(8, 6))
    sns.barplot(x=labels, y=values)
    plt.xlabel("Cases")
    plt.ylabel("Count")
    plt.title("Global COVID-19 Cases")
    plt.show()

def main():
    url = "https://www.worldometers.info/coronavirus/"
    html = get_html(url)
    soup = BeautifulSoup(html, 'html.parser')
    confirmed_cases, recovered_cases, death_cases = extract_cases(soup)
    
    print("Global COVID-19 Cases:")
    print(f"Confirmed cases: {confirmed_cases:,}")
    print(f"Recovered cases: {recovered_cases:,}")
    print(f"Death cases: {death_cases:,}")
    
    visualize_data(confirmed_cases, recovered_cases, death_cases)

if __name__ == '__main__':
    main()

The output displays global COVID-19 statistics:

Global COVID-19 Cases:
Confirmed cases: 690,148,376
Recovered cases: 662,646,473
Death cases: 6,890,206

Comparison of Methods

Method          Best For           Pros                                       Cons
API             Real-time data     Fast, reliable, structured data            Dependent on API availability
BeautifulSoup   Static websites    Simple, lightweight                        Cannot handle JavaScript
Selenium        Dynamic websites   Handles JavaScript, full browser control   Slower, resource-intensive

Conclusion

Python offers multiple approaches for retrieving COVID-19 statistical data. APIs provide the most reliable method for structured data, while web scraping with BeautifulSoup works well for static content, and Selenium handles dynamic websites effectively. Choose the method based on your specific requirements and data source characteristics.

Updated on: 2026-03-27T08:21:41+05:30
