How to Get Daily News Using Python?

Daily News is updated content released every day, covering global events across politics, sports, entertainment, science, and technology. Python provides powerful tools to gather and process news articles from multiple sources automatically.

Python's requests and Beautiful Soup libraries enable web scraping to extract news headlines and content from news websites. This approach allows you to create automated daily news summaries programmatically.

Note − Output may vary as daily news content is constantly updated.

Required Libraries

Install the necessary packages using pip −

pip install requests beautifulsoup4

Library Overview

requests − Makes HTTP requests to fetch web page content

BeautifulSoup − Parses HTML and extracts specific elements like headlines

Method 1: Basic News Headline Extraction

This method extracts the text of every h3 element on the BBC News homepage −

import requests
from bs4 import BeautifulSoup

url = 'https://www.bbc.com/news'
response = requests.get(url)

soup = BeautifulSoup(response.text, 'html.parser')
headlines = soup.find('body').find_all('h3')

print("BBC News Headlines:")
print("-" * 30)
for x in headlines:
    print(x.text.strip())

Output

BBC News Headlines:
------------------------------
Ukraine war: Latest developments
Tech giant reports record profits
Climate summit reaches agreement
Sports: Championship results
Breaking: Economic policy changes
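Because the live page changes constantly, it helps to see the same extraction logic run against a fixed HTML snippet. The markup below is a made-up stand-in for response.text from a news page, so this sketch runs without a network request −

```python
from bs4 import BeautifulSoup

# Hypothetical static HTML standing in for a fetched news page
html = """
<body>
  <h1>Site banner</h1>
  <h3> Ukraine war: Latest developments </h3>
  <h3>Tech giant reports record profits</h3>
</body>
"""

soup = BeautifulSoup(html, 'html.parser')
# Same selection as above: every <h3> inside <body>, whitespace stripped
headlines = [h.text.strip() for h in soup.find('body').find_all('h3')]
print(headlines)
# ['Ukraine war: Latest developments', 'Tech giant reports record profits']
```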

Method 2: Filtered News Headlines

Remove unwanted navigation elements and empty entries −

import requests
from bs4 import BeautifulSoup

url = 'https://www.bbc.com/news'
response = requests.get(url)

soup = BeautifulSoup(response.text, 'html.parser')
headlines = soup.find('body').find_all('h3')

# Filter out navigation and non-news content
unwanted = [
    'BBC World News TV', 
    'BBC World Service Radio', 
    'News daily newsletter', 
    'Mobile app', 
    'Get in touch'
]

print("Filtered BBC News Headlines:")
print("-" * 35)
for x in headlines:
    headline_text = x.text.strip()
    if headline_text and headline_text not in unwanted:
        print(f"• {headline_text}")

Output

Filtered BBC News Headlines:
-----------------------------------
• Breaking: Market volatility continues
• International summit concludes
• New technology breakthrough announced
• Sports update: Championship finals
• Weather warning issued nationwide
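The loop above skips unwanted entries, but a headline that appears in two h3 tags will still print twice. A small helper (the name filter_headlines is hypothetical) can drop duplicates as well, using a set to track what has already been seen −

```python
def filter_headlines(headlines, unwanted):
    """Return non-empty headlines, skipping unwanted entries and duplicates."""
    seen = set()
    result = []
    for text in headlines:
        text = text.strip()
        if text and text not in unwanted and text not in seen:
            seen.add(text)
            result.append(text)
    return result

sample = ['Breaking: Market volatility continues', 'Mobile app',
          'Breaking: Market volatility continues', '', 'Weather warning issued']
print(filter_headlines(sample, ['Mobile app']))
# ['Breaking: Market volatility continues', 'Weather warning issued']
```

Order is preserved, so the first occurrence of each headline keeps its position on the page.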

Method 3: Enhanced News Scraper

Add error handling and support for multiple news sources −

import requests
from bs4 import BeautifulSoup

def get_news_headlines(url, source_name):
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        
        soup = BeautifulSoup(response.text, 'html.parser')
        headlines = soup.find_all(['h1', 'h2', 'h3'])
        
        unwanted = [
            'BBC World News TV', 'BBC World Service Radio', 
            'News daily newsletter', 'Mobile app', 'Get in touch',
            'Subscribe', 'Follow us', 'Contact'
        ]
        
        clean_headlines = []
        for headline in headlines[:10]:  # Limit to first 10
            text = headline.text.strip()
            if text and len(text) > 10 and text not in unwanted:
                clean_headlines.append(text)
        
        return clean_headlines[:5]  # Return top 5 headlines
    
    except requests.RequestException as e:
        print(f"Error fetching {source_name}: {e}")
        return []

# Get news from BBC
bbc_headlines = get_news_headlines('https://www.bbc.com/news', 'BBC News')

print("Top BBC News Headlines:")
print("=" * 30)
for i, headline in enumerate(bbc_headlines, 1):
    print(f"{i}. {headline}")

Output

Top BBC News Headlines:
==============================
1. Global summit addresses climate change
2. Economic indicators show recovery signs
3. Technology sector sees major investment
4. International trade agreements signed
5. Healthcare breakthrough announced
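Since get_news_headlines() accepts any URL, the same function can be pointed at several outlets. The per-source results can then be assembled into one combined report; build_report and the second source below are hypothetical sketches −

```python
def build_report(source_headlines):
    """Format a {source name: headline list} mapping as a plain-text report."""
    lines = []
    for source, headlines in source_headlines.items():
        lines.append(f"{source}:")
        for i, headline in enumerate(headlines, 1):
            lines.append(f"  {i}. {headline}")
        lines.append("")
    return "\n".join(lines)

# In practice each list would come from get_news_headlines(url, name)
report = build_report({
    'BBC News': ['Global summit addresses climate change',
                 'Economic indicators show recovery signs'],
    'Example Wire': ['Local election results announced'],
})
print(report)
```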

Key Considerations

Rate Limiting − Add delays between requests to avoid overwhelming servers

Error Handling − Use try-except blocks for network issues

Legal Compliance − Check website terms of service before scraping

Data Storage − Store results in files or databases for later use
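These recommendations can be combined into a short collection loop. save_headlines is a hypothetical helper that appends each day's results to a text file; the placeholder headline stands in for the output of get_news_headlines() so the sketch runs offline −

```python
import os
import tempfile
import time
from datetime import date

def save_headlines(headlines, path):
    """Append today's headlines to a text file, one per line, under a date header."""
    with open(path, 'a', encoding='utf-8') as f:
        f.write(f"=== {date.today().isoformat()} ===\n")
        for headline in headlines:
            f.write(headline + "\n")

path = os.path.join(tempfile.gettempdir(), 'daily_news.txt')
sources = ['https://www.bbc.com/news']  # extend with more URLs as needed

for url in sources:
    # headlines = get_news_headlines(url, url)   # real fetch, from Method 3
    headlines = ['Placeholder headline']          # offline stand-in
    save_headlines(headlines, path)
    time.sleep(1)  # rate limiting: pause between requests
```

Running this once a day builds up a dated archive of headlines that can be searched or summarized later.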

Conclusion

Python's requests and BeautifulSoup libraries provide an effective way to scrape daily news headlines from websites. Always respect website terms of service and implement proper error handling for robust news gathering applications.

Updated on: 2026-03-27T09:33:43+05:30
