
Computer Vision - Feature Detection & Extraction
What is Feature Detection and Extraction?
Feature detection is the process of identifying specific points or patterns in an image that have distinctive characteristics.
Feature extraction involves describing these detected features in a way that can be used for various computer vision tasks.
Together, these steps allow computers to understand and interpret visual data more effectively.
Types of Features
We can categorize features into three different types as shown below −
- Edges: These are boundaries where the image brightness changes sharply. They represent the contours of objects within an image.
- Corners: Points where two or more edges meet. They are often found at the intersections of different shapes or objects.
- Blobs: Regions in the image that differ in properties, such as brightness or color, compared to surrounding regions.
Following are some frequently used techniques for feature detection and extraction −
- Edge Detection
- Corner Detection
- Blob Detection
- Feature Descriptors
Edge Detection
Edge detection identifies points in an image where the brightness changes sharply. This helps in highlighting the boundaries of objects within the image.
The common edge detection methods are as shown below −
- Sobel Operator: Computes the gradient of the image intensity at each pixel, resulting in edges being highlighted.
```python
import cv2
import numpy as np

# Load the image in grayscale
image = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)

# Apply the Sobel operator along x and y, then combine the gradients
sobel_x = cv2.Sobel(image, cv2.CV_64F, 1, 0, ksize=5)
sobel_y = cv2.Sobel(image, cv2.CV_64F, 0, 1, ksize=5)
sobel_combined = cv2.magnitude(sobel_x, sobel_y)
```
- Canny Edge Detector: A multi-stage algorithm that applies Gaussian smoothing, gradient computation, non-maximum suppression, and hysteresis thresholding to produce thin, well-connected edges.
```python
# Apply the Canny detector with lower and upper hysteresis thresholds
edges = cv2.Canny(image, 100, 200)
```
Corner Detection
Corner detection identifies points in an image where two or more edges intersect. Corners are useful for tracking and matching objects between different images.
The common corner detection methods are as shown below −
- Harris Corner Detector: Measures local intensity change in all directions using the gradient structure (second-moment) matrix; points where the change is large in every direction are marked as corners.
```python
# cornerHarris expects a float32 single-channel image
gray = np.float32(image)
corners = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
```
- Shi-Tomasi Corner Detector: A refinement of Harris that ranks corners by their minimum eigenvalue, available in OpenCV as cv2.goodFeaturesToTrack().
```python
# Detect up to 100 strong corners and mark them on the image
corners = cv2.goodFeaturesToTrack(image, 100, 0.01, 10)
for corner in corners:
   x, y = map(int, corner.ravel())
   cv2.circle(image, (x, y), 3, 255, -1)
```
Blob Detection
Blob detection identifies regions in an image that are significantly different in properties compared to surrounding regions. Blobs can represent objects, shapes, or areas of interest.
The common blob detection methods are as shown below −
- Laplacian of Gaussian (LoG): Detects blobs by finding local extrema of the Laplacian-of-Gaussian response across scales.
- Difference of Gaussians (DoG): Approximation of LoG, detects blobs by subtracting two blurred images.
- SimpleBlobDetector: A built-in method in OpenCV that detects blobs based on parameters like area, circularity, and convexity.
```python
# Set up the detector with default parameters and draw the detected blobs
params = cv2.SimpleBlobDetector_Params()
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(image)
im_with_keypoints = cv2.drawKeypoints(
   image, keypoints, np.array([]), (0, 0, 255),
   cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS
)
```
Feature Descriptors
Once features are detected, feature descriptors describe these features in a way that can be used for matching or recognition. Descriptors encode the appearance and shape of features into numerical vectors.
The common feature descriptors are as shown below −
- Scale-Invariant Feature Transform (SIFT): Detects and describes local features in images, invariant to scale and rotation.
```python
# Detect keypoints and compute SIFT descriptors in one call
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(image, None)
```
- Speeded-Up Robust Features (SURF): A faster alternative to SIFT. SURF is patented and only available in OpenCV builds that include the contrib modules with non-free algorithms enabled.
```python
# Requires opencv-contrib-python built with non-free modules
surf = cv2.xfeatures2d.SURF_create()
keypoints, descriptors = surf.detectAndCompute(image, None)
```
- Oriented FAST and Rotated BRIEF (ORB): A fast, patent-free alternative to SIFT and SURF that combines the FAST keypoint detector with a rotation-aware BRIEF binary descriptor.
```python
# Detect keypoints and compute binary ORB descriptors
orb = cv2.ORB_create()
keypoints, descriptors = orb.detectAndCompute(image, None)
```
Applications of Feature Detection and Extraction
Feature detection and extraction are used in various computer vision applications, such as −
- Object Recognition: Identifying and classifying objects within an image.
- Image Matching: Finding similar images or matching features between different images.
- Motion Tracking: Tracking the movement of objects across a sequence of images.
- 3D Reconstruction: Building a 3D model from multiple 2D images.