
Found 10476 Articles for Python

973 Views
Using the Scharr operator, we can compute image gradients in the horizontal as well as the vertical direction using first-order derivatives. The gradients are computed for a grayscale image. You can apply the Scharr operation on an image using the method cv2.Scharr(). Syntax The following syntax is used to compute the image gradients using the Scharr derivative − cv2.Scharr(img, ddepth, xorder, yorder) Parameters img − The original input image ddepth − Desired depth of the output image. It has information about what kind of data is stored in the output image. We use cv2.CV_64F as ddepth. It is a 64-bit ... Read More
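A minimal sketch of the Scharr gradients described above, assuming a grayscale image at a hypothetical path "input.jpg":

```python
import cv2

# Load a grayscale image (hypothetical file name).
img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# Horizontal (xorder=1, yorder=0) and vertical (xorder=0, yorder=1) gradients.
# cv2.CV_64F keeps the negative gradient values an 8-bit image would clip.
scharr_x = cv2.Scharr(img, cv2.CV_64F, 1, 0)
scharr_y = cv2.Scharr(img, cv2.CV_64F, 0, 1)

# Convert back to 8-bit for display.
cv2.imshow("Scharr X", cv2.convertScaleAbs(scharr_x))
cv2.imshow("Scharr Y", cv2.convertScaleAbs(scharr_y))
cv2.waitKey(0)
cv2.destroyAllWindows()
```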

5K+ Views
To draw polylines on an image, we use the method cv2.polylines(). We can draw open or closed polylines on the image. The first and last points are not connected while drawing an open polyline. Syntax The syntax of cv2.polylines() is as follows − cv2.polylines(src, [pts], isClosed, color, thickness) Parameters src − It's the input image on which the polylines are to be drawn. pts − List of arrays of points. isClosed − Set isClosed=True to draw a closed polyline, for an open polyline set isClosed=False. color − It is the color of the line. thickness − Its thickness ... Read More
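A minimal sketch of cv2.polylines(), drawn on a blank canvas with illustrative point coordinates rather than a loaded image:

```python
import cv2
import numpy as np

# Blank black canvas used as the drawing surface.
img = np.zeros((400, 400, 3), dtype=np.uint8)

# Polyline vertices as an int32 array of (x, y) coordinates.
pts = np.array([[50, 50], [300, 80], [350, 300], [100, 350]], dtype=np.int32)

# Closed polyline in green; an open one would use isClosed=False.
cv2.polylines(img, [pts], isClosed=True, color=(0, 255, 0), thickness=3)

cv2.imshow("Polylines", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```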

2K+ Views
Using the Sobel operator, we can compute image gradients in the horizontal as well as the vertical direction. The gradients are computed for a grayscale image. The Laplacian operator computes the gradients using second-order derivatives. Syntax The following syntaxes are used to compute the image gradients using the Sobel and Laplacian derivatives − cv2.Sobel(img, ddepth, xorder, yorder, ksize) cv2.Laplacian(img, ddepth) Parameters img − The original input image. ddepth − Desired depth of the output image. It has information about what kind of data is stored in the output image. We use cv2.CV_64F as ddepth. It is a 64-bit floating-point ... Read More
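A minimal sketch of the Sobel and Laplacian gradients, assuming a grayscale image at a hypothetical path "input.jpg" and an illustrative kernel size of 3:

```python
import cv2

# Load a grayscale image (hypothetical file name).
img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# First-order gradients with Sobel in the x and y directions.
sobel_x = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
sobel_y = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)

# Second-order gradient with the Laplacian operator.
laplacian = cv2.Laplacian(img, cv2.CV_64F)

cv2.imshow("Sobel X", cv2.convertScaleAbs(sobel_x))
cv2.imshow("Sobel Y", cv2.convertScaleAbs(sobel_y))
cv2.imshow("Laplacian", cv2.convertScaleAbs(laplacian))
cv2.waitKey(0)
cv2.destroyAllWindows()
```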

949 Views
The morphological gradient is computed as the difference between the dilation and erosion of an image. We use the cv2.morphologyEx() method to compute the morphological gradient. The morphological gradient is used in segmentation, edge detection and to find the outline of an object. Syntax Here is the syntax used for this method − cv2.morphologyEx(img, op, kernel) Where, img − The original input image. op − Type of morphological operation. We use cv2.MORPH_GRADIENT. kernel − The kernel. We can define the kernel as a numpy matrix of all ones of dtype uint8. Steps You can use the following steps to ... Read More
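A minimal sketch of the morphological gradient, assuming a grayscale image at a hypothetical path "input.jpg" and an illustrative 5×5 kernel:

```python
import cv2
import numpy as np

# Load a grayscale image (hypothetical file name).
img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# Kernel of all ones with dtype uint8, as described above.
kernel = np.ones((5, 5), dtype=np.uint8)

# Morphological gradient = dilation(img) - erosion(img).
gradient = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, kernel)

cv2.imshow("Morphological Gradient", gradient)
cv2.waitKey(0)
cv2.destroyAllWindows()
```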

2K+ Views
In this tutorial, we will see how to apply two different low-pass filters to smooth (remove noise from) an image. The two filters are filter2D and boxFilter. These filters are 2D filters in the spatial domain. Applying 2D filters to images is also known as the "2D convolution operation". These filters are commonly referred to as averaging filters. The main disadvantage of these filters is that they also smooth the edges in the image. If you don't want to smooth the edges, you can apply a "bilateral filter". A bilateral filter operation preserves the edges. Syntax Following are the syntaxes of filter2D and ... Read More
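A minimal sketch of both averaging filters, assuming a color image at a hypothetical path "input.jpg" and an illustrative 5×5 window:

```python
import cv2
import numpy as np

# Load a color image (hypothetical file name).
img = cv2.imread("input.jpg")

# 5x5 averaging kernel for filter2D (each weight is 1/25).
kernel = np.ones((5, 5), dtype=np.float32) / 25

# ddepth=-1 keeps the output depth the same as the input.
smoothed_filter2d = cv2.filter2D(img, -1, kernel)

# boxFilter averages over a ksize window; normalize=True gives the mean.
smoothed_box = cv2.boxFilter(img, -1, (5, 5), normalize=True)

cv2.imshow("filter2D", smoothed_filter2d)
cv2.imshow("boxFilter", smoothed_box)
cv2.waitKey(0)
cv2.destroyAllWindows()
```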

6K+ Views
In Perspective Transformation, the straight lines remain straight even after the transformation. To apply a perspective transformation, we need a 3×3 perspective transformation matrix. We need four points on the input image and corresponding four points on the output image. We apply the cv2.getPerspectiveTransform() method to find the transformation matrix. Its syntax is as follows − M = cv2.getPerspectiveTransform(pts1, pts2) where, pts1 − An array of four points on the input image and pts2 − An array of corresponding four points on the output image. The Perspective Transformation matrix M is a numpy array. We pass M ... Read More
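A minimal sketch of the perspective transformation, assuming an image at a hypothetical path "input.jpg"; the corner coordinates and the 300×300 output size are purely illustrative:

```python
import cv2
import numpy as np

# Load a color image (hypothetical file name).
img = cv2.imread("input.jpg")

# Four points on the input image and where they should map in the output.
pts1 = np.float32([[56, 65], [368, 52], [28, 387], [389, 390]])
pts2 = np.float32([[0, 0], [300, 0], [0, 300], [300, 300]])

# 3x3 perspective transformation matrix.
M = cv2.getPerspectiveTransform(pts1, pts2)

# Warp the image into a 300x300 output using M.
warped = cv2.warpPerspective(img, M, (300, 300))

cv2.imshow("Warped", warped)
cv2.waitKey(0)
cv2.destroyAllWindows()
```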

4K+ Views
A color (RGB) image has three channels: Red, Green and Blue. A color image in OpenCV has a shape in the [H, W, C] format, where H, W, and C are the image height, width and number of channels. All three channels have values in the range 0 to 255. An HLS image also has three channels: Hue, Lightness and Saturation. In OpenCV, the values of the Hue channel range from 0 to 179, whereas the Lightness and Saturation channels range from 0 to 255. In OpenCV, a color image loaded using the cv2.imread() function is always in BGR format. To ... Read More
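A minimal sketch of the BGR-to-HLS conversion, assuming a color image at a hypothetical path "input.jpg":

```python
import cv2

# cv2.imread() loads the image in BGR channel order (hypothetical file name).
img = cv2.imread("input.jpg")

# Convert BGR to HLS; Hue ranges 0-179, Lightness and Saturation 0-255.
hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)

# Split the channels if they are needed individually.
h, l, s = cv2.split(hls)

cv2.imshow("HLS", hls)
cv2.waitKey(0)
cv2.destroyAllWindows()
```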

3K+ Views
OpenCV provides the function cv2.arrowedLine() to draw an arrowed line on an image. This function takes different arguments to draw the line. See the syntax below. cv2.arrowedLine(img, start, end, color, thickness, line_type, shift, tipLength) img − The input image on which the line is to be drawn. start − Start coordinate of the line in (width, height) format. end − End coordinate of the line in (width, height) format. color − Color of the line. For a red color in BGR format we pass (0, 0, 255). thickness − Thickness of the line in pixels. line_type − Type ... Read More
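A minimal sketch of cv2.arrowedLine(), drawn on a blank canvas with illustrative coordinates:

```python
import cv2
import numpy as np

# Blank black canvas used as the drawing surface.
img = np.zeros((400, 400, 3), dtype=np.uint8)

# Red arrow (BGR (0, 0, 255)) from (50, 200) to (350, 200), 3 pixels thick.
cv2.arrowedLine(img, (50, 200), (350, 200), (0, 0, 255), thickness=3,
                line_type=cv2.LINE_AA, shift=0, tipLength=0.1)

cv2.imshow("Arrowed Line", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```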

4K+ Views
In an Affine Transformation, all parallel lines in the original image remain parallel in the output image. To apply an affine transformation to an image, we need three points on the input image and the corresponding points on the output image. So first, we define these points and pass them to the function cv2.getAffineTransform(). It will create a 2×3 matrix, which we term the transformation matrix M. We can find the transformation matrix M using the following syntax − M = cv2.getAffineTransform(pts1, pts2) Where pts1 is an array of three points on the input image and pts2 is an array of ... Read More
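A minimal sketch of the affine transformation, assuming an image at a hypothetical path "input.jpg"; the three point pairs are purely illustrative:

```python
import cv2
import numpy as np

# Load a color image (hypothetical file name).
img = cv2.imread("input.jpg")
rows, cols = img.shape[:2]

# Three points on the input image and their target positions in the output.
pts1 = np.float32([[50, 50], [200, 50], [50, 200]])
pts2 = np.float32([[10, 100], [200, 50], [100, 250]])

# 2x3 affine transformation matrix.
M = cv2.getAffineTransform(pts1, pts2)

# Apply the transformation; the output keeps the input size.
result = cv2.warpAffine(img, M, (cols, rows))

cv2.imshow("Affine", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
```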

3K+ Views
A very important application of the bitwise AND operation in computer vision or image processing is creating image masks. We can also use the operation to add watermarks to an image. The pixel values of an image are represented as a numpy ndarray. The pixel values are stored as 8-bit unsigned integers (uint8) in the range 0 to 255. The bitwise AND operation between two images is performed on the binary representation of the corresponding pixel values of the two images. Given below is the syntax to perform the bitwise AND operation on two images − cv2.bitwise_and(img1, img2, mask=None) ... Read More
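A minimal sketch of masking with cv2.bitwise_and(), assuming a color image at a hypothetical path "input.jpg"; the circular mask is purely illustrative:

```python
import cv2
import numpy as np

# Load a color image (hypothetical file name).
img = cv2.imread("input.jpg")
h, w = img.shape[:2]

# Build a simple mask: a white filled circle on a black background.
mask = np.zeros((h, w), dtype=np.uint8)
cv2.circle(mask, (w // 2, h // 2), min(h, w) // 3, 255, -1)

# Keep only the pixels inside the circle; everything else becomes black.
masked = cv2.bitwise_and(img, img, mask=mask)

cv2.imshow("Masked", masked)
cv2.waitKey(0)
cv2.destroyAllWindows()
```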