Tutorialspoint


Computer Vision: Python OCR & Object Detection Quick Starter

Abhilash Nelson

4.1


Quick Starter for Optical Character Recognition, Image Recognition, Object Detection and Object Recognition using Python

Updated on Apr, 2024

Language - English


Category - Development, Data Science, Computer Vision

Lectures - 50

Resources - 5

Duration - 4.5 hours



30-day Money-Back Guarantee


Course Description

Hi There!

Welcome to my new course, 'Optical Character Recognition and Object Recognition Quick Start with Python'. This is the third course in my Computer Vision series.

Image Recognition, Object Detection, Object Recognition and also Optical Character Recognition are among the most used applications of Computer Vision. 

Using these techniques, the computer will be able to recognize and classify either the whole image or multiple objects inside a single image, predicting the class of each object along with a confidence score. Using OCR, it can also recognize text in images and convert it to a machine-readable format such as plain text or a document.

Object Detection and Object Recognition are widely used in applications ranging from simple ones to complex systems like self-driving cars.

This course will be a quick starter for people who want to dive into Optical Character Recognition, Image Recognition and Object Detection using Python, without having to deal with all the complexities and mathematics associated with a typical Deep Learning process.

Let's now see the list of interesting topics that are included in this course. 

At first we will have an introductory theory session about Optical Character Recognition technology. 

After that, we are ready to prepare our computer for Python coding by downloading and installing the Anaconda package, and we will check that everything is installed fine.
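Once Anaconda is installed, a quick check from the Python prompt confirms that the interpreter running is the one Anaconda provides:

```python
# Sanity check after installing Anaconda: confirm which Python is running.
import sys

print(sys.version)       # the interpreter version string
print(sys.executable)    # this path should point inside the Anaconda installation
```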

Most of you may not come from a Python-based programming background. The next few sessions and examples will help you acquire the basic Python programming skills needed to proceed with the sessions in this course. The topics include Python assignment, flow control, functions and data structures.
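As a small taste of those basics, here is a sketch that combines all four topics (assignment, flow control, a function and a dictionary data structure) in one short example:

```python
# Python basics in one example: assignment, flow control (loops),
# a function definition, and dictionaries/lists as data structures.
def count_words(sentences):
    """Count how often each word appears across a list of sentences."""
    counts = {}                          # a dictionary, one of Python's core data structures
    for sentence in sentences:           # flow control: a for loop
        for word in sentence.lower().split():
            counts[word] = counts.get(word, 0) + 1
    return counts

sample = ["OCR reads text", "Python reads images"]
print(count_words(sample))
```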

Then we will install the dependencies and libraries required to do Optical Character Recognition. We are using the Tesseract library for the OCR. First we will install the library itself and then its Python bindings. We will also install OpenCV, the Open Source Computer Vision library for Python.

We will also install the Pillow library, the modern fork of the Python Imaging Library (PIL). Then we will have an introduction to the steps involved in Optical Character Recognition, and later proceed with coding and implementing the OCR program. We will use a few example images to test the character recognition and verify the results.

Then we will have an introduction to Convolutional Neural Networks, which we will be using to do the Image Recognition. Here we will classify a full image based on the single primary object in it.

We will then proceed with installing the Keras library, which we will be using to do the Image Recognition. We will be using the built-in, pre-trained models included in Keras. The base Python code is also provided in the Keras documentation.

At first we will be using the popular pre-trained model architecture called VGGNet. We will have an introductory session about the architecture of VGGNet, then proceed with using the pre-trained VGG16 model included in Keras to do Image Recognition and classification. We will try a few sample images to check the predictions, and then move on to the deeper VGG19 model included in Keras.
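In Keras, the pre-trained VGG16 workflow is only a few lines. The sketch below follows the pattern from the Keras applications documentation; the heavy imports live inside the function so nothing is downloaded until it is actually called (the first call fetches the ImageNet weights, roughly 500 MB):

```python
import numpy as np

def recognise(path, top=3):
    """Classify an image with the pre-trained VGG16 model from Keras.
    Returns a list of (class_id, label, probability) tuples."""
    from tensorflow.keras.applications.vgg16 import (
        VGG16, preprocess_input, decode_predictions)
    from tensorflow.keras.preprocessing import image

    model = VGG16(weights="imagenet")
    img = image.load_img(path, target_size=(224, 224))   # VGG16 expects 224x224
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return decode_predictions(model.predict(x), top=top)[0]
```

VGG19 is essentially a drop-in replacement: import `VGG19` from `tensorflow.keras.applications.vgg19` instead.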

Then we will try the ResNet pre-trained model included with the Keras library. We will include the model in the code and then try a few sample images to check the predictions.

And after that we will try the Inception pre-trained model, again including it in the code and checking the predictions on a few sample images. Then we will go ahead with the Xception pre-trained model in the same way.
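Because the Keras application models share the same interface, swapping architectures is mostly a matter of changing the import and the expected input size (ResNet50 takes 224x224 images; InceptionV3 and Xception default to 299x299). A sketch of a small model registry:

```python
import importlib

# Module path, class name and default input size for each architecture,
# per the tensorflow.keras.applications documentation.
MODELS = {
    "resnet50":  ("tensorflow.keras.applications.resnet50", "ResNet50", 224),
    "inception": ("tensorflow.keras.applications.inception_v3", "InceptionV3", 299),
    "xception":  ("tensorflow.keras.applications.xception", "Xception", 299),
}

def load_model(name):
    """Import the requested architecture, returning the instantiated model,
    its matching preprocess_input function, and its expected input size."""
    module_path, class_name, size = MODELS[name]
    module = importlib.import_module(module_path)
    model = getattr(module, class_name)(weights="imagenet")
    return model, module.preprocess_input, size
```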

Those were Image Recognition pre-trained models, which can only label and classify a complete image based on the primary object in it. Now we will proceed with Object Recognition, in which we can detect and label multiple objects in a single image.

At first we will have an introduction to the MobileNet-SSD pre-trained model, a single-shot detector capable of detecting multiple objects in a scene. We will also have a quick discussion about the dataset that was used to train this model.

Later we will implement the MobileNet-SSD pre-trained model in our code and get the predictions and bounding box coordinates for every object detected. We will draw the bounding box around each object in the image and write the label along with the confidence value.

Then we will go ahead with object detection from a live video. We will stream the real-time video from the computer's webcam and try to detect objects in it, drawing a rectangle around each detected object along with the label and confidence.

In the next session, we will go ahead with object detection from a pre-saved video. We will stream the saved video from our folder and try to detect objects in it, again drawing a rectangle around each detected object along with the label and confidence.

Later we will be going ahead with the Mask-RCNN pre-trained model. With the previous model we were only able to get a bounding box around the object, but with Mask-RCNN we can get both the box coordinates and a mask over the exact shape of each detected object. We will have an introduction to this model and its details.

Later we will implement the Mask-RCNN pre-trained model in our code. As the first step, we will get the predictions and bounding box coordinates for every object detected, draw the bounding boxes around the objects in the image, and write the labels along with the confidence values.

Later we will use the mask returned for each predicted object. We will process that data and use it to draw translucent, multi-coloured masks over each detected object, writing the label along with the confidence value.
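Drawing a translucent coloured mask is plain pixel blending: given a boolean mask the size of the frame, blend the chosen colour into only the masked pixels. A sketch:

```python
import numpy as np

def apply_mask(frame, mask, colour, alpha=0.5):
    """Blend a translucent colour over the pixels where mask is True.
    frame: HxWx3 uint8 image; mask: HxW boolean array; colour: (B, G, R)."""
    overlay = frame.copy()
    overlay[mask] = ((1 - alpha) * frame[mask]
                     + alpha * np.array(colour)).astype(np.uint8)
    return overlay
```

Giving each detected object its own `colour` produces the multi-coloured effect described above.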

Then we will go ahead with object detection from live video using Mask-RCNN. We will stream the real-time video from the computer's webcam and try to detect objects in it, drawing the mask over the perimeter of each detected object along with the label and confidence.

And as we did for the previous model, we will go ahead with object detection from a pre-saved video using Mask-RCNN, streaming the saved video from our folder and drawing coloured masks over each detected object along with the label and confidence.

Mask-RCNN is very accurate and has a vast class list, but it is very slow when processing images on low-power, CPU-based computers. MobileNet-SSD is fast but less accurate and supports fewer classes. We need a blend of speed and accuracy, which takes us to Object Detection and Recognition using the YOLO pre-trained model. We will have an overview of the YOLO model in the next session and then implement YOLO object detection on a single image.

Using that as the base, we will try the YOLO model for object detection from a real-time webcam video and check the performance. Later we will use it for object recognition from a pre-saved video file.

To further improve the frame rate, we will use the Tiny YOLO model, a lightweight version of the full YOLO model. We will first use Tiny YOLO on the pre-saved video and analyse both accuracy and speed, and then try the same on a real-time webcam video to see the difference in performance compared to the full YOLO model.

That's all about the topics currently included in this quick course. The code, images and libraries used in this course have been uploaded and shared in a folder. I will include the link to download them in the last session or the resources section of this course. You are free to use the code in your projects with no questions asked.

Also, after completing this course you will be provided with a course completion certificate, which will add value to your portfolio.

So that's all for now; see you soon in the classroom. Happy learning and have a great time.

Goals

What will you learn in this course:

  • Optical Character Recognition with the Tesseract library, Image Recognition using Keras, and Object Recognition using MobileNet-SSD, Mask-RCNN, YOLO and Tiny YOLO on static images, real-time video and pre-recorded videos, all in Python

Prerequisites

What are the prerequisites for this course?

  • A computer with a decent configuration (preferably Windows) and enthusiasm to dive into the world of OCR, Image and Object Recognition using Python

Curriculum

Check out the detailed breakdown of what’s inside the course

Course Introduction and Table of Contents
2 Lectures
  • Download Source Code And All Sample Files From Here
  • Course Introduction and Table of Contents 09:40
Introduction to OCR Concepts and Libraries
1 Lecture
Setting up Environment - Anaconda
1 Lecture
Python Basics (Optional)
4 Lectures
Tesseract OCR Setup
2 Lectures
OpenCV Setup
1 Lecture
Tesseract Image OCR Implementation
2 Lectures
Cv2.imshow() Not Responding / OCR Text not printing Issue Fix
2 Lectures
Introduction to CNN - Convolutional Neural Networks - Theory Session
1 Lecture
Installing Additional Dependencies for CNN
3 Lectures
Introduction to VGGNet Architecture
1 Lecture
Image Recognition using Pre-Trained VGGNet16 Model
3 Lectures
Image Recognition using Pre-Trained VGGNet19 Model
1 Lecture
Image Recognition using Pre-Trained ResNet Model
1 Lecture
Image Recognition using Pre-Trained Inception Model
1 Lecture
Image Recognition using Pre-Trained Xception Model
1 Lecture
Introduction to MobileNet-SSD Pretrained Model
1 Lecture
Mobilenet SSD Object Detection
2 Lectures
Mobilenet SSD Realtime Video
1 Lecture
Mobilenet SSD Pre-saved Video
1 Lecture
Mask RCNN Pre-trained model Introduction
1 Lecture
MaskRCNN Bounding Box Implementation
2 Lectures
MaskRCNN Object Mask Implementation
2 Lectures
MaskRCNN Realtime Video
2 Lectures
MaskRCNN Pre-saved Video
1 Lecture
YOLO Pre-trained Model Introduction
1 Lecture
YOLO Implementation
2 Lectures
YOLO Real-time Video
1 Lecture
YOLO Pre-saved Video
1 Lecture
Tiny YOLO Pre-saved Video
1 Lecture
Tiny YOLO Real-time Video
1 Lecture
YOLOv4 - Step 1 - Updating OpenCV version
1 Lecture
YOLOv4 - Step 2 - Object Recognition Implementation
1 Lecture

Instructor Details

Abhilash Nelson

Abhilash Nelson

I am a pioneering, talented and security-oriented Android/iOS mobile and PHP/Python web application developer with more than eight years of overall IT experience, spanning the design, implementation, integration, testing and support of impactful web and mobile applications.

I hold a postgraduate Master's degree in Computer Science and Engineering.

My experience with PHP/Python programming is an added advantage for server-based Android and iOS client applications.

Course Certificate

Use your certificate to make a career change or to advance in your current career.



Feedbacks

  • Dr Aravinda C.V: "nice"
  • Apoorv garg: "The content is not available"
