What is Active Learning?


Active learning is an iterative type of supervised learning suited to situations where data are abundant but class labels are scarce or costly to obtain. The learning algorithm is active in that it can purposefully query a user (e.g., a human oracle) for labels. The number of tuples needed to learn a concept this way is often much smaller than the number required in typical supervised learning.

To keep costs down, the active learner aims to achieve high accuracy using as few labeled examples as possible. Let D be all of the data under consideration. Several approaches exist for performing active learning on D.

Suppose that a small subset of D is class-labeled; denote this set by L. U denotes the set of unlabeled data in D, often referred to as a pool of unlabeled data. An active learner begins with L as the initial training set. It then uses a querying function to carefully select one or more data samples from U and requests labels for them from an oracle (e.g., a human annotator).

The newly labeled samples are added to L, which the learner then uses to train on in a standard supervised way. The process repeats. The goal of active learning is to achieve high accuracy using as few labeled tuples as possible. Active learning algorithms are typically evaluated with learning curves, which plot accuracy as a function of the number of instances queried. A minimal sketch of this pool-based loop appears below.
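The following sketch illustrates the pool-based loop just described, assuming a scikit-learn style classifier. The toy dataset, seed size, number of rounds, and least-confidence query rule are illustrative assumptions, not part of any standard API; the oracle's answer is simulated by reading the true label from y.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)  # toy data (illustrative)

rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X), size=10, replace=False))   # L: small labeled seed
unlabeled = [i for i in range(len(X)) if i not in labeled]   # U: pool of unlabeled data

model = LogisticRegression(max_iter=1000)
for _ in range(20):                        # querying rounds
    model.fit(X[labeled], y[labeled])      # train on the current labeled set L
    probs = model.predict_proba(X[unlabeled])
    # Query the pool sample the model is least confident about
    query = unlabeled[int(np.argmin(probs.max(axis=1)))]
    labeled.append(query)                  # the "oracle" label is read from y here
    unlabeled.remove(query)
```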

Much active learning research focuses on how to choose the data tuples to be queried, and several frameworks have been proposed for this. Uncertainty sampling is the most common, where the active learner chooses to query the tuples about which it is least certain how to label; three common ways of scoring that uncertainty are sketched below.
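For illustration, the following helper functions show three uncertainty measures often used in uncertainty sampling. The function names are hypothetical, and `probs` is assumed to be an (n_samples, n_classes) array of predicted class probabilities over U.

```python
import numpy as np

def least_confidence(probs):
    # 1 - P(most likely class): higher score = less certain
    return 1.0 - probs.max(axis=1)

def smallest_margin(probs):
    # Gap between the two most probable classes: smaller gap = less certain
    part = np.sort(probs, axis=1)
    return part[:, -1] - part[:, -2]

def prediction_entropy(probs):
    # Shannon entropy of the predicted distribution: higher = less certain
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)
```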

Other methods work to reduce the version space, that is, the subset of all hypotheses that are consistent with the observed training tuples. Yet another approach follows a decision-theoretic framework that estimates expected error reduction.

This approach chooses the tuple that would result in the greatest reduction in the total number of incorrect predictions, for example by reducing the expected entropy over U. Because each candidate query requires retraining the model for every possible label, this approach tends to be much more computationally expensive, as the sketch below makes visible.
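As a rough, non-authoritative sketch of the expected-entropy idea (the variable and function names are hypothetical), each candidate query is scored by retraining on L plus each hypothesized label and averaging the resulting entropy over U, weighted by the model's current P(y | x):

```python
import numpy as np
from sklearn.base import clone

def expected_pool_entropy(model, X_lab, y_lab, X_pool, cand):
    """Expected total entropy over U if pool index `cand` were queried."""
    probs = model.predict_proba(X_pool[cand].reshape(1, -1))[0]
    expected = 0.0
    for p, label in zip(probs, model.classes_):    # weight by current P(y | x)
        m = clone(model)
        m.fit(np.vstack([X_lab, X_pool[cand]]),    # retrain on L plus the
              np.append(y_lab, label))             # hypothesized label
        pool_probs = m.predict_proba(X_pool)
        entropy = -(pool_probs * np.log(pool_probs + 1e-12)).sum(axis=1)
        expected += p * entropy.sum()
    return expected  # the learner queries the candidate minimizing this
```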

In contrast, the aim of transfer learning is to extract knowledge from one or more source tasks and apply that knowledge to a target task. Traditional learning methods construct a new classifier for each new classification task, based on the available class-labeled training and test data.

Transfer learning algorithms apply knowledge about the source tasks when constructing a classifier for a new (target) task. Building the resulting classifier requires fewer training data and less training time. Traditional learning algorithms assume that the training data and test data are drawn from the same distribution and the same feature space. Hence, if the distribution changes, such methods need to rebuild their models from scratch, a cost the sketch below suggests transfer learning can reduce.
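As a minimal sketch of this idea (the toy datasets and the warm-started linear model are illustrative assumptions, not a prescribed method), a classifier fitted on a data-rich source task can serve as the starting point when fitting on a label-scarce target task:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

# Source task: plenty of labeled data (toy stand-in for a related domain)
X_src, y_src = make_classification(n_samples=2000, random_state=1)
# Target task: the same feature space, but only a few labeled examples
X_tgt, y_tgt = make_classification(n_samples=50, random_state=2)

clf = SGDClassifier(warm_start=True, random_state=0)
clf.fit(X_src, y_src)   # learn on the source task first
clf.fit(X_tgt, y_tgt)   # warm_start=True continues from the source solution,
                        # so fewer target examples and iterations are needed
```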
