What is Prototype-Based Clustering?

In prototype-based clustering, a cluster is a set of objects in which each object is closer to the prototype that represents its cluster than to the prototype of any other cluster. K-means is a simple prototype-based clustering algorithm that uses the centroid of the objects in a cluster as the prototype of that cluster.
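The centroid-as-prototype idea can be sketched in plain Python: assign each point to its nearest prototype, then recompute each prototype as the mean of its assigned points. The points and initial prototypes below are made-up illustrative data.

```python
import math

def assign(points, prototypes):
    """Assign each point to the cluster whose prototype is nearest."""
    labels = []
    for p in points:
        dists = [math.dist(p, c) for c in prototypes]
        labels.append(dists.index(min(dists)))
    return labels

def centroids(points, labels, k):
    """Recompute each prototype as the centroid (mean) of its cluster."""
    cents = []
    for j in range(k):
        members = [p for p, l in zip(points, labels) if l == j]
        cents.append(tuple(sum(c) / len(members) for c in zip(*members)))
    return cents

# Hypothetical 2-D points forming two well-separated groups
points = [(0, 0), (0, 1), (1, 0), (8, 8), (8, 9), (9, 8)]
protos = [(0, 0), (8, 8)]           # initial prototypes
for _ in range(5):                  # a few K-means-style iterations
    labels = assign(points, protos)
    protos = centroids(points, labels, 2)
```

After convergence, each prototype sits at the centroid of its group, which is exactly the "prototype" this definition refers to.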

There are several variations of prototype-based clustering, which are as follows −

  • Objects are allowed to belong to more than one cluster; an object belongs to each cluster with some weight. Such an approach addresses the fact that some objects are almost equally close to several cluster prototypes.

  • A cluster is modeled as a statistical distribution, i.e., objects are generated by a random process from a statistical distribution characterized by a number of statistical parameters, such as the mean and variance. This viewpoint generalizes the notion of a prototype and allows the use of well-established statistical techniques.

  • Clusters are constrained to have fixed relationships. These relationships are constraints that define neighborhood relations, such as the degree to which two clusters are neighbors of each other. Constraining the relationships among clusters can simplify the interpretation and visualization of the data.

Fuzzy c-means uses concepts from fuzzy logic and fuzzy set theory to propose a clustering scheme that is much like K-means, but which does not require a hard assignment of a point to a single cluster.
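A minimal sketch of the fuzzy membership step: instead of picking one nearest prototype, fuzzy c-means gives a point a weight in every cluster, computed from its distances to all prototypes (the standard membership formula with fuzzifier `m`). The point and prototypes below are illustrative.

```python
import math

def fuzzy_weights(point, prototypes, m=2):
    """Fuzzy c-means membership: weight of `point` in each cluster.
    Weights sum to 1; nearer prototypes receive larger weights."""
    d = [math.dist(point, c) for c in prototypes]
    if 0 in d:                      # point coincides with a prototype
        return [1.0 if x == 0 else 0.0 for x in d]
    inv = [x ** (-2 / (m - 1)) for x in d]
    s = sum(inv)
    return [x / s for x in inv]

# A point exactly between two prototypes belongs to both with weight 0.5
w = fuzzy_weights((1, 0), [(0, 0), (2, 0)])
```

A point midway between two prototypes gets equal weights, which is precisely the soft assignment that K-means cannot express.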

Mixture model clustering takes the view that a set of clusters can be modeled as a mixture of distributions, one for each cluster. Self-Organizing Maps (SOM) perform clustering within a framework that requires the clusters to have a pre-specified relationship to one another, such as a two-dimensional grid structure.
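To make the mixture-model view concrete, here is a small sketch for a one-dimensional Gaussian mixture: given component weights, means, and variances (the illustrative numbers below are assumptions), the posterior probability that a point came from each component plays the role of a soft cluster membership.

```python
import math

def gauss_pdf(x, mean, var):
    """Density of a 1-D normal distribution."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def responsibilities(x, components):
    """Posterior probability that x was generated by each
    (weight, mean, variance) mixture component."""
    likes = [w * gauss_pdf(x, mu, var) for w, mu, var in components]
    total = sum(likes)
    return [l / total for l in likes]

# Hypothetical two-component mixture with equal weights
components = [(0.5, 0.0, 1.0), (0.5, 5.0, 1.0)]
r = responsibilities(1.0, components)   # a point nearer the first component
```

In a full EM procedure these responsibilities are the E-step; the M-step re-estimates each component's parameters from the weighted points, just as K-means recomputes centroids.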

Fuzzy Clustering − If the data objects are distributed in well-separated groups, then a crisp classification of the objects into disjoint clusters seems like an ideal approach. However, in many cases the objects in a data set cannot be partitioned into well-separated clusters, and there is a certain arbitrariness in assigning an object to a particular cluster.

Consider an object that lies near the boundary of two clusters, but is slightly closer to one of them. In such cases, it can be more suitable to assign to each object and each cluster a weight that denotes the degree to which the object belongs to the cluster.

Probabilistic approaches can also supply such weights. While probabilistic approaches are useful in many situations, there are times when it is difficult to determine an appropriate statistical model. In such cases, non-probabilistic clustering techniques are needed to provide similar capabilities.