What are auto-associative neural networks?


Autoencoder networks, also known as auto-associative neural networks, are a type of neural network trained to reproduce their input patterns at the output layer. They have achieved notable success in domains such as pattern recognition, bioinformatics, speech recognition, and signal validation. By modeling the process of association, these networks provide an effective tool for representing data and reducing its dimensionality.

Auto-associative neural networks are trained on input patterns and their corresponding outputs, represented as input and output vectors. Through training, the network learns to store patterns and to retrieve them even when the inputs are distorted or noisy.

Understanding Auto-Associative Neural Networks

Auto-associative neural networks are a special kind of neural network in which the input and output vectors are identical. Their main goal is to learn the associations between input patterns and their corresponding outputs. This is achieved through training, during which the network learns to store patterns and retrieve them even from distorted or noisy inputs.


Architecture of Auto-Associative Neural Networks

The architecture of an auto-associative neural network is a five-layer feed-forward perceptron. It can be viewed as two three-layer networks connected in series: the first network compresses the input vector into a lower-dimensional representation, and the second works in the opposite direction, reconstructing the original input from that compressed representation. The key component of this architecture is the bottleneck layer, which forces the compression and gives the network its powerful feature-extraction capability.
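The five-layer structure can be sketched as a simple forward pass in NumPy. The layer sizes below (an 8-dimensional input, 4-unit mapping layers, and a 2-unit bottleneck) are illustrative choices, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes: input, mapping, bottleneck, de-mapping, output.
sizes = [8, 4, 2, 4, 8]

# One weight matrix and bias vector per connection between layers.
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Propagate x through the five-layer network: tanh hidden units,
    linear output layer so the reconstruction is unbounded."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(h @ W + b)
    return h @ weights[-1] + biases[-1]

x = rng.normal(size=8)
x_hat = forward(x)
print(x_hat.shape)  # the reconstruction has the same dimension as the input
```

Training would then minimize the reconstruction error between `x` and `x_hat`, forcing the 2-unit bottleneck to learn a compressed representation of the input.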


Training Auto-Associative Neural Networks

The training process of auto-associative neural networks involves setting the weights between the units based on the correlation between the input and output vectors. The Hebb rule, a classic learning rule, is commonly used to determine the weights: when two vectors are positively correlated, the connection strength between them is increased; when they are negatively correlated, it is decreased.

The weight matrix, denoted as W, is calculated using the formula:

$$\mathrm{W=\sum_{p=1}^{P}s^{T}(p)\,s(p)}$$

Where P is the number of stored patterns, s(p) is the p-th n-dimensional prototype pattern (written as a row vector), and the superscript T denotes the transpose, so each term of the sum is an n × n outer product.
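The Hebbian weight matrix can be computed directly as a sum of outer products. The bipolar example patterns below are illustrative; zeroing the diagonal, a common refinement, removes each unit's self-connection:

```python
import numpy as np

def hebb_weights(patterns, zero_diagonal=True):
    """Build the auto-associative weight matrix W = sum_p s(p)^T s(p)
    from bipolar (+1/-1) prototype patterns given as rows of `patterns`.
    Zeroing the diagonal removes self-connections, which helps recall."""
    S = np.asarray(patterns, dtype=float)
    W = S.T @ S  # equivalent to summing the outer products s^T(p) s(p)
    if zero_diagonal:
        np.fill_diagonal(W, 0.0)
    return W

# Two mutually orthogonal bipolar patterns of dimension 4 (example values).
patterns = [[1, 1, -1, -1],
            [1, -1, 1, -1]]
W = hebb_weights(patterns)
print(W)  # a symmetric 4x4 matrix with a zero diagonal
```

Because the Hebb rule only adds correlations, `W` is always symmetric, and each stored pattern contributes one rank-one term to the sum.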

Testing and Inference with Auto-Associative Neural Networks

To determine whether an input is "known" or "unknown" to the network, a testing or inference algorithm is used. The algorithm consists of the following steps:

  • Use the weights generated during the training phase.

  • Set the activation of the input units equal to the input vector.

  • Calculate the net input to each output unit using the formula

  • $$\mathrm{y\_in_{j}=\sum_{i}x_{i}w_{ij}}$$

  • Apply an activation function to the net input to calculate the output: the output is +1 if the net input is greater than 0, and −1 otherwise.

If the output unit produces the same pattern as one stored in the network, the input vector is recognized as "known."
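The steps above can be sketched for a single stored pattern. The prototype vector here is an illustrative example; one-step recall thresholds the net input at zero, exactly as the algorithm describes:

```python
import numpy as np

# A single bipolar prototype pattern stored via the Hebb rule.
s = np.array([1.0, 1.0, 1.0, -1.0])
W = np.outer(s, s)
np.fill_diagonal(W, 0.0)  # remove self-connections

def recall(x):
    """One-step recall: compute the net input y_in = x @ W and apply
    the threshold activation (+1 if positive, -1 otherwise)."""
    return np.where(x @ W > 0, 1, -1)

noisy = np.array([1.0, -1.0, 1.0, -1.0])  # second component flipped

print(recall(s))      # the stored pattern reproduces itself -> "known"
print(recall(noisy))  # the one-bit error is corrected back to the pattern
```

Both calls return the stored pattern, so both inputs would be classified as "known"; an unrelated input would fail to reproduce any stored pattern and be rejected as "unknown".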

Storage Capacity of Auto-Associative Neural Networks

Storing mutually orthogonal vectors increases the storage capacity of auto-associative neural networks, because orthogonal patterns do not interfere with one another; a classic result states that a network of n units with a zeroed diagonal can reliably store up to n−1 mutually orthogonal bipolar vectors. In practice, however, the relationship between the number of stored vectors and the network's recall ability is more complex than this bound suggests: dimensionality, network architecture, activation functions, and the training algorithm all influence the usable capacity. The exact limit therefore depends on the particular network.
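The benefit of orthogonality can be checked numerically. Below, three mutually orthogonal bipolar vectors (n−1 of them for n = 4; the specific vectors are illustrative) are stored with the Hebb rule and each one recalls itself exactly:

```python
import numpy as np

# Three mutually orthogonal bipolar vectors in 4 dimensions (rows).
patterns = np.array([[1,  1, -1, -1],
                     [1, -1,  1, -1],
                     [1, -1, -1,  1]], dtype=float)
# Pairwise orthogonality: the Gram matrix is 4 times the identity.
assert np.allclose(patterns @ patterns.T, 4 * np.eye(3))

# Hebbian weight matrix with self-connections removed.
W = patterns.T @ patterns
np.fill_diagonal(W, 0.0)

# One-step recall of every stored vector at once.
recalled = np.where(patterns @ W > 0, 1, -1)
print(np.array_equal(recalled, patterns))  # each stored vector recalls itself
```

Because the patterns are orthogonal, each one's contribution to the net input is undisturbed by the others; with correlated patterns, the cross-terms would not cancel and recall could fail well below this count.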

Applications of Auto-Associative Neural Networks

Auto-associative neural networks possess a wide range of applications owing to their exceptional ability to learn and recognize patterns. Let's delve into some of the key applications in more detail:

  • Pattern Recognition: Auto-associative neural networks are extensively employed in pattern recognition tasks across various domains. They have the capability to learn intricate patterns and classify them accurately. In the realm of image recognition, these networks can identify objects, faces, and intricate visual patterns. In speech recognition, they can decipher spoken words and distinguish between different phonetic sounds. Moreover, auto-associative neural networks have been effective in handwriting recognition, enabling the digitization of handwritten documents.

  • Voice Recognition: Auto-associative neural networks play a pivotal role in voice recognition systems. By training on a diverse range of voices, these networks can learn to identify and distinguish different speakers. This capability has enabled the development of voice-controlled devices, where the network can accurately recognize and respond to specific voice commands. In speaker verification systems, auto-associative neural networks verify the authenticity of a claimed speaker based on their voice characteristics, bolstering security measures.

  • Signal Validation: Auto-associative neural networks are highly effective in signal validation tasks. In scenarios where signals may be corrupted by noise or interference, these networks can learn to differentiate between valid and invalid signals. This capability improves the accuracy and reliability of signal processing systems. For instance, in telecommunications, auto-associative neural networks can assist in filtering out noise from received signals, resulting in improved data transmission and reception quality.

In conclusion, auto-associative neural networks offer an invaluable tool for simulating and exploring the associative process. Their remarkable pattern recognition abilities, coupled with their applications in diverse fields like bioinformatics, voice recognition, and signal validation, continue to drive advancements in artificial intelligence and machine learning.

Updated on: 11-Oct-2023
