Jaisshree

95 Articles Published

Articles by Jaisshree

Page 4 of 10

Singular Value Decomposition

Jaisshree
Updated on 27-Mar-2026 671 Views

Singular Value Decomposition (SVD) is a powerful mathematical technique used in machine learning to analyze large and complex datasets. It decomposes a matrix into three simpler matrices, making it easier to understand patterns and reduce dimensionality. For any matrix A, SVD factorizes it as A = UΣVᵀ, where U contains the left singular vectors (the eigenvectors of AAᵀ), Σ is a diagonal matrix of singular values (the square roots of the eigenvalues), and Vᵀ contains the right singular vectors (the eigenvectors of AᵀA). Mathematical Algorithm − The SVD computation follows these steps: given matrix A, compute AᵀA (transpose of ...
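The factorization described above can be checked numerically. A minimal sketch using NumPy's `np.linalg.svd` on a small toy matrix (the matrix values here are arbitrary, chosen only for illustration):

```python
import numpy as np

# Toy 3x2 matrix; any shape works.
A = np.array([[3.0, 1.0], [1.0, 3.0], [0.0, 2.0]])

# Full SVD: U is 3x3, s holds the singular values, Vt is 2x2.
U, s, Vt = np.linalg.svd(A)

# Rebuild A from the factors: pad s into a 3x2 diagonal Sigma.
Sigma = np.zeros(A.shape)
np.fill_diagonal(Sigma, s)
A_rec = U @ Sigma @ Vt

print(np.allclose(A, A_rec))  # True: A == U Sigma Vt up to rounding
```

NumPy returns the singular values in descending order, so truncating `s`, `U`, and `Vt` to the top k entries gives the best rank-k approximation of A.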

Read More

Separating Planes In SVM

Jaisshree
Updated on 27-Mar-2026 273 Views

Support Vector Machine (SVM) is a supervised algorithm widely used in handwriting recognition, sentiment analysis, and more. To separate different classes, SVM finds the optimal hyperplane, the one that maximizes the margin between the two classes. Here are a few ways to optimize hyperplanes in SVM − Data Processing − SVM requires data that is normalised, scaled and centred, since it is sensitive to feature scale. Choose a Kernel − A kernel function transforms the input into a higher-dimensional space. Common choices include linear, polynomial and radial basis functions. ...
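The two tips above (scale first, then pick a kernel) can be sketched in a few lines, assuming scikit-learn is available; the blob dataset is a stand-in for real data:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Two well-separated clusters as a toy binary classification problem.
X, y = make_blobs(n_samples=100, centers=2, random_state=0)

# Scale first (SVMs are sensitive to feature scale),
# then fit an SVM with a radial basis function kernel.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X, y)
print(model.score(X, y))  # training accuracy, near 1.0 on separable data
```

Swapping `kernel="rbf"` for `"linear"` or `"poly"` is how the kernel choice mentioned above is made in practice.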

Read More

Reduce Data Dimensionality using PCA - Python

Jaisshree
Updated on 27-Mar-2026 362 Views

Any dataset used in machine learning algorithms may have numerous dimensions. However, not all of them contribute to the output; some simply cause the ML model to perform poorly by increasing size and complexity. Thus, it becomes important to eliminate such features from the dataset using Principal Component Analysis (PCA). PCA removes dimensions that do not improve results, creating a smaller and simpler dataset that retains most of the original, useful information. PCA is based on feature extraction: it maps data from a higher-dimensional space to a lower-dimensional space while maximizing variance. ...
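The centre-covariance-project pipeline behind PCA can be sketched from scratch in NumPy; the synthetic data here is hypothetical, scaled so the first two directions carry almost all the variance:

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 samples, 5 features; the first two features dominate the variance.
X = rng.normal(size=(200, 5)) * np.array([5.0, 3.0, 0.5, 0.2, 0.1])

# 1. Centre the data.
Xc = X - X.mean(axis=0)
# 2. Covariance matrix and its eigendecomposition.
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
# 3. Sort components by descending variance, keep the top k.
order = np.argsort(eigvals)[::-1]
k = 2
W = eigvecs[:, order[:k]]
# 4. Project onto the top-k principal components.
X_reduced = Xc @ W
print(X_reduced.shape)  # (200, 2)
```

In practice `sklearn.decomposition.PCA` wraps exactly these steps, but the eigendecomposition view makes the "maximize variance" claim concrete.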

Read More

Recommendation System in Python

Jaisshree
Updated on 27-Mar-2026 803 Views

Recommendation systems are tools in Python that suggest items or content to users based on their preferences and past behaviors. This technology utilizes algorithms to predict users' future preferences, thereby providing them with the most relevant content. The scope of this system is vast, with widespread use in various industries such as e-commerce, streaming services, and social media. Products, movies, music, books, and more can all be recommended through these systems. The provision of personalized recommendations not only helps foster customer engagement and loyalty but can also boost sales. Types of Recommendation Systems − Content-Based Recommendation Systems ...
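A content-based recommender of the kind the excerpt introduces can be reduced to cosine similarity over item feature vectors. The movie names and genre weights below are entirely hypothetical:

```python
import numpy as np

# Hypothetical item feature vectors (e.g. genre weights per movie).
items = {
    "Movie A": np.array([1.0, 0.9, 0.0]),   # action, thriller
    "Movie B": np.array([0.9, 1.0, 0.1]),
    "Movie C": np.array([0.0, 0.1, 1.0]),   # romance
}

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def recommend(liked, k=1):
    """Rank the other items by similarity to an item the user liked."""
    scores = {name: cosine(items[liked], vec)
              for name, vec in items.items() if name != liked}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("Movie A"))  # ['Movie B']
```

Collaborative filtering replaces the hand-built feature vectors with vectors learned from user-item interaction history, but the ranking step is the same idea.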

Read More

Implement Deep Autoencoder in PyTorch for Image Reconstruction

Jaisshree
Updated on 27-Mar-2026 821 Views

Deep autoencoders are neural networks that compress input data into a lower-dimensional representation and then reconstruct it back to its original form. In this tutorial, we'll implement a deep autoencoder in PyTorch to reconstruct MNIST handwritten digit images. What is an Autoencoder? An autoencoder consists of two main components − Encoder − Compresses input data into a latent representation. Decoder − Reconstructs the original data from the compressed representation. The goal is to minimize reconstruction error between input and output, forcing the network to learn meaningful data representations. This makes autoencoders useful for data compression, image denoising, ...
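The encode-decode-minimize-reconstruction-error loop can be made concrete with a deliberately tiny linear autoencoder in plain NumPy; this is a sketch of the training dynamic, not the deep PyTorch model the article itself builds, and the random data stands in for images:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))             # toy "images": 64 samples, 8 features

# Linear encoder/decoder with a 3-d latent space, trained by
# gradient descent on the mean squared reconstruction error.
W1 = rng.normal(scale=0.1, size=(8, 3))  # encoder weights
W2 = rng.normal(scale=0.1, size=(3, 8))  # decoder weights
lr = 0.05

def loss(W1, W2):
    return float(np.mean((X @ W1 @ W2 - X) ** 2))

first = loss(W1, W2)
for _ in range(300):
    Z = X @ W1                           # encode: compress to 3 dims
    X_hat = Z @ W2                       # decode: reconstruct 8 dims
    G = 2 * (X_hat - X) / X.size         # gradient of the loss w.r.t. X_hat
    gW2 = Z.T @ G                        # backprop into the decoder
    gW1 = X.T @ (G @ W2.T)               # backprop into the encoder
    W1 -= lr * gW1
    W2 -= lr * gW2
print(loss(W1, W2) < first)  # True: reconstruction error went down
```

A deep autoencoder stacks several nonlinear layers on each side and lets an autograd framework like PyTorch compute these gradients automatically.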

Read More

Explaining the Language in Natural Language

Jaisshree
Updated on 27-Mar-2026 237 Views

Natural Language Processing (NLP) enables computers to understand and process human language just like chatbots and translation tools do. NLP uses various techniques to analyze text and extract meaningful information from the complex structure of human languages. Natural Language Processing (NLP) uses several key techniques to process natural language effectively − Lemmatization − Reduces words to their root forms or lemma. For example, "bravery" becomes "brave". Tokenization − Breaks sentences into individual words called tokens for algorithmic processing. Stemming − Removes prefixes and suffixes from words. For example, "playing" becomes "play". NLP applications include text ...
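Tokenization and stemming from the list above fit in a few lines of plain Python. The suffix list here is a toy stand-in for a real stemmer such as NLTK's Porter stemmer:

```python
import re

def tokenize(text):
    """Split a sentence into lowercase word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def stem(word):
    """Naive suffix stripping (illustrative only, not a real stemmer)."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

tokens = tokenize("The children were playing in the parks")
print([stem(t) for t in tokens])
# ['the', 'children', 'were', 'play', 'in', 'the', 'park']
```

Lemmatization differs from this kind of stemming in that it looks words up in a vocabulary, so irregular forms like "better" map to "good" rather than to a chopped suffix.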

Read More

Classification of Text Documents using the Naive Bayes approach in Python

Jaisshree
Updated on 27-Mar-2026 355 Views

Naive Bayes algorithm is a powerful tool for classifying text documents into different categories. For example, if a document contains words like 'humid', 'rainy', or 'cloudy', we can use the Bayes algorithm to determine if this document belongs to a 'sunny day' or 'rainy day' category. The algorithm works on the assumption that words in documents are independent of each other. While this assumption is rarely true in natural language, the algorithm still performs well enough in practice − hence the term 'naive' in its name. Algorithm Steps − Step 1 − Input the documents, text strings ...
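The weather example above can be sketched as a word-count Naive Bayes classifier with Laplace (add-one) smoothing; the four training documents are made up for illustration:

```python
import math
from collections import Counter

# Hypothetical labelled corpus matching the weather example.
train = [
    ("sunny bright warm clear", "sunny day"),
    ("clear warm sunny", "sunny day"),
    ("humid rainy cloudy wet", "rainy day"),
    ("cloudy humid rain", "rainy day"),
]

# Word counts and class frequencies from the training documents.
word_counts = {c: Counter() for _, c in train}
class_counts = Counter()
for doc, c in train:
    class_counts[c] += 1
    word_counts[c].update(doc.split())
vocab = {w for counts in word_counts.values() for w in counts}

def classify(doc):
    """Pick the class maximizing log P(class) + sum of log P(word|class),
    with add-one smoothing so unseen words never zero out a class."""
    best, best_lp = None, -math.inf
    for c in class_counts:
        lp = math.log(class_counts[c] / len(train))
        total = sum(word_counts[c].values())
        for w in doc.split():
            lp += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

print(classify("humid cloudy rainy"))  # 'rainy day'
```

Summing log-probabilities instead of multiplying raw probabilities avoids numeric underflow on long documents; the independence assumption is exactly the per-word sum.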

Read More

BLEU Score for Evaluating Neural Machine Translation using Python

Jaisshree
Updated on 27-Mar-2026 757 Views

Using NMT, or Neural Machine Translation, in NLP we can translate text from a given language to a target language. To evaluate how well the translation is performed, we use the BLEU, or Bilingual Evaluation Understudy, score in Python. The BLEU score works by comparing n-grams of machine-translated sentences with n-grams of human reference translations. As sentence length increases, the BLEU score tends to decrease. In general, a BLEU score ranges from 0 to 1, and a higher value indicates better quality. However, achieving a perfect score is very rare. Note that the ...
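The n-gram comparison can be sketched from scratch; this is a simplified single-reference, unsmoothed sentence BLEU, whereas libraries such as NLTK add smoothing and multiple-reference handling:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Geometric mean of modified n-gram precisions (n = 1..max_n)
    times a brevity penalty for too-short candidates."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        c, r = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(count, r[g]) for g, count in c.items())
        precisions.append(overlap / max(sum(c.values()), 1))
    if min(precisions) == 0:
        return 0.0
    geo = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo

print(bleu("the cat is on the mat", "the cat is on the mat"))  # 1.0
```

The "min" in the overlap is the modified precision: a candidate cannot get credit for repeating an n-gram more often than the reference contains it.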

Read More

What is Standardization in Machine Learning

Jaisshree
Updated on 27-Mar-2026 1K+ Views

Standardization is a crucial preprocessing technique in machine learning that ensures all features are on the same scale. This process transforms data to have a mean of 0 and a standard deviation of 1, making features comparable and improving model performance. What is Standardization? Standardization, also known as Z-score normalization, is a feature scaling technique that transforms data by subtracting the mean and dividing by the standard deviation. This process ensures that all features contribute equally to machine learning algorithms that are sensitive to feature scale. Mathematical Formula − The standardization formula is Z = (X − μ) / σ ...
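The transform described above is one line of NumPy applied per feature column; the random data here is hypothetical, drawn on an arbitrary scale:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(loc=10.0, scale=5.0, size=(100, 3))  # features far from zero mean

# Z-score standardization: subtract the mean and divide by the
# standard deviation, computed independently for each feature.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

print(np.allclose(Z.mean(axis=0), 0))  # True: each feature now has mean 0
print(np.allclose(Z.std(axis=0), 1))   # True: and standard deviation 1
```

In a real pipeline the mean and standard deviation are fitted on the training set only and reused on test data, which is what `sklearn.preprocessing.StandardScaler` does.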

Read More

SPSA (Simultaneous Perturbation Stochastic Approximation) Algorithm using Python

Jaisshree
Updated on 27-Mar-2026 701 Views

The Simultaneous Perturbation Stochastic Approximation (SPSA) algorithm is a gradient-free optimization method that finds the minimum of an objective function by simultaneously perturbing all parameters. Unlike traditional gradient descent, SPSA estimates gradients using only two function evaluations per iteration, regardless of the parameter dimension. SPSA is particularly effective for optimizing noisy, non-differentiable functions or problems with many parameters where computing exact gradients is computationally expensive or impossible. How SPSA Works The algorithm estimates the gradient by evaluating the objective function at two points: the current parameter values plus and minus a random perturbation. This simultaneous perturbation of ...
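The two-evaluation gradient estimate can be sketched in NumPy on a toy quadratic; the objective, gain schedules, and iteration count below are illustrative choices, not tuned values:

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 3.0])

def f(theta):
    """Objective to minimize: a quadratic with its minimum at `target`."""
    return float(np.sum((theta - target) ** 2))

theta = np.zeros(3)
for k in range(1, 1001):
    a = 0.5 / (k + 10)       # decaying step size
    c = 0.1 / k ** 0.25      # decaying perturbation size
    delta = rng.choice([-1.0, 1.0], size=3)  # random Rademacher directions
    # Just two function evaluations estimate the full 3-d gradient:
    # all parameters are perturbed simultaneously by +c*delta and -c*delta.
    diff = f(theta + c * delta) - f(theta - c * delta)
    g_hat = diff / (2 * c) / delta
    theta -= a * g_hat
print(np.round(theta, 2))  # approaches [1, -2, 3]
```

The cost of each iteration stays at two evaluations whether theta has 3 parameters or 3 million, which is the advantage over coordinate-wise finite differences.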

Read More
Showing 31–40 of 95 articles