Python - Word Embedding using Word2Vec


Word embedding is a language modeling technique for mapping words to vectors of real numbers, so that each word or phrase is represented as a point in a vector space with several dimensions. Word embeddings can be generated with a variety of methods, such as neural networks, co-occurrence matrices and probabilistic models.
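To make the co-occurrence idea concrete, the short sketch below (not part of the original example; the toy sentences and the one-word context window are assumptions for illustration) builds a small co-occurrence matrix with NumPy, where each row acts as a crude vector for one word:

import numpy as np

# Toy corpus: a list of tokenized sentences (made up for illustration)
sentences = [["the", "cat", "sat"], ["the", "dog", "sat"]]
vocab = sorted({w for s in sentences for w in s})
index = {w: i for i, w in enumerate(vocab)}

# Count how often each pair of words appears within one position of each other
matrix = np.zeros((len(vocab), len(vocab)), dtype=int)
for sentence in sentences:
    for pos, word in enumerate(sentence):
        for neighbour in sentence[max(0, pos - 1):pos] + sentence[pos + 1:pos + 2]:
            matrix[index[word], index[neighbour]] += 1

# Each row of the matrix is a (sparse, high-dimensional) vector for one word
print(vocab)
print(matrix)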

Word2Vec is one such technique. It provides two model architectures for generating word embeddings: Continuous Bag of Words (CBOW) and Skip-Gram. Both are shallow, two-layer neural networks with one input layer, one hidden layer and one output layer.
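Before working with a full text file, both architectures can be tried on a tiny hand-made corpus. The sketch below assumes gensim 4.x, where the dimensionality argument is named vector_size and the sg flag selects CBOW (sg = 0, the default) or Skip-Gram (sg = 1); the sentences are made up for illustration:

from gensim.models import Word2Vec

# Tiny hand-made corpus: a list of tokenized sentences (illustrative only)
corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["cats", "and", "dogs", "are", "pets"],
]

# CBOW model (sg=0 is the default); a Skip-Gram model would use sg=1
model = Word2Vec(corpus, vector_size=10, window=2, min_count=1, sg=0)

# Each word in the vocabulary is now mapped to a 10-dimensional real-valued vector
print(model.wv["cat"].shape)   # (10,)
print(model.wv["cat"][:5])     # first five components of the embedding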

Example

The example below reads the text of alice.txt, tokenizes it into sentences and words with NLTK, then trains a CBOW model and a Skip-Gram model with gensim and compares cosine similarities between word pairs. The code uses the gensim 4.x parameter names.

# importing all necessary modules
from nltk.tokenize import sent_tokenize, word_tokenize
import warnings
warnings.filterwarnings(action='ignore')

import gensim
from gensim.models import Word2Vec
# nltk.download('punkt') may be required once for the tokenizers

# Read the 'alice.txt' file (raw string avoids backslash-escape issues on Windows)
sample = open(r"C:\Users\Vishesh\Desktop\alice.txt", "r")
s = sample.read()
sample.close()

# Replace newline characters with spaces
f = s.replace("\n", " ")

data = []
# iterate through each sentence in the file
for i in sent_tokenize(f):
    temp = []
    # tokenize the sentence into words
    for j in word_tokenize(i):
        temp.append(j.lower())
    data.append(temp)

# Create CBOW model
model1 = gensim.models.Word2Vec(data, min_count=1, vector_size=100, window=5)

# Print results
print("Cosine similarity between 'alice' and 'wonderland' - CBOW : ",
      model1.wv.similarity('alice', 'wonderland'))
print("Cosine similarity between 'alice' and 'machines' - CBOW : ",
      model1.wv.similarity('alice', 'machines'))

# Create Skip-Gram model
model2 = gensim.models.Word2Vec(data, min_count=1, vector_size=100, window=5, sg=1)

# Print results
print("Cosine similarity between 'alice' and 'wonderland' - Skip Gram : ",
      model2.wv.similarity('alice', 'wonderland'))
print("Cosine similarity between 'alice' and 'machines' - Skip Gram : ",
      model2.wv.similarity('alice', 'machines'))
