Word embedding is a technique in natural language processing (NLP). The aim is to represent text in a vector space in which semantically similar texts lie close to each other. There are two broad approaches: representing a word as a vector of the words it co-occurs with, or as a vector of its context words. Many techniques have been used, though current methods are mostly based on neural networks.
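The count-based flavor of this idea can be sketched in a few lines: build, for each word, a sparse vector of context-word counts within a small window, then compare words with cosine similarity. The corpus, window size, and function names below are illustrative assumptions, not a standard API.

```python
import math
from collections import defaultdict

def cooccurrence_vectors(sentences, window=2):
    """Map each word to a sparse vector of context-word counts."""
    vecs = defaultdict(lambda: defaultdict(int))
    for tokens in sentences:
        for i, word in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vecs[word][tokens[j]] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(weight * v.get(key, 0) for key, weight in u.items())
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Toy corpus (an assumption for illustration only).
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "stocks fell on the market today".split(),
]
vecs = cooccurrence_vectors(corpus)

# "cat" and "dog" appear in nearly identical contexts, so their
# vectors should be more similar than "cat" and "stocks".
print(cosine(vecs["cat"], vecs["dog"]) > cosine(vecs["cat"], vecs["stocks"]))
```

Neural methods replace these raw counts with dense vectors learned by predicting context words, but the underlying goal is the same: words that occur in similar contexts receive similar vectors.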