Deep learning with word2vec and gensim

Radim Řehůřek gensim, programming 33 Comments

Neural networks have been a bit of a punching bag historically: neither particularly fast, nor robust or accurate, nor open to introspection by humans curious to gain insights from them. But things have been changing lately, with deep learning becoming a hot topic in academia and delivering spectacular results. I decided to check out one deep learning algorithm via gensim.

Word2vec: the good, the bad (and the fast)

The kind folks at Google have recently published several new unsupervised, deep learning algorithms in this article.

Selling point: “Our model can answer the query ‘give me a word like king, like woman, but unlike man’ with ‘queen’.” Pretty cool.

Not only do these algorithms boast great performance, accuracy and a theoretically-not-so-well-founded-but-pragmatically-superior model (all three solid plusses in my book), but they were also devised by my fellow countryman and county-man, Tomáš Mikolov from Brno! The Googlers have also released an open source implementation of these algorithms, which always helps with the uptake of fresh academic ideas. Brilliant.

Although, in the words of word2vec’s authors, the toolkit is meant for “research purposes”, it’s actually highly optimized C, down to cache alignments, memory look-up tables, static memory allocations and a penchant for single-letter variable names. Somebody obviously spent time profiling this, which is good news for people running it, and bad news for people wanting to understand it, extend it or integrate it (as researchers are wont to do).

In short, the spirit of word2vec fits gensim’s tagline of topic modelling for humans, but the actual code doesn’t, tight and beautiful as it is. I therefore decided to reimplement word2vec in gensim, starting with the hierarchical softmax skip-gram model, because that’s the one with the best reported accuracy. I reimplemented it from scratch, de-obfuscating word2vec into a more readable form. No need for a custom implementation of hashing, lists, dicts, random number generators… all of these come built-in with Python.

Free, fast, pretty: pick any two. As the ratio of clever code to comments shrank and shrank (down to ~100 Python lines, with 40% of them comments), so did the performance. About 1000x. Yuck. I rewrote the explicit Python loops in NumPy, speeding things up ~50x (yay), but that means it’s still ~20x slower than the original (ouch). I could optimize it further, using Cython and whatnot, but that would lead back to obfuscation, defeating the purpose of this exercise. I may still do it anyway, for selected hotspots. EDIT: Done, see Part II: Optimizing word2vec in Python; performance of the Python port is now on par with the C code, and sometimes even faster.
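
To give a flavour of what “rewriting the loops in NumPy” means, here is a toy illustration (not the actual gensim code): the same dot product written as an explicit Python loop and as a single vectorized NumPy call. The loop performs one interpreted multiply-add per element; NumPy hands the whole array to compiled code at once.

import numpy as np

def dot_loop(a, b):
    # explicit Python loop: one interpreted float multiply-add per element
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

a, b = np.random.rand(200), np.random.rand(200)
assert abs(dot_loop(a, b) - np.dot(a, b)) < 1e-8  # same result, but np.dot runs in compiled code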

For now, the code lives in a git branch, to be merged into gensim proper once I’m happy with its functionality and performance. In the meantime, the gensim version is already good enough to be unleashed on reasonably sized corpora, taking on natural language processing tasks “the Python way”. EDIT: Done, merged into gensim release 0.8.8. Installation instructions.

So, what can it do?

Distributional semantics goodness; see here and the original article for more background. Basically, the algorithm takes some unstructured text and learns “features” about each word. The neat thing is (apart from it learning the features completely automatically, without any human input/supervision!) that these features capture different relationships, both semantic and syntactic. This allows some (very basic) algebraic operations, like the above-mentioned “king - man + woman = queen“. More concretely:

>>> # import modules and set up logging
>>> from gensim.models import word2vec
>>> import logging
>>> logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
>>> # load up unzipped corpus from http://mattmahoney.net/dc/text8.zip
>>> sentences = word2vec.Text8Corpus('/tmp/text8')
>>> # train the skip-gram model; default window=5
>>> model = word2vec.Word2Vec(sentences, size=200)
>>> # ... and some hours later... just as advertised...
>>> model.most_similar(positive=['woman', 'king'], negative=['man'], topn=1)
[('queen', 0.5359965)]

>>> # pickle the entire model to disk, so we can load&resume training later
>>> model.save('/tmp/text8.model')
>>> # store the learned weights, in a format the original C tool understands
>>> model.save_word2vec_format('/tmp/text8.model.bin', binary=True)
>>> # or, import word weights created by the (faster) C word2vec
>>> # this way, you can switch between the C/Python toolkits easily
>>> model = word2vec.Word2Vec.load_word2vec_format('/tmp/vectors.bin', binary=True)

>>> # "boy" is to "father" as "girl" is to ...?
>>> model.most_similar(['girl', 'father'], ['boy'], topn=3)
[('mother', 0.61849487), ('wife', 0.57972813), ('daughter', 0.56296098)]
>>> more_examples = ["he his she", "big bigger bad", "going went being"]
>>> for example in more_examples:
...     a, b, x = example.split()
...     predicted = model.most_similar([x, b], [a])[0][0]
...     print "'%s' is to '%s' as '%s' is to '%s'" % (a, b, x, predicted)
'he' is to 'his' as 'she' is to 'her'
'big' is to 'bigger' as 'bad' is to 'worse'
'going' is to 'went' as 'being' is to 'was'

>>> # which word doesn't go with the others?
>>> model.doesnt_match("breakfast cereal dinner lunch".split())
'cereal'

This already beats the English of some of my friends 🙂

Python, sweet home


Having deep learning available in Python allows us to plug in the multitude of NLP tools written in Python. More intelligent tokenization/sentence splitting, named entity recognition? Just use NLTK. Web crawling, lemmatization? Try pattern. Removing boilerplate HTML and extracting meaningful, plain text? jusText. Continuing the learning pipeline with k-means or other machine learning algos? Scikit-learn has loads.
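
As a rough sketch of one such pipeline (the file name my_corpus.txt is just a placeholder, and NLTK’s punkt tokenizer data must be downloaded first): tokenize raw text with NLTK, train word2vec on the result, then cluster the learned word vectors with scikit-learn’s k-means.

import nltk  # requires the 'punkt' tokenizer data: nltk.download('punkt')
from gensim.models import word2vec
from sklearn.cluster import KMeans

raw_text = open('my_corpus.txt').read()

# smarter sentence splitting + tokenization, courtesy of NLTK
sentences = [nltk.word_tokenize(sentence) for sentence in nltk.sent_tokenize(raw_text)]

# train the skip-gram model on the tokenized sentences
model = word2vec.Word2Vec(sentences, size=200)

# cluster the learned word vectors (rows of model.syn0) with k-means
kmeans = KMeans(n_clusters=50).fit(model.syn0)
word_to_cluster = dict(zip(model.index2word, kmeans.labels_))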

Needless to say, better integration with gensim is also under way.

Part II: Optimizing word2vec in Python

Comments 33

  1. Rene Nederhand

    Hi Radim,

    I already played with word2vec. It’s a wonderful tool, but I missed the ease of integrating it with Python. Thanks for helping solve this issue with Gensim and giving us a truly Pythonic way to deal with these deep learning algorithms.

    Cheers,
    Rene

  2. Sujith PS

    Hi,
    I used your tool, it is very interesting.
    But while word2vec in C took only 15 minutes to train, your application took almost 9 hours.

    1. Radim (Post Author)

      Hi Sujith, did you miss the link to “part II” at the end? 72x speed up. If you need absolutely top speed, you may be happy to hear there will be a “part III” too, with further optimizations.

      Also, the port is still very much in the making, so be sure to use the latest develop branch from GitHub and not the “stable” release version.

  3. Mark Pinches

    Hi Radim,

    Thanks so much for this. I’m trying to get started by loading the pretrained .bin files from the word2vec site (freebase-vectors-skipgram1000.bin.gz). It loads fine, but when I run the most_similar function, it can’t find the words in the vocabulary. My error output is below.

    Any ideas where I’m going wrong?

    model.most_similar(['girl', 'father'], ['boy'], topn=3)
    2013-10-11 10:22:00,562 : WARNING : word 'girl' not in vocabulary; ignoring it
    2013-10-11 10:22:00,562 : WARNING : word 'father' not in vocabulary; ignoring it
    2013-10-11 10:22:00,563 : WARNING : word 'boy' not in vocabulary; ignoring it
    Traceback (most recent call last):
    File "", line 1, in
    File "/Users/Mark/anaconda/python.app/Contents/lib/python2.7/site-packages/gensim-0.8.7-py2.7.egg/gensim/models/word2vec.py", line 312, in most_similar
    raise ValueError("cannot compute similarity with no input")
    ValueError: cannot compute similarity with no input

    1. Radim (Post Author)

      Hi Mark, I never managed to download the freebase vectors (the download always came out corrupted for me). But its description on the word2vec page suggests the words in freebase look like “/en/marvin_minsky” or “/en/godel_prize”. That’s why they don’t match your input. Check the freebase file contents for its word syntax.

  4. Mark Pinches

    Thanks Radim,

    I tried that (using the word structure you suggested) but no joy. I’m going to continue looking for a solution and if I come up with something, I’ll post it here.

    Cheers

    Mark

    1. Radim (Post Author)

      The syntax is no mystery, Mark, really. Just open the file with e.g. “less” and see what’s in it.

      Alternatively, load it from gensim and do a `print my_model.index2word`.
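
      Something like this (a quick sketch; the “/en/” prefix is just my guess from the word2vec page, so use whatever format the inspection actually reveals):

      >>> model = word2vec.Word2Vec.load_word2vec_format('freebase-vectors-skipgram1000.bin', binary=True)
      >>> model.index2word[:10]   # peek at how the vocabulary entries are actually spelled
      >>> # if they do look like '/en/word', query them in that form:
      >>> model.most_similar(['/en/girl', '/en/father'], ['/en/boy'], topn=3)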

  5. DG2

    Hi Radim. Thanks for the great work!

    I have been looking through the code and I’m curious about one thing with the “train_sentence” function. It seems to me that the architecture implied by the code tries to predict “word” using “word2” (since you are using the latent representation for word2 and the code for word), while the skip-gram architecture actually tries to estimate word2 from word.

    I have probably missed something, so sorry if this makes no sense.

    Cheers,
    D.

    1. Radim (Post Author)

      Hello, yes, the model tries to estimate context words based on the current word. In the code, this corresponds to
      syn0[word2.index] += dot(ga, l2a). The code also has to update the hierarchical embeddings (syn1), so maybe that’s what threw you off?
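
      Roughly, the inner loop does this for each (current word, context word2) pair. Below is a simplified sketch of the pure-Python version, with the windowing, vocabulary lookup and learning-rate decay left out:

      from numpy import dot, outer, exp

      def train_pair(model, word, word2, alpha):
          l1 = model.syn0[word2.index]                # vector of the context word
          l2a = model.syn1[word.point]                # vectors of the current word's tree nodes
          fa = 1.0 / (1.0 + exp(-dot(l1, l2a.T)))     # propagate hidden -> output
          ga = (1 - word.code - fa) * alpha           # gradient scaled by the learning rate
          model.syn1[word.point] += outer(ga, l1)     # update the hierarchical (syn1) vectors
          model.syn0[word2.index] += dot(ga, l2a)     # update the context word's (syn0) vector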

  6. Mark

    Hi Radim,
    Just to follow up on my earlier query above, for noobs and poor programmers (like myself): the model is essentially a dictionary and can be queried using my_model.index2word[0]. This should return a string.
    M

  7. Mark

    Hi again,
    I have 57,000 short strings of text, and I want to create vector representations of them for clustering. So how do I access each term’s vector for the calculation? Note I am using gensim 0.8.7, and when I query my_model['term'] I get the error
    Traceback (most recent call last):
    File "", line 1, in
    TypeError: 'Word2Vec' object has no attribute '__getitem__'
    >>>
    any ideas?
    Thanks
    M

  8. m

    Never mind, I just saw this line in your code sample:
    model = word2vec.Word2Vec.load_word2vec_format('/tmp/vectors.bin', binary=True)

  9. irt24

    Is there a reason why word indices in the vocabulary are sometimes not consecutive? For instance [model.vocab[word].index for word in model.vocab.keys()[0:9]] is not always the list of numbers from 0 to 8. In one particular case, I got [0, 2, 3, 4, 5, 6, 8, 10, 11].

  10. Adam

    Hi Radim,

    Thanks for putting this together! I’m wondering how I can generate a list of similar words, similar to the distance examples on the original word2vec page.

    For instance, instead of saying
    most_similar(positive=['dog', 'cat'], negative=['kitten'])
    I’d like to just put in the word “dog” and then get the top 10 most similar words, without having to make analogy-type comparisons.

    Any ideas? Thanks again 🙂

  11. AAYUSH

    Hello Radim, I wish to know whether the model can be used to predict the next word in a phrase, given the phrase. Can it be used for word prediction?

  12. Pingback: Why I cannot reproduce word2vec results using gensim - BlogoSfera

  13. Lachlan

    Hi, I’m pretty new to both machine learning and reddit so my apologies if this topic is out of place, in the wrong subreddit, or not appropriate.

    This semester, my professor has asked me to investigate word2vec, by T. Mikolov and his team at Google, particularly with regards to machine translation. For this task, I’m using the implementation of word2vec in the gensim package for Python.

    In the paper (link below) Mikolov describes how, after training two monolingual models, they generate a translation matrix on the most frequently occurring 5000 words and, using this translation matrix, evaluate the accuracy of the translations of the following 1000 words. Paper: http://arxiv.org/pdf/1309.4168.pdf

    Here are two screencaps, one of the description of the matrix in the paper and one of some clarification Mikolov posted on a board.

    From the paper: http://imgur.com/vdYyy2N
    Mikolov’s post: http://imgur.com/UmMNHWY

    I have been playing around with the models I have generated for Japanese and English in gensim quite a bit. I downloaded wikipedia dumps, processed and tokenised them, and generated the models with gensim.

    I would like to emulate what Mikolov did and see how accurate the translations are for Japanese/English (my two languages). I am unsure how to get the top 6000 words (5000 for building the translation matrix, 1000 for testing), and especially how to produce the matrix itself. I have read the papers and seen the algorithms but can’t quite put it into code.

    If anyone has some ideas/suggestions on how to do so, provide some pseudocode or has gensim knowledge and can lend a hand it would be greatly appreciated. I’m very motivated for this task but having difficulty progressing.
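
    For what it’s worth, the core of that translation-matrix idea fits in a few lines of NumPy: take vector pairs for a seed dictionary, solve the least-squares problem X.dot(W) ≈ Z for the matrix W, then translate a new word by projecting its vector through W and ranking target-language words by cosine similarity. The sketch below assumes two trained Word2Vec models and a placeholder seed_pairs list of (source word, target word) tuples; the most frequent words can presumably be taken by sorting model.vocab by each entry’s count attribute.

    import numpy as np

    def word_vec(model, word):
        # look up a word's learned vector (avoids relying on Word2Vec.__getitem__)
        return model.syn0[model.vocab[word].index]

    def learn_translation_matrix(src_model, tgt_model, seed_pairs):
        # stack the seed-dictionary vectors row by row and solve X.dot(W) ~= Z
        X = np.array([word_vec(src_model, src) for src, tgt in seed_pairs])
        Z = np.array([word_vec(tgt_model, tgt) for src, tgt in seed_pairs])
        W, _, _, _ = np.linalg.lstsq(X, Z)
        return W

    def translate(word, W, src_model, tgt_model, topn=5):
        projected = np.dot(word_vec(src_model, word), W)   # map into the target space
        # rank target-language words by cosine similarity to the projected vector
        norms = np.sqrt((tgt_model.syn0 ** 2).sum(axis=1)) * np.sqrt((projected ** 2).sum())
        sims = np.dot(tgt_model.syn0, projected) / norms
        best = np.argsort(sims)[::-1][:topn]
        return [tgt_model.index2word[i] for i in best]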

  14. parisa

    Hi, I am new to gensim. I installed Canopy and gensim but I don’t know how to use it; I am confused. Please help and guide me.
    Thanks

  15. Pingback: Processing text | Pearltrees

  16. Pingback: Why I cannot reproduce word2vec results using gensim - HTML CODE

  17. Pingback: Google News Word2vec | I love You Zones

  18. Pingback: Getting started with Word2Vec | TextProcessing | A Text Processing Portal for Humans

  19. Pingback: The Word2Vec Algorithm – Site Title
