models.word2vec – Deep learning with word2vec

Produce word vectors with deep learning via word2vec’s skip-gram and CBOW models, using either hierarchical softmax or negative sampling [1] [2].

NOTE: There are more ways to get word vectors in Gensim than just Word2Vec. See FastText and wrappers for VarEmbed and WordRank.

The training algorithms were originally ported from the C package https://code.google.com/p/word2vec/ and extended with additional functionality.

For a blog tutorial on gensim word2vec, with an interactive web app trained on GoogleNews, visit http://radimrehurek.com/2014/02/word2vec-tutorial/

Make sure you have a C compiler before installing gensim, to use the optimized (compiled) word2vec training (a 70x speedup compared to the plain NumPy implementation [3]).

Initialize a model with e.g.:

>>> model = Word2Vec(sentences, size=100, window=5, min_count=5, workers=4)

Persist a model to disk with:

>>> model.save(fname)
>>> model = Word2Vec.load(fname)  # you can continue training with the loaded model!

The word vectors are stored in a KeyedVectors instance in model.wv. This separates the read-only word vector lookup operations in KeyedVectors from the training code in Word2Vec:

>>> model.wv['computer']  # numpy vector of a word
array([-0.00449447, -0.00310097,  0.02421786, ...], dtype=float32)

The word vectors can also be instantiated from an existing file on disk in the word2vec C format as a KeyedVectors instance.

NOTE: It is impossible to continue training the vectors loaded from the C format because the hidden weights, vocabulary frequencies and the binary tree are missing:

>>> from gensim.models import KeyedVectors
>>> word_vectors = KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False)  # C text format
>>> word_vectors = KeyedVectors.load_word2vec_format('/tmp/vectors.bin', binary=True)  # C binary format

You can perform various NLP word tasks with the model. Some of them are already built-in:

>>> model.wv.most_similar(positive=['woman', 'king'], negative=['man'])
[('queen', 0.50882536), ...]

>>> model.wv.most_similar_cosmul(positive=['woman', 'king'], negative=['man'])
[('queen', 0.71382287), ...]


>>> model.wv.doesnt_match("breakfast cereal dinner lunch".split())
'cereal'

>>> model.wv.similarity('woman', 'man')
0.73723527

Log probability of a text under the model:

>>> model.score(["The fox jumped over a lazy dog".split()])
0.2158356

Correlation with human opinion on word similarity:

>>> model.wv.evaluate_word_pairs(os.path.join(module_path, 'test_data','wordsim353.tsv'))
0.51, 0.62, 0.13

And on analogies:

>>> model.wv.accuracy(os.path.join(module_path, 'test_data', 'questions-words.txt'))

and so on.

If you’re finished training a model (i.e. no more updates, only querying), then switch to the gensim.models.KeyedVectors instance in wv

>>> word_vectors = model.wv
>>> del model

to trim unneeded model state and use much less RAM.

Note that there is a gensim.models.phrases module which lets you automatically detect phrases longer than one word. Using phrases, you can learn a word2vec model where “words” are actually multiword expressions, such as new_york_times or financial_crisis:

>>> bigram_transformer = gensim.models.Phrases(sentences)
>>> model = Word2Vec(bigram_transformer[sentences], size=100, ...)

[1] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient Estimation of Word Representations in Vector Space. In Proceedings of Workshop at ICLR, 2013.
[2] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed Representations of Words and Phrases and their Compositionality. In Proceedings of NIPS, 2013.
[3] Optimizing word2vec in gensim, http://radimrehurek.com/2013/09/word2vec-in-python-part-two-optimizing/
class gensim.models.word2vec.BrownCorpus(dirname)

Bases: object

Iterate over sentences from the Brown corpus (part of NLTK data).
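
Example (a minimal usage sketch; the path is a placeholder for wherever the NLTK Brown corpus data lives on your machine):

>>> from gensim.models.word2vec import BrownCorpus, Word2Vec
>>> sentences = BrownCorpus('/home/user/nltk_data/corpora/brown')
>>> model = Word2Vec(sentences, size=100, workers=4)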

class gensim.models.word2vec.LineSentence(source, max_sentence_length=10000, limit=None)

Bases: object

Simple format: one sentence = one line; words already preprocessed and separated by whitespace.

source can be either a string or a file object. The file is clipped to the first limit lines (no clipping if limit is None, the default).

Example:

sentences = LineSentence('myfile.txt')

Or for compressed files:

sentences = LineSentence('compressed_text.txt.bz2')
sentences = LineSentence('compressed_text.txt.gz')
class gensim.models.word2vec.PathLineSentences(source, max_sentence_length=10000, limit=None)

Bases: object

Works like word2vec.LineSentence, but will process all files in a directory in alphabetical order by filename. The directory can only contain files that can be read by LineSentence: .bz2, .gz, and text files. Any file not ending with .bz2 or .gz is assumed to be a text file. Does not work with subdirectories.

The format of files (either text, or compressed text files) in the path is one sentence = one line, with words already preprocessed and separated by whitespace.

source should be a path to a directory (as a string) where all files can be opened by the LineSentence class. Each file will be read up to limit lines (or not clipped if limit is None, the default).

Example:

sentences = PathLineSentences(os.path.join(os.getcwd(), 'corpus'))

The files in the directory should be either text files, .bz2 files, or .gz files.

class gensim.models.word2vec.Text8Corpus(fname, max_sentence_length=10000)

Bases: object

Iterate over sentences from the “text8” corpus, unzipped from http://mattmahoney.net/dc/text8.zip .
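
Example (a minimal usage sketch; the path assumes you downloaded and unzipped text8.zip to /tmp):

>>> from gensim.models.word2vec import Text8Corpus, Word2Vec
>>> sentences = Text8Corpus('/tmp/text8')
>>> model = Word2Vec(sentences, size=200, workers=4)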

class gensim.models.word2vec.Word2Vec(sentences=None, size=100, alpha=0.025, window=5, min_count=5, max_vocab_size=None, sample=0.001, seed=1, workers=3, min_alpha=0.0001, sg=0, hs=0, negative=5, cbow_mean=1, hashfxn=<built-in function hash>, iter=5, null_word=0, trim_rule=None, sorted_vocab=1, batch_words=10000, compute_loss=False, callbacks=())

Bases: gensim.models.base_any2vec.BaseWordEmbeddingsModel

Class for training, using and evaluating neural networks described in https://code.google.com/p/word2vec/

If you’re finished training a model (i.e. no more updates, only querying), then switch to the gensim.models.KeyedVectors instance in wv.

The model can be stored/loaded via its save() and load() methods, or stored/loaded in a format compatible with the original word2vec implementation via wv.save_word2vec_format() and Word2VecKeyedVectors.load_word2vec_format().

Initialize the model from an iterable of sentences. Each sentence is a list of words (unicode strings) that will be used for training.

Parameters:
  • sentences (iterable of iterables) – The sentences iterable can be simply a list of lists of tokens, but for larger corpora, consider an iterable that streams the sentences directly from disk/network. See BrownCorpus, Text8Corpus or LineSentence in word2vec module for such examples. If you don’t supply sentences, the model is left uninitialized – use if you plan to initialize it in some other way.
  • sg (int {1, 0}) – Defines the training algorithm. If 1, skip-gram is employed; otherwise, CBOW is used.
  • size (int) – Dimensionality of the feature vectors.
  • window (int) – The maximum distance between the current and predicted word within a sentence.
  • alpha (float) – The initial learning rate.
  • min_alpha (float) – Learning rate will linearly drop to min_alpha as training progresses.
  • seed (int) – Seed for the random number generator. Initial vectors for each word are seeded with a hash of the concatenation of word + str(seed). Note that for a fully deterministically-reproducible run, you must also limit the model to a single worker thread (workers=1), to eliminate ordering jitter from OS thread scheduling. (In Python 3, reproducibility between interpreter launches also requires use of the PYTHONHASHSEED environment variable to control hash randomization).
  • min_count (int) – Ignores all words with total frequency lower than this.
  • max_vocab_size (int) – Limits the RAM during vocabulary building; if there are more unique words than this, then prune the infrequent ones. Every 10 million word types need about 1GB of RAM. Set to None for no limit.
  • sample (float) – The threshold for configuring which higher-frequency words are randomly downsampled, useful range is (0, 1e-5).
  • workers (int) – Use this many worker threads to train the model (faster training with multicore machines).
  • hs (int {1,0}) – If 1, hierarchical softmax will be used for model training. If set to 0, and negative is non-zero, negative sampling will be used.
  • negative (int) – If > 0, negative sampling will be used, the int for negative specifies how many “noise words” should be drawn (usually between 5-20). If set to 0, no negative sampling is used.
  • cbow_mean (int {1,0}) – If 0, use the sum of the context word vectors; if 1, use the mean. Only applies when CBOW is used.
  • hashfxn (function) – Hash function to use to randomly initialize weights, for increased training reproducibility.
  • iter (int) – Number of iterations (epochs) over the corpus.
  • trim_rule (function) – Vocabulary trimming rule, specifies whether certain words should remain in the vocabulary, be trimmed away, or handled using the default (discard if word count < min_count). Can be None (min_count will be used, look to keep_vocab_item()), or a callable that accepts parameters (word, count, min_count) and returns either gensim.utils.RULE_DISCARD, gensim.utils.RULE_KEEP or gensim.utils.RULE_DEFAULT. Note: The rule, if given, is only used to prune vocabulary during build_vocab() and is not stored as part of the model.
  • sorted_vocab (int {1,0}) – If 1, sort the vocabulary by descending frequency before assigning word indexes.
  • batch_words (int) – Target size (in words) for batches of examples passed to worker threads (and thus cython routines). (Larger batches will be passed if individual texts are longer than 10000 words, but the standard cython code truncates to that maximum.)
  • compute_loss (bool) – If True, computes and stores loss value which can be retrieved using model.get_latest_training_loss().
  • callbacks – List of callbacks that need to be executed/run at specific stages during training.

Examples

Initialize and train a Word2Vec model

>>> from gensim.models import Word2Vec
>>> sentences = [["cat", "say", "meow"], ["dog", "say", "woof"]]
>>>
>>> model = Word2Vec(sentences, min_count=1)
>>> say_vector = model['say']  # get vector for word
accuracy(**kwargs)
build_vocab(sentences, update=False, progress_per=10000, keep_raw_vocab=False, trim_rule=None, **kwargs)

Build vocabulary from a sequence of sentences (can be a once-only generator stream). Each sentence is an iterable of words (it can simply be a list of unicode strings).

Parameters:
  • sentences (iterable of iterables) – The sentences iterable can be simply a list of lists of tokens, but for larger corpora, consider an iterable that streams the sentences directly from disk/network. See BrownCorpus, Text8Corpus or LineSentence in word2vec module for such examples.
  • update (bool) – If true, the new words in sentences will be added to model’s vocab.
  • progress_per (int) – Indicates how many words to process before showing/updating the progress.
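
Examples

A minimal sketch of an initial vocabulary build followed by a later expansion (the toy sentences are illustrative):

>>> from gensim.models import Word2Vec
>>> sentences = [["cat", "say", "meow"], ["dog", "say", "woof"]]
>>>
>>> model = Word2Vec(min_count=1)
>>> model.build_vocab(sentences)
>>> model.build_vocab([["bird", "say", "tweet"]], update=True)  # add new words to the existing vocab
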
build_vocab_from_freq(word_freq, keep_raw_vocab=False, corpus_count=None, trim_rule=None, update=False)

Build vocabulary from a dictionary of word frequencies, i.e. a mapping of words to their counts. Words must be unicode strings.

Parameters:
  • word_freq (dict) – A dictionary mapping each word to its count.
  • keep_raw_vocab (bool) – If False, delete the raw vocabulary after the scaling is done, to free up RAM.
  • corpus_count (int) – Even if no corpus is provided, this argument can set corpus_count explicitly.
  • trim_rule (function) – Vocabulary trimming rule, specifies whether certain words should remain in the vocabulary, be trimmed away, or handled using the default (discard if word count < min_count). Can be None (min_count will be used, look to keep_vocab_item()), or a callable that accepts parameters (word, count, min_count) and returns either gensim.utils.RULE_DISCARD, gensim.utils.RULE_KEEP or gensim.utils.RULE_DEFAULT. Note: The rule, if given, is only used to prune vocabulary during build_vocab() and is not stored as part of the model.
  • update (bool) – If true, the new provided words in word_freq dict will be added to model’s vocab.

Examples

>>> from gensim.models import Word2Vec
>>>
>>> model = Word2Vec()
>>> model.build_vocab_from_freq({"Word1": 15, "Word2": 20})
clear_sims()

Removes all L2-normalized vectors for words from the model. You will have to recompute them using the init_sims() method.

cum_table
delete_temporary_training_data(replace_word_vectors_with_normalized=False)

Discard parameters that are used in training and scoring. Use if you’re sure you’re done training a model. If replace_word_vectors_with_normalized is set, forget the original vectors and only keep the normalized ones, which saves lots of memory.
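
Example (a minimal sketch of the typical call once training is completely finished):

>>> model.delete_temporary_training_data(replace_word_vectors_with_normalized=True)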

doesnt_match(**kwargs)

Deprecated. Use self.wv.doesnt_match() instead. Refer to the documentation for gensim.models.keyedvectors.WordEmbeddingsKeyedVectors.doesnt_match

estimate_memory(vocab_size=None, report=None)

Estimate required memory for a model using current settings and provided vocabulary size.

evaluate_word_pairs(**kwargs)

Deprecated. Use self.wv.evaluate_word_pairs() instead. Refer to the documentation for gensim.models.keyedvectors.WordEmbeddingsKeyedVectors.evaluate_word_pairs

get_latest_training_loss()
hashfxn
init_sims(replace=False)

init_sims() resides in KeyedVectors because it deals mainly with syn0/vectors; however, since syn1 is not an attribute of KeyedVectors, it has to be deleted in this class. The normalizing of syn0/vectors itself happens inside KeyedVectors.
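
Example (a minimal sketch; note that replace=True discards the original vectors to save memory, so no further training is possible afterwards):

>>> model.init_sims(replace=True)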

intersect_word2vec_format(fname, lockf=0.0, binary=False, encoding='utf8', unicode_errors='strict')

Merge the input-hidden weight matrix from the original C word2vec-tool format given, where it intersects with the current vocabulary. (No words are added to the existing vocabulary, but intersecting words adopt the file’s weights, and non-intersecting words are left alone.)

Parameters:
  • fname (str) – Path to the file containing the vectors in the word2vec C format.
  • binary (bool) – If True, the data will be read as binary word2vec format, else as plain text.
  • lockf (float) – Lock-factor value to be set for any imported word-vectors; the default value of 0.0 prevents further updating of the vector during subsequent training. Use 1.0 to allow further training updates of merged vectors.
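
Example (a hedged sketch; the file path is a placeholder, and the pre-trained file must use the same vector size as the model):

>>> model = Word2Vec(sentences, size=300, min_count=1)
>>> model.intersect_word2vec_format('/tmp/pretrained_vectors.bin', lockf=1.0, binary=True)  # merged words remain trainable
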
iter
layer1_size
classmethod load(*args, **kwargs)

Loads a previously saved Word2Vec model. Also see save().

Parameters:fname (str) – Path to the saved file.
Returns: The loaded model.
Return type: Word2Vec
classmethod load_word2vec_format(fname, fvocab=None, binary=False, encoding='utf8', unicode_errors='strict', limit=None, datatype=<type 'numpy.float32'>)

Deprecated. Use gensim.models.KeyedVectors.load_word2vec_format instead.

static log_accuracy()
min_count
most_similar(**kwargs)

Deprecated. Use self.wv.most_similar() instead. Refer to the documentation for gensim.models.keyedvectors.WordEmbeddingsKeyedVectors.most_similar

most_similar_cosmul(**kwargs)

Deprecated. Use self.wv.most_similar_cosmul() instead. Refer to the documentation for gensim.models.keyedvectors.WordEmbeddingsKeyedVectors.most_similar_cosmul

n_similarity(**kwargs)

Deprecated. Use self.wv.n_similarity() instead. Refer to the documentation for gensim.models.keyedvectors.WordEmbeddingsKeyedVectors.n_similarity

predict_output_word(context_words_list, topn=10)

Report the probability distribution of the center word given the context words as input to the trained model.

Parameters:
  • context_words_list – List of context words
  • topn (int) – Return topn words and their probabilities
Returns: A list of the topn (word, probability) tuples.
Return type: list of tuple
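
Example (a minimal sketch; the toy corpus is illustrative, and the method requires a model trained with negative sampling, which is the default):

>>> model = Word2Vec([["cat", "say", "meow"], ["dog", "say", "woof"]], min_count=1)
>>> model.predict_output_word(['say', 'meow'], topn=2)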

reset_from(other_model)

Borrow shareable pre-built structures (like vocab) from the other_model. Useful if testing multiple models in parallel on the same corpus.

sample
save(*args, **kwargs)

Save the model. This saved model can be loaded again using load(), which supports online training and getting vectors for vocabulary words.

Parameters:fname (str) – Path to the file.
save_word2vec_format(fname, fvocab=None, binary=False)

Deprecated. Use model.wv.save_word2vec_format instead.

score(sentences, total_sentences=1000000, chunksize=100, queue_factor=2, report_delay=1)

Score the log probability for a sequence of sentences (can be a once-only generator stream). Each sentence must be a list of unicode strings. This does not change the fitted model in any way (see Word2Vec.train() for that).

We have currently only implemented score for the hierarchical softmax scheme, so you need to have run word2vec with hs=1 and negative=0 for this to work.

Note that you should specify total_sentences; scoring more than this number of sentences will cause problems, but setting the value too high is inefficient.

See the article by [4] and the gensim demo at [5] for examples of how to use such scores in document classification.

[4] Taddy, Matt. Document Classification by Inversion of Distributed Language Representations, in Proceedings of the 2015 Conference of the Association of Computational Linguistics.
[5] https://github.com/piskvorky/gensim/blob/develop/docs/notebooks/deepir.ipynb
Parameters:
  • sentences (iterable of iterables) – The sentences iterable can be simply a list of lists of tokens, but for larger corpora, consider an iterable that streams the sentences directly from disk/network. See BrownCorpus, Text8Corpus or LineSentence in word2vec module for such examples.
  • total_sentences (int) – Count of sentences.
  • chunksize (int) – Chunksize of jobs
  • queue_factor (int) – Multiplier for size of queue (number of workers * queue_factor).
  • report_delay (float) – Seconds to wait before reporting progress.
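
Example (a minimal sketch; the toy corpus is illustrative, and hs=1, negative=0 are required as noted above):

>>> model = Word2Vec([["cat", "say", "meow"], ["dog", "say", "woof"]], hs=1, negative=0, min_count=1)
>>> model.score([["dog", "say", "meow"]])
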
similar_by_vector(**kwargs)

Deprecated. Use self.wv.similar_by_vector() instead. Refer to the documentation for gensim.models.keyedvectors.WordEmbeddingsKeyedVectors.similar_by_vector

similar_by_word(**kwargs)

Deprecated. Use self.wv.similar_by_word() instead. Refer to the documentation for gensim.models.keyedvectors.WordEmbeddingsKeyedVectors.similar_by_word

similarity(**kwargs)

Deprecated. Use self.wv.similarity() instead. Refer to the documentation for gensim.models.keyedvectors.WordEmbeddingsKeyedVectors.similarity

syn0_lockf
syn1
syn1neg
train(sentences, total_examples=None, total_words=None, epochs=None, start_alpha=None, end_alpha=None, word_count=0, queue_factor=2, report_delay=1.0, compute_loss=False, callbacks=())

Update the model’s neural weights from a sequence of sentences (can be a once-only generator stream). For Word2Vec, each sentence must be a list of unicode strings. (Subclasses may accept other examples.)

To support linear learning-rate decay from (initial) alpha to min_alpha, and accurate progress-percentage logging, either total_examples (count of sentences) or total_words (count of raw words in sentences) MUST be provided (if the corpus is the same as was provided to build_vocab(), the count of examples in that corpus will be available in the model’s corpus_count property).

To avoid common mistakes around the model’s ability to do multiple training passes itself, an explicit epochs argument MUST be provided. In the common and recommended case, where train() is only called once, the model’s cached iter value should be supplied as epochs value.

Parameters:
  • sentences (iterable of iterables) – The sentences iterable can be simply a list of lists of tokens, but for larger corpora, consider an iterable that streams the sentences directly from disk/network. See BrownCorpus, Text8Corpus or LineSentence in word2vec module for such examples.
  • total_examples (int) – Count of sentences.
  • total_words (int) – Count of raw words in sentences.
  • epochs (int) – Number of iterations (epochs) over the corpus.
  • start_alpha (float) – Initial learning rate.
  • end_alpha (float) – Final learning rate. Drops linearly from start_alpha.
  • word_count (int) – Count of words already trained. Set this to 0 for the usual case of training on all words in sentences.
  • queue_factor (int) – Multiplier for size of queue (number of workers * queue_factor).
  • report_delay (float) – Seconds to wait before reporting progress.
  • compute_loss (bool) – If True, computes and stores loss value which can be retrieved using model.get_latest_training_loss().
  • callbacks – List of callbacks that need to be executed/run at specific stages during training.

Examples

>>> from gensim.models import Word2Vec
>>> sentences = [["cat", "say", "meow"], ["dog", "say", "woof"]]
>>>
>>> model = Word2Vec(min_count=1)
>>> model.build_vocab(sentences)
>>> model.train(sentences, total_examples=model.corpus_count, epochs=model.iter)
wmdistance(**kwargs)

Deprecated. Use self.wv.wmdistance() instead. Refer to the documentation for gensim.models.keyedvectors.WordEmbeddingsKeyedVectors.wmdistance

class gensim.models.word2vec.Word2VecTrainables(vector_size=100, seed=1, hashfxn=<built-in function hash>)

Bases: gensim.utils.SaveLoad

classmethod load(fname, mmap=None)

Load a previously saved object (using save()) from file.

Parameters:
  • fname (str) – Path to file that contains needed object.
  • mmap (str, optional) – Memory-map option. If the object was saved with large arrays stored separately, you can load these arrays via mmap (shared memory) using mmap=’r’. If the file being loaded is compressed (either ‘.gz’ or ‘.bz2’), then mmap=None must be set.

See also

save()

Returns:Object loaded from fname.
Return type:object
Raises:IOError – When methods are called on instance (should be called from class).
prepare_weights(hs, negative, wv, update=False, vocabulary=None)

Build tables and model weights based on final vocabulary settings.

reset_weights(hs, negative, wv)

Reset all projection weights to an initial (untrained) state, but keep the existing vocabulary.

save(fname_or_handle, separately=None, sep_limit=10485760, ignore=frozenset([]), pickle_protocol=2)

Save the object to file.

Parameters:
  • fname_or_handle (str or file-like) – Path to output file or already opened file-like object. If the object is a file handle, no special array handling will be performed, all attributes will be saved to the same file.
  • separately (list of str or None, optional) – If None, automatically detect large numpy/scipy.sparse arrays in the object being stored, and store them in separate files. This avoids pickle memory errors and allows mmap’ing large arrays back on load efficiently. If a list of str, these attributes will be stored in separate files; the automatic check is not performed in this case.
  • sep_limit (int) – Limit for automatic separation.
  • ignore (frozenset of str) – Attributes that shouldn’t be serialized/stored.
  • pickle_protocol (int) – Protocol number for pickle.

See also

load()

seeded_vector(seed_string, vector_size)

Create one ‘random’ vector (but deterministic, based on seed_string).

update_weights(hs, negative, wv)

Copy all the existing weights, and reset the weights for the newly added vocabulary.

class gensim.models.word2vec.Word2VecVocab(max_vocab_size=None, min_count=5, sample=0.001, sorted_vocab=True, null_word=0)

Bases: gensim.utils.SaveLoad

add_null_word(wv)
create_binary_tree(wv)

Create a binary Huffman tree using stored vocabulary word counts. Frequent words will have shorter binary codes. Called internally from build_vocab().

classmethod load(fname, mmap=None)

Load a previously saved object (using save()) from file.

Parameters:
  • fname (str) – Path to file that contains needed object.
  • mmap (str, optional) – Memory-map option. If the object was saved with large arrays stored separately, you can load these arrays via mmap (shared memory) using mmap=’r’. If the file being loaded is compressed (either ‘.gz’ or ‘.bz2’), then mmap=None must be set.

See also

save()

Returns:Object loaded from fname.
Return type:object
Raises:IOError – When methods are called on instance (should be called from class).
make_cum_table(wv, power=0.75, domain=2147483647)

Create a cumulative-distribution table using stored vocabulary word counts for drawing random words in the negative-sampling training routines.

To draw a word index, choose a random integer up to the maximum value in the table (cum_table[-1]), then find that integer’s sorted insertion point (as if by bisect_left or ndarray.searchsorted()). That insertion point is the drawn index, coming up in proportion equal to the increment at that slot.

Called internally from build_vocab().
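
A hedged sketch of the draw described above (draw_word_index is an illustrative helper, not part of gensim; cum_table is the array built by this method):

import numpy as np

def draw_word_index(cum_table, rng=np.random):
    # choose a random integer up to the table maximum, then take its sorted
    # insertion point; each slot comes up in proportion to its increment
    target = rng.randint(cum_table[-1])
    return int(np.searchsorted(cum_table, target))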

prepare_vocab(hs, negative, wv, update=False, keep_raw_vocab=False, trim_rule=None, min_count=None, sample=None, dry_run=False)

Apply vocabulary settings for min_count (discarding less-frequent words) and sample (controlling the downsampling of more-frequent words).

Calling with dry_run=True will only simulate the provided settings and report the size of the retained vocabulary, effective corpus length, and estimated memory requirements. Results are both printed via logging and returned as a dict.

Delete the raw vocabulary after the scaling is done to free up RAM, unless keep_raw_vocab is set.

save(fname_or_handle, separately=None, sep_limit=10485760, ignore=frozenset([]), pickle_protocol=2)

Save the object to file.

Parameters:
  • fname_or_handle (str or file-like) – Path to output file or already opened file-like object. If the object is a file handle, no special array handling will be performed, all attributes will be saved to the same file.
  • separately (list of str or None, optional) – If None, automatically detect large numpy/scipy.sparse arrays in the object being stored, and store them in separate files. This avoids pickle memory errors and allows mmap’ing large arrays back on load efficiently. If a list of str, these attributes will be stored in separate files; the automatic check is not performed in this case.
  • sep_limit (int) – Limit for automatic separation.
  • ignore (frozenset of str) – Attributes that shouldn’t be serialized/stored.
  • pickle_protocol (int) – Protocol number for pickle.

See also

load()

scan_vocab(sentences, progress_per=10000, trim_rule=None)

Do an initial scan of all words appearing in sentences.

sort_vocab(wv)

Sort the vocabulary so the most frequent words have the lowest indexes.

gensim.models.word2vec.score_cbow_pair(model, word, l1)
gensim.models.word2vec.score_sg_pair(model, word, word2)
gensim.models.word2vec.train_cbow_pair(model, word, input_word_indices, l1, alpha, learn_vectors=True, learn_hidden=True, compute_loss=False, context_vectors=None, context_locks=None, is_ft=False)
gensim.models.word2vec.train_sg_pair(model, word, context_index, alpha, learn_vectors=True, learn_hidden=True, context_vectors=None, context_locks=None, compute_loss=False, is_ft=False)