models.ldamulticore – parallelized Latent Dirichlet Allocation

Latent Dirichlet Allocation (LDA) in Python, using all CPU cores to parallelize and speed up model training.

The parallelization uses multiprocessing; in case this doesn’t work for you for some reason, try the gensim.models.ldamodel.LdaModel class which is an equivalent, but more straightforward and single-core implementation.

The training algorithm:

  • is streamed: training documents may come in sequentially, no random access is required,
  • runs in constant memory w.r.t. the number of documents: the size of the training corpus does not affect the memory footprint, so corpora larger than RAM can be processed (see the streaming sketch below).
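
A minimal streaming sketch (assuming one plain-text document per line in a hypothetical file my_corpus.txt): any re-iterable object that yields bag-of-words vectors can serve as the training corpus, so nothing needs to be held in RAM.

>>> from gensim.corpora import Dictionary
>>> from gensim.models import LdaMulticore
>>>
>>> class MyCorpus(object):
...     """Stream bag-of-words vectors straight from disk; nothing is held in RAM."""
...     def __init__(self, path, dictionary):
...         self.path, self.dictionary = path, dictionary
...     def __iter__(self):
...         with open(self.path) as fin:
...             for line in fin:  # one document per line (hypothetical layout)
...                 yield self.dictionary.doc2bow(line.lower().split())
...
>>> dictionary = Dictionary(line.lower().split() for line in open('my_corpus.txt'))
>>> stream = MyCorpus('my_corpus.txt', dictionary)
>>> lda = LdaMulticore(stream, id2word=dictionary, num_topics=10, workers=3)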

Wall-clock performance on the English Wikipedia (2G corpus positions, 3.5M documents, 100K features, 0.54G non-zero entries in the final bag-of-words matrix), requesting 100 topics:

algorithm                                            training time
-------------------------------------------------------------------
LdaMulticore(workers=1)                              2h30m
LdaMulticore(workers=2)                              1h24m
LdaMulticore(workers=3)                              1h6m
old LdaModel()                                       3h44m
simply iterating over input corpus (= I/O overhead)  20m

(Measured on an i7 server with 4 physical cores, so the optimal setting is workers=3, one less than the number of cores.)

This module allows both LDA model estimation from a training corpus and inference of topic distribution on new, unseen documents. The model can also be updated with new documents for online training.

The core estimation code is based on the onlineldavb.py script by M. Hoffman [1], see Hoffman, Blei, Bach: Online Learning for Latent Dirichlet Allocation, NIPS 2010.

[1] http://www.cs.princeton.edu/~mdhoffma

class gensim.models.ldamulticore.LdaMulticore(corpus=None, num_topics=100, id2word=None, workers=None, chunksize=2000, passes=1, batch=False, alpha='symmetric', eta=None, decay=0.5, offset=1.0, eval_every=10, iterations=50, gamma_threshold=0.001, random_state=None, minimum_probability=0.01, minimum_phi_value=0.01, per_word_topics=False)

Bases: gensim.models.ldamodel.LdaModel

The constructor estimates Latent Dirichlet Allocation model parameters based on a training corpus:

>>> from gensim.models import LdaMulticore
>>> lda = LdaMulticore(corpus, num_topics=10)

You can then infer topic distributions on new, unseen documents, with

>>> doc_lda = lda[doc_bow]

The model can be updated (trained) with new documents via

>>> lda.update(other_corpus)

Model persistence is achieved through its load/save methods.

If corpus is given, training starts from that iterable straight away. If it is not given, the model is left untrained (presumably because you want to call update() manually later).

num_topics is the number of requested latent topics to be extracted from the training corpus.

id2word is a mapping from word ids (integers) to words (strings). It is used to determine the vocabulary size, as well as for debugging and topic printing.

workers is the number of extra processes to use for parallelization. Uses all available cores by default: workers=cpu_count()-1. Note: for hyper-threaded CPUs, cpu_count() returns the number of logical cores rather than physical ones, so set workers explicitly to the number of your real cores (not hyperthreads) minus one, for optimal performance.

If batch is not set, perform online training by updating the model once every workers * chunksize documents (online training). Otherwise, run batch LDA, updating model only once at the end of each full corpus pass.
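
For illustration, the two modes side by side (a sketch; corpus stands for the iterable training corpus described above):

>>> # online (default): the model is updated once every workers * chunksize documents
>>> lda_online = LdaMulticore(corpus, num_topics=100, workers=3, chunksize=2000)
>>> # batch: a single model update at the end of each of the 5 full corpus passes
>>> lda_batch = LdaMulticore(corpus, num_topics=100, workers=3, batch=True, passes=5)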

alpha and eta are hyperparameters that affect sparsity of the document-topic (theta) and topic-word (lambda) distributions. Both default to a symmetric 1.0/num_topics prior.

alpha can be set to an explicit array (a prior of your choice). It also supports the special values ‘asymmetric’ and ‘auto’: the former uses a fixed normalized asymmetric 1.0/topicno prior, the latter learns an asymmetric prior directly from your data.

eta can be a scalar for a symmetric prior over topic/word distributions, or a matrix of shape num_topics x num_words, which can be used to impose asymmetric priors over the word distribution on a per-topic basis. This may be useful if you want to seed certain topics with particular words by boosting the priors for those words.
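
For example, topic seeding through eta might look like this (a sketch, assuming a gensim Dictionary named dictionary as in the streaming sketch above; the seed word ‘economy’ is purely illustrative):

>>> import numpy as np
>>> num_topics = 100
>>> eta = np.full((num_topics, len(dictionary)), 1.0 / num_topics)  # symmetric baseline prior
>>> eta[0, dictionary.token2id['economy']] *= 1000  # boost the prior of the seed word in topic 0
>>> lda = LdaMulticore(corpus, id2word=dictionary, num_topics=num_topics, eta=eta)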

eval_every controls perplexity logging: calculate and log a perplexity estimate from the latest mini-batch once every eval_every documents. Set it to None to disable perplexity estimation (faster), or to 0 to evaluate perplexity only once, at the end of each corpus pass.

decay and offset parameters are the same as Kappa and Tau_0 in Hoffman et al, respectively.
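
(In that notation, the weight given to each online update is rho_t = (offset + t)^(-decay), so a larger offset down-weights early iterations and decay determines how quickly old statistics are forgotten; see also the convergence note under update() below.)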

random_state can be a numpy.random.RandomState object, or a seed used to create one.

Example:

>>> lda = LdaMulticore(corpus, id2word=id2word, num_topics=100)  # train model
>>> print(lda[doc_bow]) # get topic probability distribution for a document
>>> lda.update(corpus2) # update the LDA model with additional documents
>>> print(lda[doc_bow])
bound(corpus, gamma=None, subsample_ratio=1.0)

Estimate the variational bound of documents from corpus: E_q[log p(corpus)] - E_q[log q(corpus)]

Parameters:
  • corpus – documents to infer variational bounds from.
  • gamma – the variational parameters on topic weights for each corpus document (=2d matrix=what comes out of inference()). If not supplied, will be inferred from the model.
  • subsample_ratio (float) – If corpus is a sample of the whole corpus, pass this to inform on what proportion of the corpus it represents. This is used as a multiplicative factor to scale the likelihood appropriately.
Returns:

The variational bound score calculated.

clear()

Clear model state (free up some memory). Used in the distributed algo.

diff(other, distance='kullback_leibler', num_words=100, n_ann_terms=10, diagonal=False, annotation=True, normed=True)

Calculate the per-topic difference between this model and other, where other is another instance of LdaMulticore or LdaModel.

Parameters:
  • other – the model to compare against.
  • distance – function applied to measure the difference between any topic pair. Available values: kullback_leibler, hellinger, jaccard and jensen_shannon.
  • num_words – number of the most relevant words used if distance == jaccard (also used for annotation).
  • n_ann_terms – maximum number of words in the intersection/symmetric difference between topics (used for annotation).
  • diagonal – set to True if the difference is required only between identical topic numbers (returns the diagonal of the diff matrix).
  • annotation – whether the intersection or difference of words between two topics should be returned.
  • normed – if True, the matrix Z will be normalized.
Returns:

A matrix Z with shape (m1.num_topics, m2.num_topics), where Z[i][j] is the difference between topic_i and topic_j, plus (if annotation is True) an annotation matrix with shape (m1.num_topics, m2.num_topics, 2, None), where annotation[i][j] = [[int_1, int_2, …], [diff_1, diff_2, …]]: int_k is a word from the intersection of topic_i and topic_j, and diff_l is a word from their symmetric difference.

Example:

>>> m1, m2 = LdaMulticore.load(path_1), LdaMulticore.load(path_2)
>>> mdiff, annotation = m1.diff(m2)
>>> print(mdiff) # get matrix with difference for each topic pair from `m1` and `m2`
>>> print(annotation) # get array with positive/negative words for each topic pair from `m1` and `m2`
do_estep(chunk, state=None)

Perform inference on a chunk of documents, and accumulate the collected sufficient statistics in state (or self.state if None).

do_mstep(rho, other, extra_pass=False)

M step: use linear interpolation between the existing topics and collected sufficient statistics in other to update the topics.

get_document_topics(bow, minimum_probability=None, minimum_phi_value=None, per_word_topics=False)
Parameters:
  • bow (list) – Bag-of-words representation of the document to get topics for.
  • minimum_probability (float) – Ignore topics with probability below this value (None by default). If set to None, a value of 1e-8 is used to prevent 0s.
  • per_word_topics (bool) – If True, also return a list of topics sorted in descending order of likelihood for each word, together with a list of word_ids and each word's corresponding topics' phi values, multiplied by the feature length (i.e. word count).
  • minimum_phi_value (float) – if per_word_topics is True, this represents a lower bound on the term probabilities that are included (None by default). If set to None, a value of 1e-8 is used to prevent 0s.
Returns:

topic distribution for the given document bow, as a list of (topic_id, topic_probability) 2-tuples.
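
A usage sketch (assuming a trained lda and a bag-of-words vector doc_bow as in the constructor example; with per_word_topics=True the per-word topic assignments and phi values described above are returned as additional lists):

>>> doc_topics = lda.get_document_topics(doc_bow, minimum_probability=0.05)
>>> doc_topics, word_topics, word_phis = lda.get_document_topics(doc_bow, per_word_topics=True)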

get_term_topics(word_id, minimum_probability=None)
Parameters:
  • word_id (int) – ID of the word to get topic probabilities for.
  • minimum_probability (float) – Only include topic probabilities above this value (None by default). If set to None, use 1e-8 to prevent including 0s.
Returns:

The most likely topics for the given word. Each topic is represented as a tuple of (topic_id, term_probability).

Return type:

list
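
For example, to look up the topics of a particular token (a sketch, assuming a gensim Dictionary named dictionary; the token itself is hypothetical):

>>> word_id = dictionary.token2id['economy']
>>> lda.get_term_topics(word_id, minimum_probability=0.01)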

get_topic_terms(topicid, topn=10)
Parameters: topn (int) – Only return 2-tuples for the topn most probable words (ignore the rest).
Returns: (word_id, probability) 2-tuples for the most probable words in topic with id topicid.
Return type: list
get_topics()
Returns: num_topics x vocabulary_size array of floats which represents the term-topic matrix learned during inference.
Return type: np.ndarray
inference(chunk, collect_sstats=False)

Given a chunk of sparse document vectors, estimate gamma (parameters controlling the topic weights) for each document in the chunk.

This function does not modify the model (it is read-only, aka const). The whole input chunk of documents is assumed to fit in RAM; chunking of a large corpus must be done earlier in the pipeline.

If collect_sstats is True, also collect sufficient statistics needed to update the model’s topic-word distributions, and return a 2-tuple (gamma, sstats). Otherwise, return (gamma, None). gamma is of shape len(chunk) x self.num_topics.

Avoids computing the phi variational parameter directly using the optimization presented in Lee, Seung: Algorithms for non-negative matrix factorization, NIPS 2001.
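
A usage sketch (assuming a trained lda, a gensim Dictionary named dictionary, and a small in-memory list of tokenized documents; the names are illustrative):

>>> chunk = [dictionary.doc2bow(tokens) for tokens in tokenized_docs]
>>> gamma, _ = lda.inference(chunk)                            # gamma has shape len(chunk) x num_topics
>>> gamma, sstats = lda.inference(chunk, collect_sstats=True)  # also collect sufficient statistics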

init_dir_prior(prior, name)
load(fname, *args, **kwargs)

Load a previously saved object from file (also see save).

Large arrays can be memmap’ed back as read-only (shared memory) by setting mmap=’r’:

>>> LdaModel.load(fname, mmap='r')
log_perplexity(chunk, total_docs=None)

Calculate and return the per-word likelihood bound, using the chunk of documents as an evaluation corpus. Also output the calculated statistics, including perplexity=2^(-bound), to the log at INFO level.

print_topic(topicno, topn=10)

Return a single topic as a formatted string. See show_topic() for parameters.

>>> lsimodel.print_topic(10, topn=5)
'-0.340 * "category" + 0.298 * "$M$" + 0.183 * "algebra" + -0.174 * "functor" + -0.168 * "operator"'
print_topics(num_topics=20, num_words=10)

Alias for show_topics() that logs the num_words most probable words for the first num_topics topics. Set num_topics=-1 to print all topics.

save(fname, ignore=('state', 'dispatcher'), separately=None, *args, **kwargs)

Save the model to file.

Large internal arrays may be stored into separate files, with fname as prefix.

separately can be used to define which arrays should be stored in separate files.

The ignore parameter can be used to define which variables should be ignored, i.e. left out from the pickled LDA model. By default the internal state is ignored, as it uses its own serialisation rather than the one provided by LdaModel. The state and dispatcher will be added to any ignore parameter you define.

Note: do not save as a compressed file if you intend to load the file back with mmap.

Note: If you intend to use models across Python 2/3 versions there are a few things to keep in mind:

  1. The pickled Python dictionaries will not work across Python versions.
  2. The save method does not automatically save all numpy arrays separately, only those that exceed sep_limit set in gensim.utils.SaveLoad.save. The main concern here is the alpha array if, for instance, alpha='auto' is used.

Please refer to the wiki recipes section (https://github.com/piskvorky/gensim/wiki/Recipes-&-FAQ#q9-how-do-i-load-a-model-in-python-3-that-was-trained-and-saved-using-python-2) for an example on how to work around these issues.
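
A typical save/load round-trip under these constraints might look like this (the file path is hypothetical; the model is saved uncompressed so its large arrays can be memory-mapped on load):

>>> lda.save('/tmp/lda_wiki.model')
>>> lda = LdaMulticore.load('/tmp/lda_wiki.model', mmap='r')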

show_topic(topicid, topn=10)
Parameters: topn (int) – Only return 2-tuples for the topn most probable words (ignore the rest).
Returns: list of (word, probability) 2-tuples for the most probable words in topic topicid.
Return type: list
show_topics(num_topics=10, num_words=10, log=False, formatted=True)
Parameters:
  • num_topics (int) – show results for first num_topics topics. Unlike LSA, there is no natural ordering between the topics in LDA. The returned num_topics <= self.num_topics subset of all topics is therefore arbitrary and may change between two LDA training runs.
  • num_words (int) – include top num_words with highest probabilities in topic.
  • log (bool) – If True, log output in addition to returning it.
  • formatted (bool) – If True, format topics as strings, otherwise return them as (word, probability) 2-tuples.
Returns:

num_words most significant words for num_topics number of topics (10 words for top 10 topics, by default).

Return type:

list

sync_state()
top_topics(corpus=None, texts=None, dictionary=None, window_size=None, coherence='u_mass', topn=20, processes=-1)

Calculate the coherence for each topic; the default is the ‘u_mass’ coherence.

See the gensim.models.CoherenceModel constructor for more info on the parameters and the different coherence metrics.

Returns: tuples of (topic_repr, coherence_score), where topic_repr is a list of representations of the topn terms for the topic. The terms are represented as tuples of (membership_in_topic, token). The coherence_score is a float.
Return type: list
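
For instance, to rank topics by coherence and compute an average score (a sketch, assuming the training corpus from above):

>>> top = lda.top_topics(corpus=corpus, coherence='u_mass', topn=10)
>>> avg_coherence = sum(score for _, score in top) / len(top)
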
update(corpus, chunks_as_numpy=False)

Train the model with new documents, by EM-iterating over corpus until the topics converge (or until the maximum number of allowed iterations is reached). corpus must be an iterable (a repeatable stream of documents).

The E-step is distributed into the several processes.

This update also supports updating an already trained model (self) with new documents from corpus; the two models are then merged in proportion to the number of old vs. new documents. This feature is still experimental for non-stationary input streams.

For stationary input (no topic drift in new documents), on the other hand, this equals the online update of Hoffman et al. and is guaranteed to converge for any decay in (0.5, 1.0].

update_alpha(gammat, rho)

Update parameters for the Dirichlet prior on the per-document topic weights alpha given the last gammat.

update_eta(lambdat, rho)

Update parameters for the Dirichlet prior on the per-topic word weights eta given the last lambdat.

gensim.models.ldamulticore.worker_e_step(input_queue, result_queue)

Perform E-step for each (chunk_no, chunk, model) 3-tuple from the input queue, placing the resulting state into the result queue.