
models.coherencemodel – Topic coherence pipeline


Module for calculating topic coherence in Python. This is an implementation of the four-stage topic coherence pipeline from the paper [1]. The four stages are:

Segmentation -> Probability Estimation -> Confirmation Measure -> Aggregation.

Implementing this pipeline lets the user, in essence, assemble a coherence measure of their own choosing by selecting a method for each stage of the pipeline.
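As an illustration of the four stages, here is a minimal pure-Python sketch of the 'u_mass' measure on toy data; the documents, topic and helper names are ours for illustration, not part of the gensim API.

```python
import math

# Toy tokenized documents, stored as sets since u_mass uses boolean document counts.
docs = [
    {'human', 'computer', 'system', 'interface'},
    {'system', 'human', 'eps'},
    {'graph', 'minors', 'trees'},
    {'graph', 'trees'},
]
topic = ['system', 'human', 'graph']  # top-N words of a single topic

# 1. Segmentation: u_mass pairs each topic word with every preceding word.
segments = [(topic[i], topic[j]) for i in range(1, len(topic)) for j in range(i)]

# 2. Probability estimation: boolean document frequencies.
def doc_freq(*words):
    return sum(1 for d in docs if all(w in d for w in words))

# 3. Confirmation measure: smoothed log conditional probability.
scores = [math.log((doc_freq(w, w_prev) + 1) / doc_freq(w_prev))
          for w, w_prev in segments]

# 4. Aggregation: arithmetic mean over all segment scores.
coherence = sum(scores) / len(scores)
```

The same four choices (one-preceding segmentation, boolean document counts, smoothed log conditional probability, arithmetic mean) are what the string 'u_mass' selects in the real pipeline.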

[1] Michael Röder, Andreas Both and Alexander Hinneburg. Exploring the space of topic coherence measures.

class gensim.models.coherencemodel.CoherenceModel(model=None, topics=None, texts=None, corpus=None, dictionary=None, window_size=None, keyed_vectors=None, coherence='c_v', topn=20, processes=-1)

Bases: gensim.interfaces.TransformationABC

Objects of this class allow for building and maintaining a model for topic coherence.

The main methods are:

  1. the constructor, which initializes the four-stage pipeline by accepting a coherence measure,
  2. the get_coherence() method, which returns the topic coherence.

Pipeline phases can also be executed individually. Methods for doing this are:

  1. segment_topics(), which performs segmentation of the given topics into their comparison sets.
  2. estimate_probabilities(), which accumulates word occurrence stats from the given corpus or texts.
    The output of this is also cached on the CoherenceModel, so calling this method can be used as a precomputation step for the next phase.
  3. get_coherence_per_topic(), which uses the segmented topics and estimated probabilities to compute
    the coherence of each topic. This output can be used to rank topics in order of most coherent to least. Such a ranking is useful if the intended use case of a topic model is document exploration by a human. It is also useful for filtering out incoherent topics (keep top-n from ranked list).
  4. aggregate_measures(topic_coherences), which uses the pipeline’s aggregation method to compute
    the overall coherence from the topic coherences.
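The ranking and filtering described in step 3 reduces to sorting the per-topic scores; a small sketch with hypothetical coherence values:

```python
# Hypothetical per-topic scores, e.g. as returned by get_coherence_per_topic().
topic_coherences = [-1.2, -0.4, -3.1, -0.7]

# Rank topic indices from most to least coherent (higher is better),
# then keep only the top n topics.
ranked = sorted(range(len(topic_coherences)),
                key=lambda i: topic_coherences[i], reverse=True)
top_n = ranked[:2]  # indices of the 2 most coherent topics
```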

One way of using this feature is through providing a trained topic model. A dictionary has to be explicitly provided if the model does not contain a dictionary already:

cm = CoherenceModel(model=tm, corpus=corpus, coherence='u_mass')  # tm is the trained topic model

Another way of using this feature is through providing tokenized topics such as:

topics = [['human', 'computer', 'system', 'interface'],
          ['graph', 'minors', 'trees', 'eps']]
# note that a dictionary has to be provided.
cm = CoherenceModel(topics=topics, corpus=corpus, dictionary=dictionary, coherence='u_mass')

Model persistence is achieved via the load() and save() methods.

  • model – Pre-trained topic model. Should be provided if topics is not provided. Currently supports LdaModel, the LdaMallet wrapper and the LdaVowpalWabbit wrapper. Use the topics parameter to plug in a model that is not yet supported.
  • topics

    List of tokenized topics. If this is preferred over model, dictionary should be provided, e.g.:

    topics = [['human', 'machine', 'computer', 'interface'],
               ['graph', 'trees', 'binary', 'widths']]
  • texts

    Tokenized texts. Needed for coherence models that use a sliding-window-based probability estimator, e.g.:

    texts = [['system', 'human', 'system', 'eps'],
             ['user', 'response', 'time'],
             ['graph', 'trees'],
             ['graph', 'minors', 'trees'],
             ['graph', 'minors', 'survey']]
  • corpus – Gensim document corpus.
  • dictionary – Gensim dictionary mapping of ids to words, used to create the corpus. If model.id2word is present, this is not needed. If both are provided, dictionary will be used.
  • window_size

    Size of the window to be used for coherence measures that use a boolean sliding window as their probability estimator. For 'u_mass' this doesn't matter. If left None, the default window sizes are used:

    'c_v': 110, 'c_uci': 10, 'c_npmi': 10
  • coherence – Coherence measure to be used. Supported values are: 'u_mass', 'c_v', 'c_uci' (also popularly known as c_pmi) and 'c_npmi'. For 'u_mass', corpus should be provided; if texts is provided instead, it will be converted to a corpus using the dictionary. For 'c_v', 'c_uci' and 'c_npmi', texts should be provided (corpus is not needed).
  • topn – Integer corresponding to the number of top words to be extracted from each topic.
  • processes – number of processes to use for probability estimation phase; any value less than 1 will be interpreted to mean num_cpus - 1; default is -1.
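For intuition about the boolean sliding-window estimator that window_size controls, here is a simplified sketch over tokenized texts; gensim's actual accumulator differs in implementation details, and the names here are ours.

```python
from collections import Counter

texts = [['user', 'response', 'time', 'user'],
         ['graph', 'minors', 'trees']]
window_size = 2

# Slide a fixed-size window over each text; each window is a set of words
# (boolean: a word either occurs in the window or it doesn't).
windows = []
for text in texts:
    if len(text) <= window_size:
        windows.append(set(text))
    else:
        windows.extend(set(text[i:i + window_size])
                       for i in range(len(text) - window_size + 1))

counts = Counter(word for window in windows for word in window)

def p_boolean(word):
    # Fraction of windows that contain the word at least once.
    return counts[word] / len(windows)
```

Larger windows make distant words count as co-occurring, which is why each measure ('c_v', 'c_uci', 'c_npmi') has its own tuned default.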

aggregate_measures(topic_coherences)

Aggregate the individual topic coherence measures using the pipeline's aggregation function.


compare_model_topics(model_topics)

Perform the coherence evaluation for each of the models.

This first precomputes the probabilities once, then evaluates coherence for each model.

Since we have already precomputed the probabilities, this simply involves using the accumulated stats in the CoherenceModel to perform the evaluations, which should be pretty quick.

Parameters: model_topics (list) – list of lists of top-N words, one per model.
Returns: list of (avg_topic_coherences, avg_coherence) pairs. These are the coherence values per topic and the overall model coherence.
Return type: list

estimate_probabilities(segmented_topics=None)

Accumulate word occurrences and co-occurrences from texts or corpus using the optimal method for the chosen coherence metric. This operation may take quite some time for the sliding-window-based coherence methods.

classmethod for_models(models, dictionary, topn=20, **kwargs)

Initialize a CoherenceModel with estimated probabilities for all of the given models.

Parameters: models (list) – List of models to evaluate coherence of; the only requirement is that each has a get_topics method.
classmethod for_topics(topics_as_topn_terms, **kwargs)

Initialize a CoherenceModel with estimated probabilities for all of the given topics.

Parameters: topics_as_topn_terms (list of lists) – Each element in the top-level list should be the list of topics for a model. The topics for the model should be a list of top-N words, one per topic.
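The expected nesting can be illustrated with a hypothetical two-model input; the word lists below are ours, chosen to match the toy vocabulary used elsewhere on this page.

```python
# Input shape for for_topics(): one entry per model, each entry a list of
# that model's topics, each topic a list of its top-N words.
topics_as_topn_terms = [
    [['human', 'computer', 'system'], ['graph', 'trees', 'minors']],  # model 1
    [['system', 'eps', 'interface'], ['minors', 'survey', 'graph']],  # model 2
]
```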

get_coherence()

Return coherence value based on pipeline parameters.

get_coherence_per_topic(segmented_topics=None, with_std=False, with_support=False)

Return list of coherence values for each topic based on pipeline parameters.

classmethod load(fname, mmap=None)

Load a previously saved object (using save()) from file.

  • fname (str) – Path to file that contains needed object.
  • mmap (str, optional) – Memory-map option. If the object was saved with large arrays stored separately, you can load these arrays via mmap (shared memory) using mmap='r'. If the file being loaded is compressed (either '.gz' or '.bz2'), then mmap=None must be set.

Returns: Object loaded from fname.
Return type: object
Raises: IOError – When called on an object instance rather than on the class (load() should be called from the class).
save(fname_or_handle, separately=None, sep_limit=10485760, ignore=frozenset([]), pickle_protocol=2)

Save the object to file.

  • fname_or_handle (str or file-like) – Path to output file or already opened file-like object. If the object is a file handle, no special array handling will be performed, all attributes will be saved to the same file.
  • separately (list of str or None, optional) – If None, automatically detect large numpy/scipy.sparse arrays in the object being stored, and store them in separate files. This avoids pickle memory errors and allows mmap'ing large arrays back on load efficiently. If a list of str, these attributes will be stored in separate files; the automatic check is not performed in this case.
  • sep_limit (int) – Limit for automatic separation: arrays smaller than this (in bytes) are not stored separately.
  • ignore (frozenset of str) – Attributes that shouldn't be serialized/stored.
  • pickle_protocol (int) – Protocol number for pickle.

static top_topics_as_word_lists(dictionary, topn=20)