models.lsi_dispatcher – Dispatcher for distributed LSI

Dispatcher process which orchestrates distributed LsiModel computations. Run this script only once, on the master node in your cluster.

Notes

The dispatcher expects to find worker scripts already running. Make sure you run as many workers as you need on your machines before launching the dispatcher.

Warning

Requires Pyro4 to be installed. The distributed version works only on a local network.

How to use distributed LsiModel

  1. Install needed dependencies (Pyro4)

    pip install gensim[distributed]
    
  2. Setup serialization (on each machine)

    export PYRO_SERIALIZERS_ACCEPTED=pickle
    export PYRO_SERIALIZER=pickle
    
  3. Run nameserver

    python -m Pyro4.naming -n 0.0.0.0 &
    
  4. Run workers (on each machine)

    python -m gensim.models.lsi_worker &
    
  5. Run dispatcher

    python -m gensim.models.lsi_dispatcher &
    
  6. Run LsiModel in distributed mode

    >>> from gensim.test.utils import common_corpus, common_dictionary
    >>> from gensim.models import LsiModel
    >>>
    >>> model = LsiModel(common_corpus, id2word=common_dictionary, distributed=True)
    

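Once training completes, the distributed model behaves like any other LsiModel and can be inspected locally, for example (the topic and word counts below are purely illustrative):

    >>> model.print_topics(num_topics=2, num_words=3)
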
Command line arguments

...
positional arguments:
  maxsize     Maximum number of jobs to be kept pre-fetched in the queue.

optional arguments:
  -h, --help  show this help message and exit
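
For example, to start the dispatcher with at most 10 jobs pre-fetched in the queue (the value 10 is chosen here purely for illustration):

    python -m gensim.models.lsi_dispatcher 10 &
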
class gensim.models.lsi_dispatcher.Dispatcher(maxsize=0)

Bases: object

Dispatcher object that communicates and coordinates individual workers.

Warning

There should never be more than one dispatcher running at any one time.

Partly initializes the dispatcher.

A full initialization (including initialization of the workers) requires a call to initialize().

Parameters: maxsize (int, optional) – Maximum number of jobs to be kept pre-fetched in the queue.
exit()

Terminate all registered workers and then the dispatcher.

getjob(worker_id)

Atomically pops a job from the queue.

Parameters: worker_id (int) – The worker that requested the job.
Returns: The corpus in BoW format.
Return type: iterable of iterable of (int, float)
getstate()

Merge projections from across all workers and get the final projection.

Returns: The current projection of the total model.
Return type: Projection
getworkers()

Get pyro URIs of all registered workers.

Returns: The pyro URIs for each worker.
Return type: list of URIs
initialize(**model_params)

Fully initializes the dispatcher and all its workers.

Parameters: **model_params – Keyword parameters used to initialize individual workers, see LsiModel.
Raises: RuntimeError – When no workers are found (the gensim.models.lsi_worker script must be run beforehand).
jobdone(*args, **kwargs)

Callback used by workers to notify when their job is done.

The job done event is logged and then control is asynchronously transferred back to the worker (which can then request another job). In this way, control flow basically oscillates between gensim.models.lsi_dispatcher.Dispatcher.jobdone() and gensim.models.lsi_worker.Worker.requestjob(); see the simplified sketch after this entry.

Parameters: workerid (int) – The ID of the worker that finished the job (used for logging).
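
The back-and-forth can be pictured with the following simplified, single-process sketch. This is illustrative only, not gensim's actual implementation: the real dispatcher and workers are separate processes communicating through Pyro4 proxies, and these toy classes are hypothetical stand-ins.

    # Hypothetical single-process illustration of the dispatcher/worker hand-off;
    # in gensim the same oscillation happens across processes via Pyro4.
    class ToyDispatcher:
        def __init__(self, jobs):
            self.jobs = list(jobs)

        def getjob(self, worker_id):
            # Pop the next pending job, or None when the queue is exhausted.
            return self.jobs.pop(0) if self.jobs else None

        def jobdone(self, worker):
            # Acknowledge completion, then hand control back to the worker
            # so it can immediately request its next job.
            worker.requestjob(self)

    class ToyWorker:
        def requestjob(self, dispatcher):
            job = dispatcher.getjob(worker_id=0)
            if job is None:
                return  # no more work
            # ... update the local projection with `job` here ...
            dispatcher.jobdone(self)

    ToyWorker().requestjob(ToyDispatcher([["job-1"], ["job-2"]]))
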
jobsdone()

Wrap _jobsdone, needed for remote access through proxies.

Returns: Number of jobs already completed.
Return type: int
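
Because jobsdone() is exposed for remote access, training progress can be polled from any machine on the network through a Pyro4 proxy. A minimal sketch, assuming the dispatcher registered itself in the nameserver under the name gensim.lsi_dispatcher (check your nameserver if yours differs) and the pickle serializer configured in the setup steps above:

    import Pyro4

    # Match the serializer used by the dispatcher and workers (see setup steps).
    Pyro4.config.SERIALIZER = 'pickle'

    # 'gensim.lsi_dispatcher' is the name the dispatcher is assumed to register under.
    dispatcher = Pyro4.Proxy('PYRONAME:gensim.lsi_dispatcher')
    print(dispatcher.jobsdone())  # number of jobs completed so far
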
putjob(job)

Atomically add a job to the queue.

Parameters: job (iterable of iterable of (int, float)) – The corpus in BoW format.
reset()

Re-initialize all workers for a new decomposition.