
corpora.sharded_corpus – Corpus stored in separate files


This module implements a corpus class that stores its data in separate files called “shards”. This is a compromise between speed (keeping the whole dataset in memory) and memory footprint (keeping the data on disk and reading from it on demand).

The corpus is intended for situations where you need to use your data as numpy arrays for some iterative processing (like training something using SGD, which usually involves heavy matrix multiplication).

class gensim.corpora.sharded_corpus.ShardedCorpus(output_prefix, corpus, dim=None, shardsize=4096, overwrite=False, sparse_serialization=False, sparse_retrieval=False, gensim=False)

Bases: gensim.corpora.indexedcorpus.IndexedCorpus

This corpus is designed for situations where you need to train a model on matrices, with a large number of iterations. (It should be faster than gensim’s other IndexedCorpus implementations for this use case; see the accompanying benchmark script. It should also serialize faster.)

The corpus stores its data in separate files called “shards”. This is a compromise between speed (keeping the whole dataset in memory) and memory footprint (keeping the data on disk and reading from it on demand). Persistence is done using the standard gensim load/save methods.


The dataset is read-only: unlike gensim’s Similarity class, which works similarly, there is currently no way of adding documents to the dataset.

You can use ShardedCorpus to serialize your data just like any other gensim corpus that implements serialization. However, because the data is saved as numpy 2-dimensional ndarrays (or scipy sparse matrices), you need to supply the dimension of your data to the corpus. (The dimension of word frequency vectors will typically be the size of the vocabulary, etc.)
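To illustrate why dim must be supplied, consider that a gensim-style sparse document (a list of (feature_id, value) tuples) can only be expanded into a fixed-width dense row when the total dimension is known; trailing zero features are otherwise ambiguous. A minimal sketch (doc_to_dense is an illustrative helper, not part of the gensim API):

```python
import numpy as np

def doc_to_dense(doc, dim):
    """Expand a gensim-style sparse document -- a list of
    (feature_id, value) tuples -- into a dense row of length dim.
    Without knowing dim, the number of trailing zeros is ambiguous."""
    row = np.zeros(dim, dtype=np.float64)
    for feature_id, value in doc:
        row[feature_id] = value
    return row

doc = [(0, 1.0), (3, 2.5)]       # feature 0 has weight 1.0, feature 3 has 2.5
row = doc_to_dense(doc, dim=6)
# row == [1.0, 0.0, 0.0, 2.5, 0.0, 0.0]
```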

>>> corpus = gensim.utils.mock_data()
>>> output_prefix = 'mydata.shdat'
>>> ShardedCorpus.serialize(output_prefix, corpus, dim=1000)

The output_prefix tells the ShardedCorpus where to put the data. Shards are saved as output_prefix.0, output_prefix.1, etc. All shards must be of the same size. The shards can be re-sized (which is essentially a re-serialization into new-size shards), but note that this operation will temporarily take twice as much disk space, because the old shards are not deleted until the new shards are safely in place.

After serializing the data, the corpus will then save itself to the file output_prefix.

On further initialization with the same output_prefix, the corpus will load the already built dataset unless the overwrite option is given. (A new object is “cloned” from the one saved to output_prefix previously.)

To retrieve data, you can load the corpus and use it like a list:

>>> sh_corpus = ShardedCorpus.load(output_prefix)
>>> batch = sh_corpus[100:150]

This will retrieve a numpy 2-dimensional array of 50 rows and 1000 columns (1000 was the dimension of the data we supplied to the corpus). To retrieve gensim-style sparse vectors, set the gensim property:

>>> sh_corpus.gensim = True
>>> batch = sh_corpus[100:150]

The batch will now be a generator of gensim sparse vectors.
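Conceptually, the gensim-mode conversion turns each dense row back into a list of (id, value) tuples with zeros omitted. A sketch of that conversion (dense_to_gensim is an illustrative helper, not the actual ShardedCorpus internals):

```python
import numpy as np

def dense_to_gensim(batch):
    """Convert each dense row of a 2-D batch into a gensim-style
    sparse vector: a list of (feature_id, value) tuples, zeros omitted."""
    for row in batch:
        yield [(i, float(v)) for i, v in enumerate(row) if v != 0]

batch = np.array([[1.0, 0.0, 2.5],
                  [0.0, 0.0, 0.0]])
vectors = list(dense_to_gensim(batch))
# vectors == [[(0, 1.0), (2, 2.5)], []]
```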

Since the corpus needs the data serialized in order to be able to operate, it will serialize data right away on initialization. Instead of calling ShardedCorpus.serialize(), you can just initialize and use the corpus right away:

>>> corpus = ShardedCorpus(output_prefix, corpus, dim=1000)
>>> batch = corpus[100:150]

ShardedCorpus also supports working with scipy sparse matrices, both during retrieval and during serialization. If you want to serialize your data as sparse matrices, set the sparse_serialization flag. For retrieving your data as sparse matrices, use the sparse_retrieval flag. (You can also retrieve densely serialized data as sparse matrices, for the sake of completeness, and vice versa.) By default, the corpus will retrieve numpy ndarrays even if it was serialized into sparse matrices.

>>> sparse_prefix = 'mydata.sparse.shdat'
>>> ShardedCorpus.serialize(sparse_prefix, corpus, dim=1000, sparse_serialization=True)
>>> sparse_corpus = ShardedCorpus.load(sparse_prefix)
>>> batch = sparse_corpus[100:150]
>>> type(batch)
<type 'numpy.ndarray'>
>>> sparse_corpus.sparse_retrieval = True
>>> batch = sparse_corpus[100:150]
>>> type(batch)
<class 'scipy.sparse.csr.csr_matrix'>

While you can change the sparse_retrieval attribute during the life of a ShardedCorpus object, you should definitely not touch sparse_serialization! Changing that attribute will not miraculously re-serialize the data in the requested format.

The CSR format is used for sparse data throughout.
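As a sketch of the on-the-fly conversion mentioned above (using scipy directly, outside of ShardedCorpus): densely stored data can be converted to CSR at retrieval time, and sparsely stored data back to an ndarray, at the cost of the conversion itself.

```python
import numpy as np
from scipy import sparse

dense = np.array([[1.0, 0.0, 2.0],
                  [0.0, 3.0, 0.0]])

# Dense serialization, sparse retrieval: convert on the fly (costs time).
as_csr = sparse.csr_matrix(dense)

# Sparse serialization, dense retrieval: the reverse conversion.
back_to_dense = as_csr.toarray()
```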

Internally, to retrieve data, the dataset keeps track of which shard is currently open and on a __getitem__ request, either returns an item from the current shard, or opens a new one. The shard size is constant, except for the last shard.
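The shard-tracking logic described above can be sketched as follows. This is a toy stand-in, not the actual ShardedCorpus class: shards are plain lists standing in for on-disk numpy arrays, and load_shard stands in for unpickling a shard file.

```python
class ShardCache:
    """Minimal sketch of the shard-tracking logic: keep one shard
    'live', and reload only when a request falls outside it."""

    def __init__(self, shards, shardsize):
        self.shards = shards          # pretend each entry lives in its own file
        self.shardsize = shardsize
        self.current_n = None         # index of the live shard
        self.current = None

    def load_shard(self, n):
        # Stand-in for unpickling shard n from disk.
        self.current_n = n
        self.current = self.shards[n]

    def __getitem__(self, offset):
        n = offset // self.shardsize  # which shard the offset falls into
        if n != self.current_n:       # only reload on a miss
            self.load_shard(n)
        return self.current[offset % self.shardsize]

cache = ShardCache(shards=[[0, 1, 2], [3, 4, 5]], shardsize=3)
```

Sequential access mostly hits the live shard, which is why forward iteration is cheap.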

Initializes the dataset. If output_prefix is not found, builds the shards.

Parameters:

  • output_prefix (str) –

    The absolute path to the file from which shard filenames should be derived. The individual shards will be saved as output_prefix.0, output_prefix.1, etc.

    The output_prefix path then works as the filename to which the ShardedCorpus object itself will be automatically saved. Normally, gensim corpora do not do this, but ShardedCorpus needs to remember several serialization settings: namely the shard size and whether it was serialized in dense or sparse format. By saving automatically, any new ShardedCorpus with the same output_prefix will be able to find the information about the data serialized with the given prefix.

    If you want to overwrite your data serialized with some output prefix, set the overwrite flag to True.

    Of course, you can save your corpus separately as well using the save() method.

  • corpus (gensim.interfaces.CorpusABC) – The source corpus from which to build the dataset.
  • dim (int) – Specify beforehand what the dimension of a dataset item should be. This is useful when initializing from a corpus that doesn’t advertise its dimension, or when it does and you want to check that the corpus matches the expected dimension. If dim is left unset and corpus does not provide its dimension in an expected manner, initialization will fail.
  • shardsize (int) – How many data points should be in one shard. More data per shard means less shard reloading but higher memory usage and vice versa.
  • overwrite (bool) – If set, will build dataset from given corpus even if output_prefix already exists.
  • sparse_serialization (bool) –

    If set, will save the data in a sparse form (as csr matrices). This is to speed up retrieval when you know you will be using sparse matrices.


    This property should not change during the lifetime of the dataset. (If you find out you need to change from a sparse to a dense representation, the best practice is to create another ShardedCorpus object.)
  • sparse_retrieval (bool) –

    If set, will retrieve data as sparse vectors (scipy csr matrices). If unset, will return ndarrays.

    Note that retrieval speed for this option depends on how the dataset was serialized. If sparse_serialization was set, then setting sparse_retrieval will be faster. However, if the two settings do not correspond, the conversion on the fly will slow the dataset down.

  • gensim (bool) – If set, will convert the output to gensim sparse vectors (list of tuples (id, value)) to make it behave like any other gensim corpus. This will slow the dataset down.

get_by_offset(offset)

As opposed to __getitem__, this method only accepts ints as offsets.


in_current(offset)

Determine whether the given offset falls within the current shard.


in_next(offset)

Determine whether the given offset falls within the next shard. This provides a small speedup: typically, we iterate through the data forward, so checking the next shard first can save considerable time with a very large number of small shards.
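These membership checks reduce to plain range arithmetic: shard n covers the half-open offset range [n * shardsize, (n + 1) * shardsize). A sketch with illustrative helpers (not the actual methods, which work on instance attributes):

```python
def in_current(offset, current_shard_n, shardsize):
    """True iff offset falls inside the live shard, which covers
    the half-open range [n * shardsize, (n + 1) * shardsize)."""
    start = current_shard_n * shardsize
    return start <= offset < start + shardsize

def in_next(offset, current_shard_n, shardsize):
    """Forward-iteration fast path: is the offset in shard n + 1?"""
    return in_current(offset, current_shard_n + 1, shardsize)
```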


Initialize by copying over attributes of another ShardedCorpus instance saved to the output_prefix given at __init__().

init_shards(output_prefix, corpus, shardsize=4096, dtype='float64')

Initialize shards from the corpus.

classmethod load(fname, mmap=None)

Load itself in clean state. mmap has no effect here.


Load (unpickle) the n-th shard as the “live” part of the dataset into the Dataset object.


Reset to no shard at all. Used for saving.


resize_shards(shardsize)

Re-process the dataset to a new shard size. This may take a long time. Note that you need extra disk space for this operation (we assume there is enough disk space for double the size of the dataset, and enough memory to hold data of both the old and the new shardsize at once).

Parameters: shardsize (int) – The new shard size.
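At its core, resizing is a re-chunking of the same documents into groups of the new size. A sketch of that regrouping on in-memory lists (illustrative only; the real operation streams shard files, which is why old and new shards briefly coexist on disk):

```python
def resize(shards, new_shardsize):
    """Flatten the old shards and regroup the items into chunks of
    new_shardsize. Only the last chunk may be shorter."""
    flat = [item for shard in shards for item in shard]
    return [flat[i:i + new_shardsize]
            for i in range(0, len(flat), new_shardsize)]

old = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]   # shardsize 4, short last shard
new = resize(old, new_shardsize=3)
# new == [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```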
save(*args, **kwargs)

Save itself (the wrapper) in clean state (after calling reset()) to the output_prefix file. If you wish to save to a different file, use the fname argument as the first positional arg.

static save_corpus(fname, corpus, id2word=None, progress_cnt=1000, metadata=False, **kwargs)

Implement a serialization interface. Do not call directly; use the serialize method instead.

Note that you might need some ShardedCorpus init parameters, most likely the dimension (dim). Again, pass these as kwargs to the serialize method.

All this thing does is initialize a ShardedCorpus from a corpus with the output_prefix argument set to the fname parameter of this method. The initialization of a ShardedCorpus takes care of serializing the data (in dense form) to shards.

Ignore the parameters id2word, progress_cnt and metadata. They currently do nothing and are here only to provide a method signature compatible with the superclass.

save_shard(shard, n=None, filename=None)

Pickle the given shard. If n is not given, will consider the shard a new one.

If filename is given, will use that file name instead of generating one.

classmethod serialize(serializer, fname, corpus, id2word=None, index_fname=None, progress_cnt=None, labels=None, metadata=False, **kwargs)

Iterate through the document stream corpus, saving the documents as a ShardedCorpus to fname.

Use this method instead of calling save_corpus directly. You may need to supply some kwargs that are used upon dataset creation (namely: dim, unless the dataset can infer the dimension from the given corpus).

Ignore the parameters id2word, index_fname, progress_cnt, labels and metadata. They currently do nothing and are here only to provide a method signature compatible with the superclass.


shard_by_offset(offset)

Determine which shard the given offset belongs to. If the offset is greater than the number of available documents, raises a ValueError.

Assumes that all shards have the same size.
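Because all shards have the same size, the mapping is simple integer division with a bounds check. A sketch (illustrative helper, not the actual method, which reads the shard size and document count from instance attributes):

```python
def shard_by_offset(offset, shardsize, n_docs):
    """Map a document offset to its shard index, assuming all shards
    hold shardsize documents. Out-of-range offsets raise ValueError."""
    if offset >= n_docs:
        raise ValueError('Offset %d out of range: corpus has %d documents.'
                         % (offset, n_docs))
    return offset // shardsize

# With the default shardsize of 4096, document 4100 lives in shard 1.
shard_n = shard_by_offset(4100, shardsize=4096, n_docs=10000)
```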