
Why Does Pickle Eat Memory?

I'm trying to write a huge amount of pickled data to disk in small pieces. Here is the example code: from cPickle import * from gc import collect PATH = r'd:\test.dat' @pr

Solution 1:

Pickle consumes a lot of RAM; see the explanation here: http://www.shocksolution.com/2010/01/storing-large-numpy-arrays-on-disk-python-pickle-vs-hdf5/

Why does Pickle consume so much more memory? The reason is that HDF is a binary data pipe, while Pickle is an object serialization protocol. Pickle actually consists of a simple virtual machine (VM) that translates an object into a series of opcodes and writes them to disk. To unpickle something, the VM reads and interprets the opcodes and reconstructs an object. The downside of this approach is that the VM has to construct a complete copy of the object in memory before it writes it to disk.
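You can see this opcode stream directly with the standard library's pickletools module. This short sketch (hypothetical sample data) disassembles a pickled dict to show the instructions the unpickling VM will interpret, and round-trips it to show the VM rebuilding a full in-memory copy:

```python
import pickle
import pickletools

data = {"a": [1, 2, 3]}
payload = pickle.dumps(data)

# Print the opcodes the unpickling VM interprets, e.g. EMPTY_DICT,
# SHORT_BINUNICODE, EMPTY_LIST, APPENDS, ... followed by STOP.
pickletools.dis(payload)

# Unpickling runs those opcodes and reconstructs a complete copy
# of the object in memory.
restored = pickle.loads(payload)
assert restored == data
```

(The original question used Python 2's cPickle; in Python 3 the C implementation is used automatically by the plain pickle module.)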

Pickle is great for small use cases or testing, because in most such cases the memory consumption doesn't matter much.

For intensive work, where you have to dump and load many files and/or big files, you should consider another way to store your data (e.g. HDF5, or writing your own serialize/deserialize methods for your objects, ...).
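One lightweight middle ground, if you want to keep using pickle, is to stream records to disk one at a time instead of pickling one huge container, so only a single record has to be materialized at once. A minimal sketch (the function names and the idea of per-record pickles are illustrative, not from the original question):

```python
import pickle


def dump_stream(items, path):
    """Append each item as its own small pickle, one after another."""
    with open(path, "wb") as f:
        for item in items:
            pickle.dump(item, f)  # only this one item is serialized at a time


def load_stream(path):
    """Lazily yield the records back, one pickle.load() per record."""
    with open(path, "rb") as f:
        while True:
            try:
                yield pickle.load(f)
            except EOFError:  # raised at end of file
                break
```

This avoids the "complete copy in memory" cost for the container, though each individual record still pays it; for large numeric arrays, a binary format like HDF5 remains the better fit.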
