H5py: Compound Datatypes And Scale-offset In The Compression Pipeline
Using NumPy and h5py, it is possible to create ‘compound datatype’ datasets to be stored in an HDF5 file:

import h5py
import numpy as np
#
# Create a new file using default pro…
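Since the question's snippet is cut off, here is a minimal sketch of the kind of compound-datatype dataset it describes; the field names, shapes, and file name are assumptions, not taken from the original question.

```python
import h5py
import numpy as np

# Hypothetical compound (structured) dtype; the actual fields in the
# question are not visible in the truncated excerpt.
comp_dtype = np.dtype([("time", np.float64),
                       ("value", np.int32),
                       ("flag", np.uint8)])

data = np.zeros(100, dtype=comp_dtype)
data["time"] = np.linspace(0.0, 1.0, 100)
data["value"] = np.arange(100)

# Create a new file using default properties and store the compound dataset.
with h5py.File("compound_example.h5", "w") as f:
    dset = f.create_dataset("table", data=data, chunks=True,
                            compression="gzip")
    print(dset.dtype)  # the compound dtype as h5py reports it back
```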
Solution 1:
Besides the h5py docs, look at the HDF5 docs; they go into more detail. If the underlying HDF5 file format does not support this, then the numpy-level interface won't either.
https://support.hdfgroup.org/HDF5/doc/UG/OldHtmlSource/10_Datasets.html#ScaleOffset
Elsewhere the docs say that filters are applied to whole chunks.
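As a quick illustration of that point, requesting any filter in h5py implies chunked storage, and the auto-chosen chunk shape can be inspected afterwards (the file and dataset names below are arbitrary):

```python
import h5py
import numpy as np

with h5py.File("chunked_filter_example.h5", "w") as f:
    # Requesting a filter (here gzip) forces chunked storage;
    # the filter is then applied chunk by chunk.
    dset = f.create_dataset("x", data=np.arange(10000, dtype=np.int32),
                            compression="gzip")
    print(dset.chunks)       # auto-chosen chunk shape
    print(dset.compression)  # 'gzip'
```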
The expression defining the compound type is pure numpy; h5py must be translating that dtype descriptor into an equivalent HDF5 C-struct description. The HDF5 docs include sample C and Fortran compound type definitions.
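That translation can be observed through h5py's low-level h5t interface; a small sketch with an example dtype (the field names are just placeholders):

```python
import h5py
import numpy as np

comp_dtype = np.dtype([("time", np.float64), ("value", np.int32)])

# py_create maps a numpy dtype onto the corresponding HDF5 datatype object;
# for a structured dtype this is an HDF5 compound (struct-like) type.
htype = h5py.h5t.py_create(comp_dtype)
print(htype.get_class() == h5py.h5t.COMPOUND)  # True
print(htype.get_nmembers())                    # 2
print(htype.get_member_name(0))                # b'time'
```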
All the docs say that the scale-offset filter applies only to integer and float types. That can be read as excluding string, vlen, and compound types. What you are hoping is that it would still work on the numeric fields inside a compound type; I don't think it does.
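One way to confirm this is simply to try it: scale-offset on a plain integer dataset is accepted, while requesting it for a compound dtype should be rejected. This is a sketch under that assumption; the exact exception type and message depend on the h5py and HDF5 versions.

```python
import h5py
import numpy as np

comp_dtype = np.dtype([("time", np.float64), ("value", np.int32)])

with h5py.File("scaleoffset_test.h5", "w") as f:
    # Plain integer data: scaleoffset=0 asks HDF5 to pick the minimum
    # number of bits, i.e. lossless scale-offset packing.
    f.create_dataset("plain", data=np.arange(1000, dtype=np.int32),
                     chunks=True, scaleoffset=0)

    # Compound data: expected to fail, since the filter is documented
    # for integer and floating-point datatypes only.
    try:
        f.create_dataset("compound", data=np.zeros(1000, dtype=comp_dtype),
                         chunks=True, scaleoffset=0)
    except Exception as exc:
        print("scale-offset on a compound dtype was rejected:", exc)
```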