I work with very large binary data files of unsigned int16 values, which I access as read-only `numpy.memmap` objects. A typical object is a 2D array with many thousands of rows and perhaps 1000 columns. Because there can be hundreds of these to study in parallel, we do not want to read the files fully into memory. The numpy memmap is a very handy solution, letting us read large or small chunks into core for long or short times without having to do the bookkeeping ourselves.
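For anyone unfamiliar with the pattern, this is roughly what we do today (the filename, shape, and values here are made up for illustration):

```python
import numpy as np

# Write a small demo file of uint16 values, standing in for a real run.
np.arange(12, dtype=np.uint16).tofile("demo_run.bin")

# Open it read-only as a 2D memmap; nothing is read until it is sliced.
mm = np.memmap("demo_run.bin", mode="r", dtype=np.uint16, shape=(3, 4))
chunk = np.array(mm[1:3, :])  # copy only rows 1-2 into RAM
print(chunk)  # [[ 4  5  6  7]
              #  [ 8  9 10 11]]
```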
These data come from a physics experiment, and unfortunately some experiments are wired up backwards (this is an oversimplification; trust me that "rewiring" is not possible). The result is that in some files the data appear as the bitwise complement of what we want: every 0 value on disk really signifies 0xffff, 1 means 0xfffe, and so on.
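For concreteness, NumPy's bitwise NOT on uint16 performs exactly the correction we need:

```python
import numpy as np

disk = np.array([0, 1, 0xFFFE, 0xFFFF], dtype=np.uint16)
print(~disk)  # [65535 65534     1     0]
```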
Is there a model for how to wrap, subclass, or otherwise adapt a `np.memmap` such that `~` (or equivalently, `np.invert`) is applied every time the memmap data is accessed? I don't want to change every piece of code that reads the memmap to ask "is this raw data set inverted?" I want the object to just know that it's supposed to be inverted and do the right thing.
```python
import numpy as np

true_values = np.arange(20, dtype=np.uint16)
disk_values = ~true_values  # bitwise complement, still uint16
disk_values.tofile("sad_inverted_values.bin")

raw_map = np.memmap("sad_inverted_values.bin", mode="r", dtype=np.uint16)
corrected = InvertedData(raw_map)  # <-- how do I implement this?

# I want InvertedData to work so the following 4 lines...
# (other_vector and the 2D slicing are schematic)
a = 3*corrected
b = corrected - other_vector
c = corrected.mean()
d = corrected[:, 0:200].mean(axis=1)

# ...produce identical results to
a = 3*(~raw_map)
b = (~raw_map) - other_vector
c = (~raw_map).mean()
d = (~raw_map)[:, 0:200].mean(axis=1)

# or
a = 3*true_values
b = true_values - other_vector
c = true_values.mean()
d = true_values[:, 0:200].mean(axis=1)
```
I see that numpy's array interoperability rules are complex, and subclassing can be, too. No surprise! The `ndarray` and `memmap` are complex, highly capable objects.
The key constraint is this: I can afford to read all values into memory at once from one of these objects (generally < 1 GB each), but I cannot afford to read all values from all such objects and hold them forever; there are simply too many open objects of this sort.
I don't know what to try! Should I make an object that subclasses `ndarray`, accepting and storing an open `memmap` internally, which then overrides...what? Indexing? Ufuncs? What about broadcasting?
Am I asking for a simple solution to a problem that is much too large to support simple solutions?
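For what it's worth, the furthest I've gotten is a plain wrapper (not an `ndarray` subclass; `InvertedData` is just my placeholder name). It shows both how far `__getitem__` and `__array__` get you, and where they fall short:

```python
import numpy as np

class InvertedData:
    """Hypothetical wrapper: inverts values on each read."""

    def __init__(self, raw):
        self._raw = raw  # e.g. an open read-only np.memmap

    def __getitem__(self, key):
        # Read just the requested chunk, then invert it in memory.
        return ~np.asarray(self._raw[key])

    def __array__(self, dtype=None, copy=None):
        # Lets np.asarray(corrected) and functions like np.mean()
        # work by materializing the whole inverted array --
        # affordable per the < 1 GB constraint, but not lazy.
        arr = ~np.asarray(self._raw)
        return arr.astype(dtype) if dtype is not None else arr
```

Indexing works, and whole-array calls like `np.mean(corrected)` work through `__array__`, but operator syntax (`3*corrected`) and method calls (`corrected.mean()`) do not, which is why I suspect subclassing or `__array_ufunc__` is unavoidable.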
This question is related, and the solution is suggestive, but “almost certainly will fail in novel and unexpected ways”. So that’s not ideal.