I am writing mass spec data reduction software. The first things this software does are:
- load the raw data from a file, OR read all raw data files, store the data in a custom class `DataEntry`, and write it to a file
- check for new raw data files and add those, if any (a sketch of this check follows the list)
- separate out the raw data into ‘sequences’ and offer the user a list of sequences to select from
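For concreteness, the "check for new raw data files" step boils down to a set difference between what's on disk and what was already loaded. This is just a sketch with placeholder names, assuming one file per analysis whose filename stem encodes the helium number:

```python
from pathlib import Path

def find_new_files(data_dir, all_data, pattern='*.txt'):
    """Sketch: list raw data files on disk that aren't represented in all_data yet.
    Assumes the file stem maps to DataEntry.helium_number (placeholder assumption)."""
    on_disk = {p.stem: p for p in Path(data_dir).glob(pattern)}
    loaded = {str(e.helium_number) for e in all_data}
    return [path for stem, path in sorted(on_disk.items()) if stem not in loaded]
```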
When the user only has a few thousand raw data files, this process is pretty efficient with whatever storage format I use. However, when that list grows to over 20,000, operations like loading the data or comparing the current files in the data folder against the ones loaded previously can take quite a long time. For example, when using `pickle` to store this data, loading it and generating the list of sequences takes about nine seconds. If I use `joblib`, my ADHD has kicked in and my focus has shifted to something else in the code by the time it loads.
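(For reference, the nine-second figure is just a wall-clock measurement along these lines; the filename is a placeholder:)

```python
import pickle
import time

t0 = time.perf_counter()
with open('all_data.pkl', 'rb') as f:   # placeholder filename
    all_data = pickle.load(f)
print(f'loaded {len(all_data)} entries in {time.perf_counter() - t0:.1f} s')
```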
The class itself is straightforward:
```python
import re

"""
DataEntry class
Responsible for holding all raw data and calculations for a single analysis
When combined, this forms the all_data and filtered_data lists passed through the program
"""

class DataEntry:
    def __init__(self, helium_number, analysis_label, timestamp, time_sec, PAB_data, raw_data, data_status):
        self.helium_number = helium_number    # unique analysis identifier
        self.analysis_label = analysis_label  # user-input analysis label
        self.timestamp = timestamp            # date and time analysis was initiated
        self.PAB_data = PAB_data              # pre-analysis baseline data
        self.PAB_mean = {}                    # pre-analysis baseline mean (calc'd in he_stats.py)
        self.raw_data = raw_data              # raw mass spec intensity data
        self.time_sec = time_sec              # timestamps associated with raw_data intensity
        self.analysis_type = self.determine_analysis_type(analysis_label)  # analysis type
        self.sequence_number = None           # determined by identify_sequences()
        self.averages = {}                    # raw data average - calc'd in he_stats.py
        self.t0_intercept = {}                # raw data t0 intercept - calc'd in he_stats.py
        self.t0_uncertainty = {}              # raw data t0 intercept uncertainty - calc'd in he_stats.py
        self.t0_pctuncert = {}                # raw data t0 intercept percent uncertainty - calc'd in he_stats.py
        self.t0_slope = {}                    # raw data t0 intercept slope - calc'd in he_stats.py
        self.ratio43_t0int = {}               # mass 4 t0 to mass 3 t0 ratio - calc'd in he_stats.py
        self.ratio43_uncert = {}              # mass 4 t0 to mass 3 t0 ratio uncertainty - calc'd in he_stats.py
        self.ratio43_puncert = {}             # mass 4 t0 to mass 3 t0 ratio percent uncertainty - calc'd in he_stats.py
        self.ratio43_slope = {}               # mass 4 to mass 3 ratio slope for some fuckin reason
        self.ncc = {}                         # ncc volume calculation, calc'd in pro_calcs.py
        self.ncc_uncert = {}                  # ncc volume calculation uncertainty, calc'd in pro_calcs.py
        self.active = True                    # set to False to deactivate an analysis (e.g. to remove a lineblank)
        self.sample_name = None               # sample name extracted from analysis_label
        self.extract_num = None               # laser extraction number
        self.reduced = False                  # whether or not the analysis data has been reduced
        self.data_status = data_status        # True = active, False = excluded as outlier

    def determine_analysis_type(self, analysis_label):
        q_pattern = re.compile(r'Q\d+_')    # pattern for Qs
        d_pattern = re.compile(r'DT\d+_')   # pattern for DTs
        if q_pattern.match(analysis_label):
            return 'Q'
        elif d_pattern.match(analysis_label):
            return 'D'
        elif analysis_label.startswith('LB'):
            return 'lineblank'
        elif analysis_label.startswith(('CB', 'zCB', 'aCB')):
            return 'coldblank'
        elif analysis_label.startswith(('HB', 'zHB', 'aHB', 'Pt-HB',
                                        'Nb-HB', 'Nb-blank', 'Pt-blank')):
            return 'hotblank'
        else:
            return 'unknown'
```
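For example, the label from the raw data file below classifies as a Q (the other constructor arguments here are just dummies):

```python
entry = DataEntry(47, 'Q19_[B] TP only', '2017-04-07T04:28:52',
                  time_sec=[], PAB_data={}, raw_data={}, data_status=True)
print(entry.analysis_type)  # -> 'Q' (matches the Q\d+_ pattern)
```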
Here’s what a raw data file looks like:
```
He 47 Q19_[B] TP only 2017-04-07T04:28:52
time_sec 2.03 3.02 4.02 5.04 40.02
3.315 1.448681E-9 5.281470E-12 2.001811E-13 4.071695E-15 1.286626E-13
9.415 1.476452E-9 5.008861E-12 2.183648E-13 6.737729E-15 1.208360E-13
2.215 1.658686E-9 5.278870E-10 8.899774E-10 1.796788E-13 9.415092E-13
8.315 1.660783E-9 5.261807E-10 8.874056E-10 1.635680E-13 9.941707E-13
14.415 1.664095E-9 5.247728E-10 8.853450E-10 1.641234E-13 1.066927E-12
20.515 1.665474E-9 5.229162E-10 8.833687E-10 1.662138E-13 1.123520E-12
26.615 1.667263E-9 5.219367E-10 8.811534E-10 1.593481E-13 1.213396E-12
32.615 1.665697E-9 5.200574E-10 8.795005E-10 1.578915E-13 1.249829E-12
38.815 1.667614E-9 5.195811E-10 8.785309E-10 1.578315E-13 1.327653E-12
44.915 1.672711E-9 5.181769E-10 8.769504E-10 1.473274E-13 1.426288E-12
50.915 1.669182E-9 5.179599E-10 8.756890E-10 1.481697E-13 1.516924E-12
57.015 1.671832E-9 5.170346E-10 8.745485E-10 1.340317E-13 1.555628E-12
63.115 1.670865E-9 5.160504E-10 8.733799E-10 1.443163E-13 1.611698E-12
69.215 1.673459E-9 5.150870E-10 8.720002E-10 1.354246E-13 1.718194E-12
He 47 Q19_[B] TP only 2017-04-07T04:38:13
```
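For reference, a file with this layout can be parsed roughly like so. This is a simplified sketch, not my exact loader: I keep the column masses as strings, skip the PAB/raw split, and omit error handling:

```python
def parse_raw_file(path):
    """Sketch: parse one raw data file into the pieces DataEntry wants.
    Assumes the layout shown above; the last line is the closing header."""
    with open(path) as f:
        lines = [ln.split() for ln in f if ln.strip()]

    # header: 'He', helium number, analysis label..., ISO timestamp
    header = lines[0]
    helium_number = int(header[1])
    timestamp = header[-1]
    analysis_label = ' '.join(header[2:-1])

    masses = lines[1][1:]        # column labels after 'time_sec'
    data_rows = lines[2:-1]      # drop the closing header line

    time_sec = [float(row[0]) for row in data_rows]
    raw_data = {m: [float(row[i + 1]) for row in data_rows]
                for i, m in enumerate(masses)}
    return helium_number, analysis_label, timestamp, time_sec, raw_data
```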
Each analysis contains a relatively small amount of data, and at this point the other variables defined in the class, like `t0_intercept`, are empty.
So far, all I’ve tried successfully is `pickle` and `joblib`, because the other formats don’t want to play nicely with my custom class.
At this point, I’ve determined that I need to re-pack the data into another format like HDF5, but I can’t decide:

- which format would result in the shortest load times
- how to mutate my data into a form that is compatible with that format
- how to un-mutate the data back into its original form so that the rest of my program still understands it (the sketch after this list shows the kind of round trip I mean)
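To make the last two points concrete, the round trip I have in mind is something like this. The function names and flat layout are placeholders, not a finished design: flatten each `DataEntry` into plain builtins that array-oriented stores like HDF5 accept, then rebuild the objects on load (derived fields like `t0_intercept` are recomputed later anyway, so only the constructor inputs need to survive the trip):

```python
def entry_to_dict(entry):
    """Flatten a DataEntry into builtins a generic store can hold."""
    return {
        'helium_number': entry.helium_number,
        'analysis_label': entry.analysis_label,
        'timestamp': entry.timestamp,
        'time_sec': entry.time_sec,
        'PAB_data': entry.PAB_data,
        'raw_data': entry.raw_data,
        'data_status': entry.data_status,
    }

def entry_from_dict(d):
    """Rebuild a DataEntry from the flat dict; derived fields start empty."""
    return DataEntry(d['helium_number'], d['analysis_label'], d['timestamp'],
                     d['time_sec'], d['PAB_data'], d['raw_data'], d['data_status'])
```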
I’d really appreciate some feedback from someone more experienced in these things!