I am interested in developing an LSTM model to make predictions about satellite imagery. I have a workflow in Google Earth Engine that generates TFRecord files in Cloud Storage. Each example is a series of Int64List or FloatList features of the same shape (21 items, corresponding to 21 years of data). Every pixel has some number of missing values where the target variable was not surveyed in that year. I know how to serve TFRecord files to a model with TFRecordDataset, and how to do some amount of preprocessing with .filter and .map.
I would like to see how the model performs when I train it on different window sizes. I can find examples that have sufficient data for a given window size and subset their features to continuous runs accordingly. This filtering also means that one example can be split into several: if a pixel has 6 years of continuous data, I can generate 4 examples with a window size of 3 years, or 2 examples with a window size of 5 years. Given the size of my dataset, I would prefer to do this preprocessing when the data are loaded for model training rather than saving windowed copies of the whole dataset.
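For a run of n continuous years and a window size w, the counts above work out to n - w + 1 windows. A quick sketch of that arithmetic (the helper name is mine, not part of the workflow):

```python
def num_windows(n_years: int, window_size: int) -> int:
    """Count the sliding windows of size `window_size` in a run of
    `n_years` continuous years (zero if the run is too short)."""
    return max(n_years - window_size + 1, 0)

print(num_windows(6, 3))  # 4 examples with a window size of 3 years
print(num_windows(6, 5))  # 2 examples with a window size of 5 years
```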
My current approach is to make a TFRecordDataset from the unprocessed TFRecord files, write a generator that does the preprocessing for a given window size, and pass that generator to tf.data.Dataset.from_generator to create the dataset the model interacts with. This works as below:
class TimeSeriesGenerator:
    # Omitting some details for brevity
    def __init__(self, list_of_tfrecord_files):
        self._ds = tf.data.TFRecordDataset(list_of_tfrecord_files)

    def _generate_examples(self, example, window_size):
        # Missing values are marked with zeros in the "year" feature
        windows = np.lib.stride_tricks.sliding_window_view(example["year"], window_size)
        start_idxs = np.where(np.sum(windows > 0, axis=1) == window_size)[0]
        # Shuffle so sequences are not yielded in chronological order
        np.random.shuffle(start_idxs)
        for idx in start_idxs:
            yield {
                key: example[key][idx:idx + window_size] for key in example
            }

    def generate_window(self, window_size):
        # Return a generator that yields examples of the given window size
        # TODO: consider holding several Examples in memory so that one batch
        # comes from many pixels.
        def gen():
            for example in self._ds:
                for ex_windowed in self._generate_examples(example, window_size):
                    yield ex_windowed
        return gen
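To make the window-validity check in _generate_examples concrete, here is a minimal numpy sketch with a toy "year" array (the values are illustrative, not from my data):

```python
import numpy as np

# Toy "year" feature: a zero marks a year where the target was not surveyed.
years = np.array([2009, 2010, 2011, 0, 2013, 2014, 2015], dtype=np.int64)
window_size = 3

# A start index is valid only if every year in its window is non-zero,
# i.e. the window covers a continuous run of surveyed years.
windows = np.lib.stride_tricks.sliding_window_view(years, window_size)
start_idxs = np.where(np.sum(windows > 0, axis=1) == window_size)[0]
print(start_idxs)  # [0 4] -> windows [2009..2011] and [2013..2015]
```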
Then I can just do regular batching, like

window_size = 4
windowed_ds = tf.data.Dataset.from_generator(
    generator=TimeSeriesGenerator(list_of_tfrecord_files).generate_window(window_size),
    # dict of TensorSpecs with the new shape
    output_signature=windowed_spec,
).batch(8)
And verify that it works as expected:
> next(iter(windowed_ds))["year"]
<tf.Tensor: shape=(8, 4), dtype=int64, numpy=
array([[2010, 2011, 2012, 2013],
[2013, 2014, 2015, 2016],
[2016, 2017, 2018, 2019],
[2015, 2016, 2017, 2018],
[2012, 2013, 2014, 2015],
[2011, 2012, 2013, 2014],
[2014, 2015, 2016, 2017],
[2009, 2010, 2011, 2012]], dtype=int64)>
This feels hacky, and the documentation for from_generator notes some important disadvantages, such as preventing model serialization. It also takes about 200 ms of CPU time to generate the batch above, which seems slow.
Is there another way to preprocess Examples from a TFRecord as above?