I have a problem using weights inside a TimeSeriesDataSet (pytorch-forecasting): training fails with ValueError: tensor.ndim=2 > like.ndim=0.
Any help?
I have a dataset where each item has 66 time steps. Some items have NaN values in the first time steps, meaning the item was not on sale yet.
I can't simply replace the NaN with 0, because zero represents that the item was on sale but had zero turnover.
So I changed the NaN values to 0 and added a 0/1 weight column (1 = the item existed, 0 = it did not exist yet).
I also can't delete the NaN time steps, because I need to train on the whole data (if I delete them, the maximum encoder length is limited by the item with the shortest history).
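For reference, the NaN handling looks roughly like this (a minimal sketch; the exact fill logic is an assumption, the column names match the dataset definition below):

import numpy as np

# Mark the time steps where the item did not exist yet (assumption: these
# are exactly the NaN values in the target column).
data["weight"] = np.where(data["ZMnožství"].isna(), 0.0, 1.0)
# Fill the "not on sale yet" steps with 0, the same value as "on sale, zero turnover".
data["ZMnožství"] = data["ZMnožství"].fillna(0.0)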
I create the TimeSeriesDataSet like this:
training = TimeSeriesDataSet(
    data[lambda x: x.time_idx <= training_cutoff],
    time_idx="time_idx",
    target="ZMnožství",
    group_ids=["ZKR1"],
    max_encoder_length=max_encoder_length,
    max_prediction_length=max_prediction_length,
    static_categoricals=["sezona MO", "ZKR1", "DRUH"],
    time_varying_known_reals=["month_sin", "month_cos"],
    time_varying_unknown_reals=["ZMnožství"],
    categorical_encoders={
        "ZKR1": NaNLabelEncoder(add_nan=True).fit(data["ZKR1"]),
        "sezona MO": NaNLabelEncoder(add_nan=True).fit(data["sezona MO"]),
        "DRUH": NaNLabelEncoder(add_nan=True).fit(data["DRUH"]),
    },
    weight="weight",
)
The weight column contains only the values 0/1 (exists / does not exist).
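The train_dataloader and val_dataloader used below are built the standard pytorch-forecasting way (a sketch; batch_size=64 and num_workers=0 are assumptions):

# Validation set derived from the training set definition (standard pattern).
validation = TimeSeriesDataSet.from_dataset(
    training, data, predict=True, stop_randomization=True
)
train_dataloader = training.to_dataloader(train=True, batch_size=64, num_workers=0)
val_dataloader = validation.to_dataloader(train=False, batch_size=64, num_workers=0)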
During training with the setup below I get the error:
early_stop_callback = EarlyStopping(
    monitor="val_loss", min_delta=1e-4, patience=10, verbose=False, mode="min"
)
trainer = pl.Trainer(
    max_epochs=30,
    accelerator="cpu",
    enable_model_summary=True,
    gradient_clip_val=0.1,
    callbacks=[early_stop_callback],
    limit_train_batches=50,
    enable_checkpointing=True,
)

net = DeepAR.from_dataset(
    training,
    learning_rate=1e-2,
    log_interval=10,
    log_val_interval=1,
    hidden_size=30,
    rnn_layers=2,
    optimizer="Adam",
    loss=MultivariateNormalDistributionLoss(rank=30),
)

trainer.fit(
    net,
    train_dataloaders=train_dataloader,
    val_dataloaders=val_dataloader,
)
ValueError                                Traceback (most recent call last)
<ipython-input-272-c53c97c84889> in <cell line: 24>()
     22 )
     23
---> 24 trainer.fit(
     25     net,
     26     train_dataloaders=train_dataloader,

19 frames
/usr/local/lib/python3.10/dist-packages/pytorch_forecasting/utils/_utils.py in unsqueeze_like(tensor, like)
    319     n_unsqueezes = like.ndim - tensor.ndim
    320     if n_unsqueezes < 0:
--> 321         raise ValueError(f"tensor.ndim={tensor.ndim} > like.ndim={like.ndim}")
    322     elif n_unsqueezes == 0:
    323         return tensor

ValueError: tensor.ndim=2 > like.ndim=0
I tried to find a better solution than using weights, but without success.
So any idea how to fix the weight error, or a better tip on how to solve the problem in general, would be perfect.
I don't have to use the DeepAR model, but considering my hardware and the available options, it looks like the best choice.