Apologies if this question is very silly, but I cannot wrap my head around the practical differences between EMA decay and LR decay.
It feels to me like they both accomplish the same thing, just in different ways. My understanding is the following (it is likely wrong, so apologies in advance, and please correct me):
- Using EMA, one keeps a separate copy of the model during training, and every N steps updates that copy with an exponential moving average of the original model's weights (see the sketch after this list).
- Using LR decay, the original model's weights are updated with progressively smaller steps as training goes on, and only one model is effectively trained.
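To make my mental model concrete, here is roughly what I picture the two update rules to look like. This is my own pseudo-Python sketch, not code from any particular library, so all the names are my assumptions:

```python
def ema_update(ema_weights, model_weights, decay):
    # The EMA copy drifts slowly towards the live model:
    #   ema <- decay * ema + (1 - decay) * model
    return [decay * e + (1 - decay) * w
            for e, w in zip(ema_weights, model_weights)]

def sgd_update(model_weights, grads, lr):
    # With LR decay, `lr` shrinks over training, so each update
    # moves the single model by less and less.
    return [w - lr * g for w, g in zip(model_weights, grads)]
```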
Now, given a dataset with 32 samples, I can imagine this is how the two training runs would go:
Training A (no EMA)
Given the following hyperparameters:
- LR of 1e-5
- Batch size of 4
- Linear scheduler
After 4 steps, the model will have seen 16 samples, and the LR will have decayed to half its initial value (assuming the schedule goes linearly to zero over the 8 steps of the epoch).
Effectively, the model has been updated 4 times.
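In code, I picture training A's schedule roughly like this (the decay-to-zero endpoint over one epoch is my assumption):

```python
initial_lr = 1e-5
total_steps = 32 // 4  # 32 samples / batch size 4 = 8 steps per epoch

for step in range(total_steps):
    lr = initial_lr * (1 - step / total_steps)  # linear decay to zero
    # ... one optimizer update with this lr ...

# At step 4 (halfway through the epoch) the LR is 0.5e-5,
# i.e. half its initial value.
```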
Training B (EMA)
Given the following hyperparameters:
- LR of 1e-5
- Batch size of 1
- Constant scheduler
- EMA decay of 0.9999
- EMA update interval of 4 steps
After 16 steps, the model will have seen 16 samples, and the EMA decay will have warmed up to half its final value of 0.9999.
Effectively, the original model has been updated 16 times, and the EMA model has been updated 4 times.
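And here is the parallel sketch for training B. The linear warmup of the decay is my own assumption, chosen for symmetry with training A; real EMA implementations may warm the decay up differently (or keep it constant):

```python
final_decay = 0.9999
ema_updates_per_epoch = 32 // 4  # batch size 1, EMA update every 4 steps

for step in range(16):                 # 16 training steps over 16 samples
    # ... the original model is updated here, on every step ...
    if (step + 1) % 4 == 0:            # the EMA copy is updated every 4 steps
        n_ema_updates = (step + 1) // 4
        decay = final_decay * n_ema_updates / ema_updates_per_epoch

# After 16 training steps the EMA copy has been updated 4 times,
# and the decay has warmed up to ~0.5 * 0.9999.
```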
Question
Ultimately, both runs end with a model that has been updated 4 times (training A's model, and training B's EMA copy), and the only difference I can see is that training A updated the weights directly, whereas training B updated both the original model and the EMA copy.
Why would one decide to go with training B over training A?