I was playing around with AWS SageMaker: I trained a model on some labeled data, deployed it to an endpoint, and set up a Lambda function to serve predictions.
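For context, the serving path looks roughly like this (the endpoint name and the CSV payload format are placeholders for my actual setup):

```python
# Minimal sketch of a Lambda handler that forwards a request to a
# SageMaker endpoint. ENDPOINT_NAME and the CSV payload are assumptions;
# adjust them to match the model's expected input format.
import json
import os

import boto3

runtime = boto3.client("sagemaker-runtime")
ENDPOINT_NAME = os.environ.get("ENDPOINT_NAME", "my-model-endpoint")


def lambda_handler(event, context):
    # Expect the features as a comma-separated string in the request body.
    payload = event["body"]
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="text/csv",
        Body=payload,
    )
    prediction = response["Body"].read().decode("utf-8")
    return {"statusCode": 200, "body": json.dumps({"prediction": prediction})}
```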
All good, but I want to re-train my model regularly, using, say, the last week of historical data.
The problem is that my historical data is unlabeled, which means it cannot be used for training. How do I label it?
I initially thought I could just use my model's predictions to label the new (unlabeled) data, but I have read that this is not a good idea, because the model would simply reinforce its own predictions, even if they are far from accurate.
So where do I get the labels for my historical data?
And if the historical data cannot be labeled by the model, does that mean it has to be labeled manually? In that case, what's the point of training and serving a model at all?
As an example, let's take fraudulent transaction detection. OK, there is some initial data, labeled manually by someone who knows exactly whether each transaction was fraudulent or not, so the labels are 100% accurate.
Is that dataset then supposed to be periodically and manually extended with additional, 100%-accurate examples?
Unless you're using unsupervised learning (or regression/forecasting, where the true observations become available after the fact), you traditionally need lots of (expensive to acquire) labelled data to train a model. More recently, "few-shot" or "one-shot" learning has emerged, which (starting from a pre-trained model) can learn from a handful of labelled examples, but you still need labelled data, just not nearly as much.
So your questions "does it mean that it is supposed to be labeled manually?" .. "what's the point of training and serving a model then?" don't really make sense, because you cannot train a supervised model without at least some manually labelled data in the first place.
Secondly, there's the issue of "data drift". A model trained on data whose characteristics change over time (fraud detection in particular, because bad actors are always looking for new methods) will degrade in performance, so you need to monitor it and retrain it on new (labelled) data. Using the fraud detection example: if you notice the model is missing some new fraud technique, you need to find examples of it, label them, and retrain the model.

Also note that it's highly unlikely your original dataset gives 100% accuracy anyway: the expert may mislabel or miss some examples, so the model will always have some uncertainty, especially with something that's rare and hard to define.
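As a rough illustration of the monitoring side (the feature and the distributions below are made up, and SageMaker Model Monitor can do this kind of check for you in a managed way), a drift signal can be as simple as a two-sample test comparing a feature's training-time distribution with its recent distribution:

```python
# Sketch: flag possible drift on one numeric feature with a
# two-sample Kolmogorov-Smirnov test. Thresholds and data are illustrative.
import numpy as np
from scipy.stats import ks_2samp


def detect_drift(train_values, recent_values, alpha=0.05):
    """Return (drifted, statistic, p_value) for one numeric feature.

    A low p-value means the recent distribution differs significantly
    from the training distribution; treat it as a signal to investigate
    and possibly label fresh data and retrain, not as proof of drift.
    """
    statistic, p_value = ks_2samp(train_values, recent_values)
    return p_value < alpha, statistic, p_value


# Toy example: transaction amounts shift upward after training.
rng = np.random.default_rng(0)
train_amounts = rng.lognormal(mean=3.0, sigma=1.0, size=5000)
recent_amounts = rng.lognormal(mean=3.5, sigma=1.2, size=1000)

drifted, stat, p = detect_drift(train_amounts, recent_amounts)
print(f"drift={drifted}, KS statistic={stat:.3f}, p-value={p:.3g}")
```

In practice you would run a check like this per feature (and on the prediction distribution) over each new window of data, and use it to decide when it's worth paying for another round of labelling and retraining.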