I have a question regarding Machine Learning in general. Consider the following scenario:
Given a piece of text, we want a program to know whether the text is ‘abusive’ or not.
To do this we can give the program 1000 text samples and manually mark which ones are positive and which ones are negative. The program studies these and records which words/patterns are common in abusive texts. We then give it another 1000 un-marked texts, and it manages to identify 95% of these correctly using the patterns it learnt from the original 1000.
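(For concreteness, here is a minimal sketch of that supervised step. This is an illustration, not the asker’s actual system: a tiny add-one-smoothed Naive Bayes over word counts, with equal class priors assumed.)

```python
import math
from collections import Counter

def train(samples):
    """samples: list of (text, is_abusive) pairs.
    Returns per-class word counts and per-class total word counts."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, label in samples:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def p_abusive(text, counts, totals):
    """Naive-Bayes estimate of P(abusive | text), with add-one smoothing
    and equal priors assumed for simplicity."""
    vocab = set(counts[True]) | set(counts[False])
    log_odds = 0.0
    for word in text.lower().split():
        p_w_abuse = (counts[True][word] + 1) / (totals[True] + len(vocab))
        p_w_clean = (counts[False][word] + 1) / (totals[False] + len(vocab))
        log_odds += math.log(p_w_abuse / p_w_clean)
    return 1 / (1 + math.exp(-log_odds))
```

Trained on the 1000 hand-labeled samples, `p_abusive` can then score the second, unseen batch; comparing its verdicts against manual labels gives the 95% accuracy figure described above.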
That’s all good, but what happens once the software ‘goes live’? That is, we leave it to pull another 1000 texts every day and determine on its own whether they are abusive. One might think it would be a good idea for it to keep recording new words/patterns so it ‘learns’ more each day. But the problem is that we don’t know for sure whether the program is classifying each text correctly. So if it marks a clean text as abusive, it will falsely record that text’s words/patterns as abusive. This feedback loop will push the program’s model further and further off-track.
What is the general approach to the above problem?
Usually, such filters are programmed to output not only a yes/no value for each sample, but rather a probability: a sample may, for example, be reported as 95% likely to be abusive. Then you set two thresholds, to divide your results in three groups: very unlikely to be abusive, very likely to be abusive, and uncertain.
For example, you might consider probabilities between 20% and 80% uncertain, and those might contain, say, 10% of your samples. You take those samples, have someone classify them manually, and feed the results back to the system as training data (this is exactly what the typical Bayesian e-mail spam filters do when you manually mark something as spam or ham).
Over time, this will increase the accuracy of the predictions, so the number of samples in the “uncertain” category will decrease. If you like, you can now make the thresholds stricter, say, 5% and 95%.
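The three-way split described above can be sketched like this (the thresholds match the 20%/80% example; the function name and structure are illustrative, not from any specific library):

```python
LOW, HIGH = 0.20, 0.80  # tighten these (e.g. 0.05/0.95) as accuracy improves

def triage(scored):
    """scored: list of (text, probability_of_abusive) pairs.
    Returns three lists: confidently clean, uncertain, confidently abusive."""
    clean, uncertain, abusive = [], [], []
    for text, p in scored:
        if p < LOW:
            clean.append(text)
        elif p > HIGH:
            abusive.append(text)
        else:
            # A human labels these; the verified labels are fed back
            # into the training data, as with marking spam/ham manually.
            uncertain.append(text)
    return clean, uncertain, abusive
```

Only the `uncertain` bucket needs manual review, so the daily labeling cost stays bounded while the model keeps learning from trustworthy labels rather than its own possibly wrong guesses.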