Original article was published by /u/kk_ai on Deep Learning
In ML applications it can be difficult to manage a particular kind of change in the data, known as concept drift, covariate shift, or data drift.
These shifts can be detrimental to your model's performance in production, and most drift-handling methods are highly specific to the nature of the problem.
So, how do you prevent concept drift?
There are various strategies, including:
- Online learning
- Model re-training
- Re-sampling using instance selection
- Ensemble learning with model weighting
- Feature dropping
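Most of the strategies above first require detecting that drift has occurred. As a minimal sketch (not from the original article), here is one common approach: compare the distribution of each feature in a recent window against a reference window using a two-sample Kolmogorov–Smirnov test, and flag features whose distributions have shifted. The function name and thresholds are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference, current, alpha=0.05):
    """Flag features whose distribution in `current` differs from
    `reference`, using a per-feature two-sample KS test.

    reference, current: 2-D arrays of shape (n_samples, n_features).
    Returns a list of drifted feature indices.
    """
    drifted = []
    for i in range(reference.shape[1]):
        _, p_value = ks_2samp(reference[:, i], current[:, i])
        if p_value < alpha:  # distributions differ significantly
            drifted.append(i)
    return drifted

# Synthetic demo: inject a mean shift into feature 1 only.
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=(500, 3))
cur = rng.normal(0.0, 1.0, size=(500, 3))
cur[:, 1] += 2.0  # simulated covariate shift

print(detect_drift(ref, cur))  # feature 1 should be among the flagged indices
```

A detector like this can then trigger one of the responses listed above, e.g. scheduled re-training on the recent window, or dropping the drifted feature. Note that per-feature tests only catch marginal (covariate) shift; a change in the relationship between features and labels can slip through unnoticed.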
However, my question is: have you found any methods that you personally use that might not be "conventional" but that work?
We dive deep into the topic in this article.