Abstract: Background subtraction is the most common technique for segmenting moving objects in a video sequence. We propose a data-driven background subtraction method that maintains two models for each pixel: a long term model and a short term model. The long term model captures the gradual evolution of the background, while the short term model adapts quickly to rapidly changing conditions such as swaying tree leaves or camera jitter. Each model comprises a collection of previously observed pixel values. Two segmentation maps are generated based on whether the current pixel value finds the required number of matches among the samples of the corresponding model, and the final foreground mask is obtained as the intersection of the two maps. Evaluation on the public CDnet dataset shows improved performance compared to popular methods.
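The per-pixel decision rule described above (sample collections, match counting, intersection of the two maps) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the matching radius, the required match count, and the function names are assumptions chosen for clarity.

```python
import numpy as np

def classify_pixel(value, long_samples, short_samples,
                   radius=20, min_matches=2):
    """Classify one pixel as foreground (True) or background (False).

    Each model is a collection of previously observed values for this
    pixel. A value is considered background under a model if it matches
    at least `min_matches` samples within `radius`. The final foreground
    mask is the intersection of the two per-model foreground decisions.
    `radius` and `min_matches` are illustrative assumed parameters.
    """
    def matches_background(samples):
        # Count samples within the matching radius of the current value.
        n_matches = np.sum(np.abs(samples - value) <= radius)
        return n_matches >= min_matches

    fg_long = not matches_background(long_samples)    # long term map
    fg_short = not matches_background(short_samples)  # short term map
    return fg_long and fg_short  # intersection of the two maps
```

In a full system this rule would be applied to every pixel of each frame, with the two sample collections updated at different rates so that the short term model tracks fast scene changes while the long term model preserves the stable background.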