We present a real-time system that outputs a multi-layer augmented video from a conventional video stream. The input sequence is captured in a known environment: a panoramic image (the reference background) is provided before shooting. We use background tracking to register each input frame with this reference, and background subtraction to segment the foreground objects. Our background tracking method is a coarse-to-fine integration of three state-of-the-art algorithms that we make resistant to occlusions. In particular, we introduce an algebraic technique to properly adapt the Jurie and Dhome (JD) tracker, the most robust of the three. We report experimental results on real and synthetic data that validate our approach. We generate our augmented video sequences by compositing layers: a natural or synthetic background and several natural or synthetic foregrounds. Copyright © 2006 John Wiley & Sons, Ltd.
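The pipeline the abstract outlines (register each frame, subtract the reference background, composite layers) can be illustrated with a minimal sketch. This is not the paper's actual method: the per-pixel threshold, the function names, and the hard (non-blended) compositing are illustrative assumptions, and the frame is assumed to be already registered with the reference.

```python
import numpy as np

def segment_foreground(frame, reference, threshold=30):
    # Illustrative background subtraction: a pixel is foreground if any
    # channel differs from the registered reference by more than `threshold`.
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    return diff.max(axis=-1) > threshold  # boolean mask, shape (H, W)

def composite_layers(background, foreground, mask):
    # Hard compositing of a foreground layer over a (possibly synthetic)
    # background; a real system would likely use soft alpha mattes.
    out = background.copy()
    out[mask] = foreground[mask]
    return out

# Toy data: a flat grey 4x4 reference and a frame with a bright 2x2 object.
ref = np.full((4, 4, 3), 100, dtype=np.uint8)
frame = ref.copy()
frame[1:3, 1:3] = 200                  # the "foreground object"
mask = segment_foreground(frame, ref)
new_bg = np.zeros_like(ref)            # a synthetic replacement background
augmented = composite_layers(new_bg, frame, mask)
```

In the full system each layer (natural or synthetic background, multiple foregrounds) would be composited in depth order, with the registration step supplying the frame-to-reference alignment.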