Subspace Video Stabilization
Feng Liu1, Michael Gleicher2, Jue Wang3, Hailin Jin3 and Aseem Agarwala3
1Computer Science Department, Portland State University |
2Computer Sciences Department, University of Wisconsin-Madison |
3Adobe Systems Inc. |
Abstract
We present a robust and efficient approach to video stabilization that achieves high-quality camera motion for a wide range of videos. In this paper, we focus on the problem of transforming a set of input 2D motion trajectories so that they are both smooth and resemble visually plausible views of the imaged scene; our key insight is that we can achieve this goal by enforcing subspace constraints on feature trajectories while smoothing them. Our approach assembles tracked features in the video into a trajectory matrix, factors it into two low-rank matrices, and performs filtering or curve fitting in a low-dimensional linear space. In order to process long videos, we propose a moving factorization that is both efficient and streamable. Our experiments confirm that our approach can efficiently provide stabilization results comparable with prior 3D methods in cases where those methods succeed, but also provides smooth camera motions in cases where such approaches often fail, such as videos that lack parallax. The presented approach offers the first method that both achieves high-quality video stabilization and is practical enough for consumer applications.
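To make the pipeline in the abstract concrete, the sketch below shows a minimal, illustrative Python/NumPy version of the core idea only: factor a (complete) trajectory matrix into a low-rank product and smooth the eigen-trajectories rather than the raw tracks. This is not the paper's implementation; the actual method handles incomplete tracks with a streamable moving factorization and renders output frames with content-preserving warps, and the rank and filter width used here are placeholder values.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_subspace(traj, rank=9, sigma=16.0):
    """Smooth 2D feature trajectories inside a low-rank subspace.

    traj  : (2N, F) array; rows hold the x and y coordinates of N
            feature tracks over F frames (assumed complete here; the
            paper's moving factorization handles missing entries).
    rank  : dimension of the low-rank subspace (placeholder value).
    sigma : standard deviation, in frames, of the Gaussian filter
            applied to the eigen-trajectories (placeholder value).
    """
    # Factor the trajectory matrix into coefficients C (2N x rank)
    # and eigen-trajectories E (rank x F) via a truncated SVD.
    U, S, Vt = np.linalg.svd(traj, full_matrices=False)
    C = U[:, :rank] * S[:rank]
    E = Vt[:rank, :]

    # Filter only the low-dimensional eigen-trajectories over time.
    E_smooth = gaussian_filter1d(E, sigma=sigma, axis=1)

    # Recombine: the smoothed tracks stay in the same subspace, which
    # keeps them consistent with a plausible view of the scene.
    return C @ E_smooth

Smoothing in the low-dimensional subspace rather than per track is what keeps the smoothed trajectories geometrically consistent with one another, which is the key difference from naive per-trajectory filtering.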
Paper
Feng Liu, Michael Gleicher, Jue Wang, Hailin Jin and Aseem Agarwala. Subspace Video Stabilization. ACM Transactions on Graphics (presented at SIGGRAPH 2011). Vol. 30, Issue 1, 2011: 4:1-4:10. PDF
Related Projects
Chengzhou Tang, Oliver Wang, Feng Liu, Ping Tan. Joint Direction and Stabilization for 360° Videos. ACM Transactions on Graphics, 2019. PDF
Feng Liu, Yuzhen Niu and Hailin Jin. Joint Subspace Stabilization for Stereoscopic Video. IEEE ICCV 2013. PDF Website
Feng Liu, Michael Gleicher, Hailin Jin and Aseem Agarwala. Content-Preserving Warps for 3D Video Stabilization. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2009), 2009. PDF Website |
Yu-Shuen Wang, Feng Liu, Pu-Sheng Hsu, and Tong-Yee Lee. Spatially and Temporally Optimized Video Stabilization. IEEE Transactions on Visualization and Computer Graphics, 2013. Video Website
Michael Gleicher and Feng Liu. Re-cinematography: Improving the Camerawork of Casual Video. ACM Transactions on Multimedia Computing, Communications, and Applications. Vol. 5, Issue 1, Oct. 2008:1-28. PDF Website |
Demo Video
Download
Watch it below
Supplemental Video Set 1
Example 1: This video suffers from strong rolling shutter artifacts.
Example 2: The camera only pans, with little translation.
Example 3: Structure from motion (SFM) fails on this video since the scene is basically a large plane.
Supplemental Video Set 2
Example 1: In some frames of this video, people passing by occlude the camera. The number of long feature trajectories is dramatically reduced in factorization windows that contain any of these frames.
Example 2: A few frames of this video have severe motion blur due to occasional excessive camera shake. The number of long feature trajectories is dramatically reduced in factorization windows that contain any of these frames.
Example 3: In the middle of this video, a passing train briefly occludes the camera. The number of long feature trajectories is dramatically reduced in factorization windows during that period. In the later part of the video, the weakly textured train dominates the scene, which again reduces the number of long feature trajectories.
Supplemental Video 3: This video shows that when there is an excessive amount of scene dynamics, our method can still fail even with a sufficient number of long feature trajectories, because a single subspace cannot account for both the feature trajectories in the moving regions and those in the background.
Supplemental Video Set 4: Here we demonstrate the effect of step 3 of our algorithm by comparing simple Gaussian smoothing to our eigen-trajectory smoothing. Specifically, we use three different Gaussian kernel sizes: 40, 50, and 60 frames. (The default window size for the results reported in the paper is 50 frames.)
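For reference, the sketch below illustrates how a Gaussian kernel specified by a window size in frames (as in the 40/50/60-frame comparison above) can be applied to the rows of an eigen-trajectory matrix. The window-to-sigma mapping and the synthetic random-walk data are assumptions made for illustration, not the paper's exact parameters.

import numpy as np

def gaussian_kernel(window):
    """Normalized Gaussian kernel spanning `window` frames.

    The mapping sigma = window / 6 (so the kernel covers roughly
    +/- 3 sigma) is an illustrative assumption, not the paper's
    parameterization.
    """
    sigma = window / 6.0
    t = np.arange(window) - (window - 1) / 2.0
    k = np.exp(-0.5 * (t / sigma) ** 2)
    return k / k.sum()

def smooth_rows(E, window):
    """Low-pass filter each eigen-trajectory (each row of the r x F matrix E)."""
    k = gaussian_kernel(window)
    return np.stack([np.convolve(row, k, mode="same") for row in E])

# Compare the three kernel sizes from the supplemental videos on
# synthetic eigen-trajectories (random walks stand in for real data).
rng = np.random.default_rng(0)
E = np.cumsum(rng.standard_normal((9, 300)), axis=1)
for window in (40, 50, 60):
    E_smooth = smooth_rows(E, window)
    print(window, float(np.abs(E - E_smooth).mean()))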