Subspace Video Stabilization

Feng Liu1, Michael Gleicher2, Jue Wang3, Hailin Jin3 and Aseem Agarwala3

1Computer Science Department, Portland State University

2Computer Sciences Department, University of Wisconsin-Madison

3Adobe Systems Inc.

Abstract

We present a robust and efficient approach to video stabilization that achieves high-quality camera motion for a wide range of videos. In this paper, we focus on the problem of transforming a set of input 2D motion trajectories so that they are both smooth and resemble visually plausible views of the imaged scene; our key insight is that we can achieve this goal by enforcing subspace constraints on feature trajectories while smoothing them. Our approach assembles tracked features in the video into a trajectory matrix, factors it into two low-rank matrices, and performs filtering or curve fitting in a low-dimensional linear space. In order to process long videos, we propose a moving factorization that is both efficient and streamable. Our experiments confirm that our approach can efficiently provide stabilization results comparable with prior 3D methods in cases where those methods succeed, but also provides smooth camera motions in cases where such approaches often fail, such as videos that lack parallax. The presented approach offers the first method that both achieves high-quality video stabilization and is practical enough for consumer applications.
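As a rough, hedged illustration of this pipeline, the sketch below (Python) factors a complete trajectory matrix with a single truncated SVD and smooths the resulting eigen-trajectories; the paper's moving factorization is streamable and handles tracks that enter and leave the video, which this toy version does not. The rank and smoothing strength are illustrative defaults.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def stabilize_trajectories(M, rank=9, sigma=50 / 6.0):
        """Smooth a trajectory matrix M of shape (2n, k): n tracked features
        over k frames, with each feature's x and y rows adjacent.

        Sketch only: one truncated SVD over a complete matrix, in place of
        the paper's moving factorization.
        """
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        C = U[:, :rank] * s[:rank]   # coefficient matrix, shape (2n, rank)
        E = Vt[:rank]                # eigen-trajectories, shape (rank, k)
        E_smooth = gaussian_filter1d(E, sigma=sigma, axis=1)  # filter over time
        return C @ E_smooth          # smoothed trajectories, shape (2n, k)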

Paper
Feng Liu, Michael Gleicher, Jue Wang, Hailin Jin, and Aseem Agarwala. Subspace Video Stabilization. ACM Transactions on Graphics (presented at SIGGRAPH 2011), Vol. 30, Issue 1, 2011: 4:1-4:10. PDF
Related Projects
Chengzhou Tang, Oliver Wang, Feng Liu, and Ping Tan. Joint Direction and Stabilization for 360° Videos. ACM Transactions on Graphics, 2019. PDF
Feng Liu, Yuzhen Niu, and Hailin Jin. Joint Subspace Stabilization for Stereoscopic Video. IEEE ICCV 2013. PDF Website
Feng Liu, Michael Gleicher, Hailin Jin, and Aseem Agarwala. Content-Preserving Warps for 3D Video Stabilization. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2009), 2009. PDF Website
Yu-Shuen Wang, Feng Liu, Pu-Sheng Hsu, and Tong-Yee Lee. Spatially and Temporally Optimized Video Stabilization. IEEE Transactions on Visualization and Computer Graphics, 2013. Video Website
Michael Gleicher and Feng Liu. Re-cinematography: Improving the Camerawork of Casual Video. ACM Transactions on Multimedia Computing, Communications, and Applications, Vol. 5, Issue 1, Oct. 2008: 1-28. PDF Website
Demo Video 
Download        

Watch it below   

Supplemental Video Set 1
Three of the 15 videos mentioned in our paper on which Voodoo failed. For each, we also show a comparison between Liu et al. 09 and our method.

Example 1: This video suffers from strong rolling shutter artifacts.
Input Liu et al. 09 vs Our method
Example 2: The camera only pans, with little translation.
Input Liu et al. 09 vs Our method
Example 3: SFM fails on this video because the scene is essentially a single large plane.
Input Liu et al. 09 vs Our method

Supplemental Video Set 2
The three new videos, suggested by a reviewer and mentioned in our paper, on which our method failed due to an insufficient number of long feature trajectories. Because the moving factorization step failed, the warping step had no smooth feature trajectories to guide it; as a result, our method could not produce any output for this category of failure.
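For intuition, here is a minimal sketch (our illustration, not code from the paper) of how a factorization window might be flagged as lacking long trajectories; the 50-frame window matches the default reported below, while min_tracks is a hypothetical threshold.

    def enough_long_tracks(track_spans, win_start, win_len=50, min_tracks=20):
        """Return True if enough tracks cover the whole factorization window
        [win_start, win_start + win_len).

        track_spans: (first_frame, last_frame) pairs, one per feature track.
        min_tracks is hypothetical; at minimum it must exceed the subspace
        rank for the factorization to be well posed.
        """
        win_end = win_start + win_len - 1
        covering = sum(1 for first, last in track_spans
                       if first <= win_start and last >= win_end)
        return covering >= min_tracks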

Example 1: In some frames of this video, passers-by occasionally occlude the camera. The number of long feature trajectories drops dramatically in factorization windows that contain any of these frames.
Input
Example 2: A few frames of this video exhibit severe motion blur caused by occasional excessive shake. The number of long feature trajectories drops dramatically in factorization windows that contain any of these frames.
Input
Example 3: In the middle of this video, a passing train quickly occludes the camera, dramatically reducing the number of long feature trajectories in factorization windows during that period. In the later part of the video, the train, which lacks texture, dominates the scene, again reducing the number of long feature trajectories.
Input

Supplemental Video 3

This video shows that when the scene contains excessive dynamics, our method can still fail even with a sufficient number of long feature trajectories, because a single subspace cannot account for both the feature trajectories in the moving region and those on the background.

Input Our result
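One possible diagnostic for this failure mode, again our illustration under the same complete-matrix assumption as the earlier sketch and not part of the paper's method, is the per-track residual of the best single low-rank fit:

    import numpy as np

    def per_track_subspace_residual(M, rank=9):
        """RMS residual of each track against the best rank-r approximation
        of the trajectory matrix M, shape (2n, k). A cluster of tracks with
        large residuals (e.g., on a moving object) suggests that one subspace
        cannot model both the dynamic region and the background.
        """
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        M_fit = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        err = (M - M_fit).reshape(-1, 2, M.shape[1])  # (track, x/y, frame)
        return np.sqrt((err ** 2).mean(axis=(1, 2)))  # one value per track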
Supplemental Video Set 4
Here we demonstrate the effect of step 3 of our algorithm by comparing simple Gaussian smoothing of the raw trajectories to our eigen-trajectory smoothing. Specifically, we use three different Gaussian kernel sizes: 40, 50, and 60 frames. (The default window size for the results reported in the paper is 50 frames.)
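The comparison can be expressed in a few lines, reusing M, C, and E from the factorization sketch above; converting a kernel size in frames to a Gaussian sigma is our rule of thumb, since the kernel sizes here are given in frames.

    from scipy.ndimage import gaussian_filter1d

    def compare_smoothing(M, C, E, windows=(40, 50, 60)):
        """For each kernel size, contrast (a) simple smoothing, which filters
        each raw trajectory in M independently, with (b) our scheme, which
        filters the eigen-trajectories E and recomposes via C so that every
        smoothed track stays inside the subspace of plausible motions.
        """
        results = {}
        for w in windows:
            sigma = w / 6.0  # rule of thumb: nearly all kernel mass within w frames
            simple = gaussian_filter1d(M, sigma=sigma, axis=1)
            ours = C @ gaussian_filter1d(E, sigma=sigma, axis=1)
            results[w] = (simple, ours)
        return results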

18AF:
Input
Ours vs. simple smoothing, at kernel sizes of 40, 50, and 60 frames.

47E2:
Input
Ours vs. simple smoothing, at kernel sizes of 40, 50, and 60 frames.