factormatte.github.io - FactorMatte: Redefining Video Matting for Re-Composition Tasks

Description: FactorMatte makes foreground objects in real-life videos invisible.



We propose factor matting, an alternative formulation of the video matting problem in terms of counterfactual video synthesis that is better suited for re-composition tasks. The goal of factor matting is to separate the contents of a video into independent components, each visualizing a counterfactual version of the scene where the contents of the other components have been removed. We show that factor matting maps well to a more general Bayesian framing of the matting problem that accounts for complex conditional interactions between components.

Our method is trained per-video and requires neither pre-training on external large datasets nor knowledge of the 3D structure of the scene. We conduct extensive experiments and show that our method not only disentangles scenes with complex interactions, but also outperforms top methods on existing tasks such as classical video matting and background subtraction. In addition, we demonstrate the benefits of our approach on a range of downstream tasks.

We reframe video matting in terms of counterfactual video synthesis for downstream re-compositing tasks, where each counterfactual video answers a question of the form “what would this component look like if we froze time and separated it from the rest of the scene?” We developed a plug-in for Adobe After Effects for faster re-composition, and used it to produce results in the rightmost column.
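Once a video has been factored into per-component color and alpha layers, re-composition reduces to the classical matting equation, I = αF + (1 − α)B, applied per pixel. The sketch below illustrates this with NumPy; the function name and toy data are illustrative assumptions, not part of the authors' method or their After Effects plug-in.

```python
import numpy as np

def composite(fg, alpha, bg):
    """Alpha-composite a foreground layer over a background.

    fg, bg: float arrays of shape (H, W, 3) with values in [0, 1]
    alpha:  float array of shape (H, W, 1) with values in [0, 1]
    Implements the classical matting equation I = alpha*F + (1 - alpha)*B.
    """
    return alpha * fg + (1.0 - alpha) * bg

# Toy 2x2 frame: the left column is fully foreground (alpha = 1),
# the right column is fully background (alpha = 0).
fg = np.ones((2, 2, 3)) * np.array([1.0, 0.0, 0.0])   # red foreground layer
bg = np.ones((2, 2, 3)) * np.array([0.0, 0.0, 1.0])   # blue background layer
alpha = np.array([[[1.0], [0.0]],
                  [[1.0], [0.0]]])

out = composite(fg, alpha, bg)
# Left pixels come out red, right pixels blue: swapping in a different
# bg re-composites the factored foreground into a new scene.
```

Counterfactual components make this step well-posed: because each layer shows the scene as if the others were absent, the background layer contains no residue of the foreground object (shadows, reflections, deformations) that would otherwise bleed into the new composite.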
