Beyond Static Scenes: Camera-controllable Background Generation for Human Motion

Mingshuai Yao1,2    Mengting Chen2    Qinye Zhou2    Yabo Zhang1    Ming Liu1    Xiaoming Li1    Shaohui Liu1    Chen Ju2    Shuai Xiao2    Qingwen Liu2    Jinsong Lan2    Wangmeng Zuo1   
1Harbin Institute of Technology, Harbin, China  
2Taobao and Tmall Group  

Architecture

In this paper, we investigate the generation of new video backgrounds given a human foreground video, a camera pose, and a reference scene image. This task presents three key challenges. First, the generated background must precisely follow the camera movements implied by the human foreground. Second, as the camera shifts in different directions, newly revealed content should appear seamless and natural. Third, objects within the frame should maintain consistent textures as the camera moves to ensure visual coherence. To address these challenges, we propose DynaScene, a new framework that uses camera poses extracted from the original video as an explicit control to drive background motion. Specifically, we design a multi-task learning paradigm that incorporates auxiliary tasks, namely background outpainting and scene variation, to enhance the realism of the generated backgrounds. Given the scarcity of suitable data, we construct a large-scale, high-quality dataset tailored to this task, comprising video foregrounds, reference scene images, and corresponding camera poses. The dataset contains 200K video clips, ten times more than existing real-world human video datasets, providing a significantly richer and more diverse training resource.
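
As a rough illustration of the inputs and the multi-task objective described above, the Python sketch below shows one plausible way to organize a training sample (foreground video, reference scene image, per-frame camera poses, ground-truth background) and to combine the main background-generation loss with the auxiliary outpainting and scene-variation losses. The tensor shapes, task names, and loss weights are illustrative assumptions, not the authors' released implementation.

    # Minimal sketch, assuming PyTorch tensors and assumed task names/weights.
    from dataclasses import dataclass
    import torch
    import torch.nn.functional as F

    @dataclass
    class DynaSceneSample:
        foreground: torch.Tensor       # (T, 3, H, W) human foreground video frames
        reference_scene: torch.Tensor  # (3, H, W) reference scene image
        camera_poses: torch.Tensor     # (T, 4, 4) per-frame camera poses (assumed format)
        background_gt: torch.Tensor    # (T, 3, H, W) ground-truth background video

    def multitask_loss(pred, target, weights=None):
        # Weighted sum over the main task and the two auxiliary tasks;
        # the weights here are placeholders, not values from the paper.
        weights = weights or {"background": 1.0, "outpainting": 0.5, "scene_variation": 0.5}
        total = torch.zeros(())
        for task, w in weights.items():
            if task in pred and task in target:
                total = total + w * F.mse_loss(pred[task], target[task])
        return total

    # Toy usage with random tensors, only to show the expected dictionary structure.
    T, H, W = 8, 64, 64
    pred = {k: torch.randn(T, 3, H, W) for k in ("background", "outpainting", "scene_variation")}
    target = {k: torch.randn(T, 3, H, W) for k in pred}
    print(multitask_loss(pred, target))

In an actual system, a generative backbone conditioned on the reference scene and camera poses would produce the per-task predictions; the sketch only fixes the interface between the main task and the two auxiliary tasks.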