Paper ID | MMSP-3.6 |
Paper Title | DISENTANGLING SUBJECT-DEPENDENT/-INDEPENDENT REPRESENTATIONS FOR 2D MOTION RETARGETING |
Authors | Fanglu Xie, Go Irie, Tatsushi Matsubayashi, Nippon Telegraph and Telephone Corporation, Japan |
Session | MMSP-3: Multimedia Synthesis and Enhancement |
Location | Gather.Town |
Session Time | Wednesday, 09 June, 14:00 - 14:45 |
Presentation Time | Wednesday, 09 June, 14:00 - 14:45 |
Presentation | Poster |
Topic | Multimedia Signal Processing: Signal Processing for Multimedia Applications |
Abstract | We consider the problem of 2D motion retargeting: transferring the motion of one 2D skeleton to another skeleton with a different body shape. Existing methods decompose the input motion skeleton into dynamic (motion) and static (body shape, viewpoint angle, and emotion) features and synthesize a new skeleton by mixing features extracted from different data. However, the resulting motion skeletons do not reflect subject-dependent factors that stylize motion, such as skill and expressions, leading to unattractive results. In this work, we propose a novel network that separates subject-dependent and subject-independent motion features and reconstructs a new skeleton with or without the subject-dependent ones. The core of our approach is adversarial feature disentanglement: the motion features and a subject classifier are trained simultaneously such that subject-dependent motion features allow between-subject discrimination, whereas subject-independent features do not. The presence or absence of individuality is readily controlled by a simple summation of the motion features. Our method outperforms the state-of-the-art method in terms of reconstruction error and can generate new skeletons while preserving individuality. |
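Below is a minimal sketch of the adversarial disentanglement idea described in the abstract, assuming a PyTorch-style setup. It is not the authors' implementation: all names (MotionEncoder, subject_clf, mix_features), the feature dimension, and the uniform-target entropy loss are hypothetical illustrations of one common way to realize "subject-dependent features discriminate subjects, subject-independent features do not."

```python
# Hypothetical sketch of adversarial feature disentanglement for
# 2D motion retargeting. Names, sizes, and losses are assumptions,
# not the paper's actual architecture.
import torch
import torch.nn as nn

FEAT_DIM = 128       # assumed motion-feature size
NUM_SUBJECTS = 10    # assumed number of training subjects

class MotionEncoder(nn.Module):
    """Stand-in encoder: maps a flattened pose sequence to
    subject-dependent and subject-independent motion features."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.dep_head = nn.Linear(in_dim, FEAT_DIM)    # subject-dependent branch
        self.indep_head = nn.Linear(in_dim, FEAT_DIM)  # subject-independent branch

    def forward(self, x: torch.Tensor):
        return self.dep_head(x), self.indep_head(x)

subject_clf = nn.Linear(FEAT_DIM, NUM_SUBJECTS)  # shared subject classifier
ce = nn.CrossEntropyLoss()

def disentanglement_losses(f_dep, f_indep, subject_ids):
    # Subject-dependent features SHOULD identify the subject.
    loss_dep = ce(subject_clf(f_dep), subject_ids)
    # Subject-independent features should NOT: push the classifier's
    # prediction toward uniform (maximum entropy). A gradient-reversal
    # layer would be a common alternative to this uniform-target loss.
    log_probs = torch.log_softmax(subject_clf(f_indep), dim=1)
    loss_indep = -log_probs.mean()
    return loss_dep, loss_indep

def mix_features(f_dep, f_indep, keep_individuality=True):
    # "Presence or absence of individuality is readily controlled by a
    # simple summation of the motion features."
    return f_indep + f_dep if keep_individuality else f_indep

# Toy usage: batch of 4 flattened 2D pose sequences (sizes arbitrary).
x = torch.randn(4, 2 * 15 * 16)
enc = MotionEncoder(x.shape[1])
f_dep, f_indep = enc(x)
labels = torch.randint(0, NUM_SUBJECTS, (4,))
l_dep, l_indep = disentanglement_losses(f_dep, f_indep, labels)
```

In a full adversarial setup, the classifier and the encoder would be updated with opposing objectives (e.g., in alternating steps), and a decoder would reconstruct the new skeleton from the mixed features together with the static (body shape, viewpoint) features.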