Paper ID | ARS-5.4
Paper Title | LEARNING NON-LINEAR DISENTANGLED EDITING FOR STYLEGAN
Authors | Xu Yao, Alasdair Newson, Yann Gousseau, Telecom Paris, France; Pierre Hellier, InterDigital, France
Session | ARS-5: Image and Video Synthesis, Rendering and Visualization
Location | Area I
Session Time | Tuesday, 21 September, 08:00 - 09:30
Presentation Time | Tuesday, 21 September, 08:00 - 09:30
Presentation | Poster
Topic | Image and Video Analysis, Synthesis, and Retrieval: Image & Video Synthesis, Rendering, and Visualization
Abstract | Recent work has demonstrated the great potential of image editing in the latent space of powerful deep generative models such as StyleGAN. However, the success of such methods relies on the assumption that a linear hyperplane may separate the latent space into two subspaces for a binary attribute. In this work, we show that this hypothesis is a significant limitation and propose to learn a non-linear, regularized and identity-preserving latent space transformation that leads to more accurate and disentangled manipulations of facial attributes.
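
The abstract contrasts the standard linear editing assumption (moving a latent code along a fixed hyperplane normal) with a learned non-linear, regularized transformation. The sketch below illustrates that contrast only in broad strokes; it is not the paper's actual method. The module name `NonLinearEditor`, the residual MLP architecture, the hypothetical `attribute_score` classifier, and the loss weighting are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 512  # typical StyleGAN W-space dimensionality (assumption)


def linear_edit(w: torch.Tensor, direction: torch.Tensor, alpha: float) -> torch.Tensor:
    """Classical linear edit: shift the latent code along one fixed direction.

    This encodes the hyperplane-separability assumption that the paper
    identifies as a limitation.
    """
    return w + alpha * direction


class NonLinearEditor(nn.Module):
    """Toy non-linear latent transformation T: W -> W.

    Assumption: a small residual MLP; the paper's exact editing network
    is not specified in this listing.
    """

    def __init__(self, dim: int = LATENT_DIM, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.LeakyReLU(0.2),
            nn.Linear(hidden, dim),
        )

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        # Residual form keeps the edited code close to the original,
        # loosely reflecting the identity-preservation goal.
        return w + self.net(w)


def editing_loss(w, w_edit, attribute_score, target, lambda_reg=1.0):
    """Illustrative objective: move a (hypothetical) attribute classifier's
    prediction toward the target while regularizing the latent displacement."""
    attr_loss = F.binary_cross_entropy_with_logits(attribute_score(w_edit), target)
    reg_loss = (w_edit - w).pow(2).mean()  # proximity / identity regularizer
    return attr_loss + lambda_reg * reg_loss


# Usage sketch (random tensors stand in for real StyleGAN latent codes):
w = torch.randn(4, LATENT_DIM)
editor = NonLinearEditor()
w_edited = editor(w)
```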