Paper ID | HLT-9.2
Paper Title | ALIGNING THE TRAINING AND EVALUATION OF UNSUPERVISED TEXT STYLE TRANSFER
Authors | Wanhui Qian, Fuqing Zhu, Jinzhu Yang, Jizhong Han, Songlin Hu, Institute of Information Engineering, Chinese Academy of Sciences, China
Session | HLT-9: Style and Text Normalization
Location | Gather.Town
Session Time | Wednesday, 09 June, 16:30 - 17:15
Presentation Time | Wednesday, 09 June, 16:30 - 17:15
Presentation | Poster
Topic | Human Language Technology: [HLT-MLMD] Machine Learning Methods for Language
Abstract | In the text style transfer task, models modify the attribute style of given texts while keeping the style-irrelevant content unchanged. Previous work has proposed many approaches for non-parallel corpora (i.e., without style-to-style training pairs). These approaches are mostly driven by heuristic intuition and fail to precisely control the attributes of the generated texts, such as the amount of preserved semantics, which leaves a discrepancy between training and evaluation. This paper proposes a novel training method based on the evaluation metrics to address this discrepancy. Specifically, the model first evaluates different aspects of the transferred texts and obtains differentiable quality approximations from extra supervising modules. The model is then optimized by bridging the gap between these approximations and the expected values. Extensive experiments on two sentiment style datasets demonstrate the effectiveness of our proposal over several competitive baselines.
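The training idea described in the abstract (extra supervising modules produce differentiable approximations of the evaluation metrics, and the transfer model is optimized to close the gap between those approximations and the expected scores) can be illustrated with a minimal PyTorch-style sketch. Everything below is an assumption for illustration only: the module names (StyleScorer, ContentScorer), the use of sentence embeddings, and the particular gap losses are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleScorer(nn.Module):
    """Differentiable stand-in for the style-accuracy metric (a small classifier)."""
    def __init__(self, emb_dim=128, num_styles=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, num_styles))

    def forward(self, sent_emb):          # (batch, emb_dim) sentence embeddings
        return self.net(sent_emb)         # style logits

class ContentScorer(nn.Module):
    """Differentiable stand-in for the content-preservation metric (cosine similarity)."""
    def forward(self, src_emb, tgt_emb):
        return F.cosine_similarity(src_emb, tgt_emb, dim=-1)   # in [-1, 1]

def evaluation_aligned_loss(src_emb, transferred_emb, target_style,
                            style_scorer, content_scorer):
    """Score the transferred text and penalize the gap to the desired scores."""
    style_logits = style_scorer(transferred_emb)              # approximated style metric
    content_sim = content_scorer(src_emb, transferred_emb)    # approximated content metric
    style_gap = F.cross_entropy(style_logits, target_style)   # expectation: target style prob -> 1
    content_gap = (1.0 - content_sim).mean()                  # expectation: similarity -> 1
    return style_gap + content_gap

# Toy usage: random embeddings stand in for the transfer model's encoder outputs.
if __name__ == "__main__":
    batch, emb_dim = 4, 128
    src = torch.randn(batch, emb_dim)
    transferred = torch.randn(batch, emb_dim, requires_grad=True)
    loss = evaluation_aligned_loss(src, transferred,
                                   torch.ones(batch, dtype=torch.long),
                                   StyleScorer(emb_dim), ContentScorer())
    loss.backward()   # gradients would flow back into the transfer model
    print(float(loss))
```

In this sketch the supervising modules are trained (or pretrained) separately as metric approximators; during transfer-model training their outputs serve as differentiable surrogates for the evaluation scores, so minimizing the gap terms aligns the training objective with the evaluation.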