Paper ID | MMSP-7.4
Paper Title | COOPNET: MULTI-MODAL COOPERATIVE GENDER PREDICTION IN SOCIAL MEDIA USER PROFILING
Authors | Lin Li, Kaixi Hu, Yunpei Zheng, Wuhan University of Technology, China; Jianquan Liu, NEC Corporation, Japan; Kong Aik Lee, Agency for Science, Technology and Research (A*STAR), Singapore
Session | MMSP-7: Multimodal Perception, Integration and Multisensory Fusion
Location | Gather.Town
Session Time | Friday, 11 June, 13:00 - 13:45
Presentation Time | Friday, 11 June, 13:00 - 13:45
Presentation | Poster
Topic | Multimedia Signal Processing: Human Centric Multimedia
Abstract |
The principal way of performing user profiling is to investigate accumulated social media data. However, information asymmetry is common in user-generated content, since users freely post multi-modal content on social media. In this paper, we propose COOPNET, a novel text-image cooperation framework built on a bridge-connection network architecture that exchanges information between texts and images. First, we map the representations of the visual modality and the sentiment-enriched textual modality into a cooperative semantic space to derive a cooperative representation. Next, the representations of texts and images are combined with their cooperative representation to exchange knowledge during learning. Finally, multi-modal regression is leveraged to make cooperative decisions. Extensive experiments on the public PAN-2018 dataset demonstrate the efficacy of our framework over state-of-the-art methods under fully automatic feature learning.
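The three-step pipeline in the abstract (project both modalities into a cooperative semantic space, combine each modality with the cooperative representation, then regress) can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation: the function name `coopnet_sketch`, the shared-space dimension, the averaging used to form the cooperative representation, and all weight matrices are assumptions introduced here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def coopnet_sketch(text_emb, img_emb, dim=64):
    """Hypothetical sketch of a COOPNET-style cooperative fusion.

    text_emb, img_emb: (batch, d_t) and (batch, d_v) modality embeddings.
    All projections are random stand-ins for learned layers.
    """
    d_t, d_v = text_emb.shape[-1], img_emb.shape[-1]
    # Step 1: map each modality into a shared (cooperative) semantic space.
    W_t = rng.standard_normal((d_t, dim)) / np.sqrt(d_t)
    W_v = rng.standard_normal((d_v, dim)) / np.sqrt(d_v)
    z_t, z_v = text_emb @ W_t, img_emb @ W_v
    # Cooperative representation: here simply the shared-space average
    # of the two modalities (an assumption, not the paper's exact choice).
    z_c = 0.5 * (z_t + z_v)
    # Step 2: each modality is combined with the cooperative representation,
    # letting text and image exchange knowledge through the shared bridge.
    h = np.concatenate([z_t, z_c, z_v], axis=-1)
    # Step 3: multi-modal regression head -> gender score in (0, 1).
    w = rng.standard_normal(h.shape[-1]) / np.sqrt(h.shape[-1])
    return 1.0 / (1.0 + np.exp(-(h @ w)))

# Toy usage with 300-d text and 512-d image embeddings for one user.
score = coopnet_sketch(rng.standard_normal((1, 300)),
                       rng.standard_normal((1, 512)))
```

In a trained model the random projections would be learned layers and the regression head would be optimized against gender labels; the sketch only shows how the cooperative representation bridges the two modalities before the final decision.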