Paper ID: ARS-2.6
Paper Title: DYNAMIC DUAL SAMPLING MODULE FOR FINE-GRAINED SEMANTIC SEGMENTATION
Authors: Chen Shi, Shanghai Jiao Tong University, China; Xiangtai Li, Peking University, China; Yanran Wu, Shanghai Jiao Tong University, China; Yunhai Tong, Peking University, China; Yi Xu, Shanghai Jiao Tong University, China
Session: ARS-2: Image and Video Segmentation
Location: Area I
Session Time: Monday, 20 September, 15:30 - 17:00
Presentation Time: Monday, 20 September, 15:30 - 17:00
Presentation: Poster
Topic: Image and Video Analysis, Synthesis, and Retrieval: Image & Video Interpretation and Understanding
IEEE Xplore Open Preview: available in IEEE Xplore
Abstract: Representing semantic context and local details is an essential issue in building modern semantic segmentation models. However, the interrelationship between semantic context and local details has not been well explored in previous work. In this paper, we propose a Dynamic Dual Sampling Module (DDSM) that conducts dynamic affinity modeling and propagates semantic context to local details, yielding a more discriminative representation. Specifically, a dynamic sampling strategy sparsely samples representative pixels and channels in the higher layer, forming an adaptive compact support for each pixel and channel in the lower layer. The sampled features with high semantics are aggregated according to the affinities and then propagated to the detailed lower-layer features, leading to fine-grained segmentation results with well-preserved boundaries. Experimental results on both the Cityscapes and CamVid datasets validate the effectiveness and efficiency of the proposed approach. Code and models will be available at https://github.com/Fantasticarl/DDSM.
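The spatial branch described in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy version, not the authors' implementation: for each lower-layer pixel it forms affinities with all higher-layer pixels, sparsely keeps only the top-k most-affine ones (the "adaptive compact support"), aggregates them with softmax weights, and adds the aggregated context back to the detailed features. The function name, the choice of dot-product affinity, and the residual addition are all illustrative assumptions.

```python
import numpy as np

def dynamic_dual_sampling(low_feat, high_feat, k=4):
    """Toy sketch of the sampling-and-propagation idea (assumed from the
    abstract). low_feat: (C, Hl, Wl) detailed lower-layer features;
    high_feat: (C, Hh, Wh) semantic higher-layer features."""
    C, Hl, Wl = low_feat.shape
    low = low_feat.reshape(C, -1).T      # (Nl, C) query pixels
    high = high_feat.reshape(C, -1).T    # (Nh, C) context pixels
    affinity = low @ high.T              # (Nl, Nh) pairwise affinities
    # Dynamic sparse sampling: keep only the k most-affine context
    # pixels per query, forming a compact support for each pixel.
    idx = np.argsort(-affinity, axis=1)[:, :k]            # (Nl, k)
    topk = np.take_along_axis(affinity, idx, axis=1)      # (Nl, k)
    w = np.exp(topk - topk.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                     # softmax weights
    # Aggregate sampled high-semantic features by affinity ...
    context = (w[..., None] * high[idx]).sum(axis=1)      # (Nl, C)
    # ... and propagate them to the detailed lower-layer features.
    out = low + context
    return out.T.reshape(C, Hl, Wl)
```

In the full module a symmetric channel branch would sample representative channels in the same fashion; the sketch above covers only the pixel (spatial) side.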