Paper ID | IVMSP-14.5
Paper Title | uTDN: An Unsupervised Two-Stream Dirichlet-Net for Hyperspectral Unmixing
Authors | Qiwen Jin, Yong Ma, Xiaoguang Mei, Wuhan University, China; Hao Li, Wuhan Polytechnic University, China; Jiayi Ma, Wuhan University, China
Session | IVMSP-14: Hyperspectral Imaging |
Location | Gather.Town |
Session Time | Wednesday, 09 June, 15:30 - 16:15
Presentation Time | Wednesday, 09 June, 15:30 - 16:15
Presentation | Poster
Topic | Image, Video, and Multidimensional Signal Processing: [IVARS] Image & Video Analysis, Synthesis, and Retrieval
Abstract |
Recently, learning-based methods have received much attention in unsupervised hyperspectral unmixing, yet their ability to extract physically meaningful endmembers remains limited and their performance has not been satisfactory. In this paper, we propose a novel two-stream Dirichlet-net, termed uTDN, to address these problems. The weight-sharing architecture makes it possible to transfer the intrinsic properties of the endmembers during the unmixing process, which helps steer the network toward a more accurate and interpretable unmixing solution. In addition, a stick-breaking process is adopted to encourage the latent representation to follow a Dirichlet distribution, so that the physical constraints on the estimated abundances are naturally incorporated. Extensive experiments on both synthetic and real hyperspectral data demonstrate that the proposed uTDN outperforms other state-of-the-art approaches.
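The stick-breaking construction mentioned in the abstract is what keeps the estimated abundances non-negative and sum-to-one (the usual abundance constraints). Below is a minimal PyTorch-style sketch of a generic stick-breaking output head illustrating that idea; the module and variable names are hypothetical, and this is an illustration of the general technique under assumed dimensions, not the authors' implementation.

```python
# Illustrative stick-breaking head: maps latent features to a vector on the
# probability simplex (non-negative entries that sum to one). Not the uTDN
# code; names and shapes are assumptions for the sketch.
import torch
import torch.nn as nn


class StickBreakingHead(nn.Module):
    """Turns encoder features into K abundance fractions via stick breaking."""

    def __init__(self, in_dim: int, num_endmembers: int):
        super().__init__()
        # K-1 "break" fractions are enough to define K abundances.
        self.fc = nn.Linear(in_dim, num_endmembers - 1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # v_k in (0, 1): fraction of the remaining stick broken off at step k.
        v = torch.sigmoid(self.fc(features))                 # (B, K-1)
        # Remaining stick length before each break: prod_{j<k}(1 - v_j).
        remaining = torch.cumprod(1.0 - v, dim=-1)            # (B, K-1)
        remaining = torch.cat(
            [torch.ones_like(remaining[:, :1]), remaining], dim=-1
        )                                                      # (B, K)
        # a_k = v_k * prod_{j<k}(1 - v_j); the last piece takes what is left.
        pieces = torch.cat([v, torch.ones_like(v[:, :1])], dim=-1)
        return pieces * remaining                              # rows sum to 1


# Example: 10 pixels, 32-dimensional latent features, 4 endmembers.
head = StickBreakingHead(in_dim=32, num_endmembers=4)
abundances = head(torch.randn(10, 32))
print(abundances.sum(dim=-1))  # each entry equals 1.0 up to numerical precision
```

Because the output is a valid point on the simplex by construction, no explicit sum-to-one penalty is needed, which is consistent with the abstract's claim that the physical property of the abundances is incorporated naturally.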