Paper ID | IVMSP-19.5 |
Paper Title | DNANet: Dense Nested Attention Network for Single Image Dehazing |
Authors | Dongdong Ren, Artificial Intelligence Institute, Qilu University of Technology and School of Computer Science and Technology, Heilongjiang University, China; Jinbao Li, Qilu University of Technology (Shandong Academy of Sciences), Shandong Artificial Intelligence Institute, China; Meng Han, Data-driven Intelligence Research (DIR) Lab, Kennesaw State University, United States; Minglei Shu, Qilu University of Technology (Shandong Academy of Sciences), Shandong Artificial Intelligence Institute, China |
Session | IVMSP-19: Deraining and Dehazing |
Location | Gather.Town |
Session Time | Thursday, 10 June, 13:00 - 13:45 |
Presentation Time | Thursday, 10 June, 13:00 - 13:45 |
Presentation | Poster |
Topic | Image, Video, and Multidimensional Signal Processing: [IVTEC] Image & Video Processing Techniques |
Abstract |
In this paper, we propose an innovative approach, called the Dense Nested Attention Network (DNANet), to directly restore a clear image from a hazy image through a new topology of connection paths. First, through dense nested connections from inside to outside, DNANet fuses both shallow and deep features from fine to coarse, greatly strengthening feature propagation and reuse. We use stacked dilated convolutions as the basic operation to alleviate the shortcomings of traditional context-aggregation methods. Second, we examine the weakness of skip connections by reasoning about the residual haze carried from shallow to deep layers of the network. To address this problem, we use an attention mechanism that filters out residual haze by capturing information relations over the entire skip feature maps. Third, we introduce an adjustable loss constraint on each block of the outermost nested structure to gather more accurate features. Extensive experiments demonstrate that DNANet outperforms state-of-the-art methods by a large margin on benchmark datasets. |
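Two of the ideas in the abstract can be illustrated with a minimal sketch: how stacking dilated convolutions grows the receptive field (the context-aggregation argument), and how an attention gate can attenuate skip features before fusion. This is a hedged illustration, not the authors' implementation; the gating function, helper names, and toy shapes are assumptions.

```python
import numpy as np

def receptive_field(dilations, kernel=3):
    """Receptive field of a stack of dilated convolutions.

    Each 3x3 layer with dilation d widens the receptive field by
    (kernel - 1) * d, so stacking dilations 1, 2, 4, ... aggregates
    context exponentially with depth.
    """
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gated_skip(skip_feat, gate_weight=1.0):
    """Gate a skip feature map with a per-element mask in (0, 1),
    suppressing weak (haze-like) activations carried by shallow
    features before they are fused with deeper features (sketch)."""
    mask = sigmoid(gate_weight * skip_feat)  # assumed gating form
    return mask * skip_feat

# Stacking dilations 1, 2, 4 yields a 15x15 receptive field.
rf = receptive_field([1, 2, 4])

# Toy 4x4 single-channel skip feature map.
feat = np.linspace(-2.0, 2.0, 16).reshape(4, 4)
gated = attention_gated_skip(feat)
```

Because the mask lies strictly in (0, 1), every gated activation has magnitude no larger than the original, which is the filtering behavior the abstract attributes to the attention on skip connections.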