| Paper ID | IVMSP-22.1 |
| Paper Title | STEREO RECTIFICATION BASED ON EPIPOLAR CONSTRAINED NEURAL NETWORK |
| Authors | Yuxing Wang, Yawen Lu, Guoyu Lu, Rochester Institute of Technology, United States |
| Session | IVMSP-22: Image & Video Sensing, Modeling and Representation |
| Location | Gather.Town |
| Session Time | Thursday, 10 June, 14:00 - 14:45 |
| Presentation Time | Thursday, 10 June, 14:00 - 14:45 |
| Presentation | Poster |
| Topic | Image, Video, and Multidimensional Signal Processing: [IVELI] Electronic Imaging |
| Abstract | This paper proposes a novel deep neural network-based method for stereo image rectification. The network builds on epipolar constraints from multi-view geometry and on intensity constraints between images, which together describe the relationship between corresponding epipolar lines in a pair of images: the slope and y-intercept consistency of the corresponding epipolar lines and the consistency of the corresponding intensity values between the two images. Benefiting from the designed rectification framework together with a feature matching module that extracts accurate corresponding keypoints between views, our method realizes a stable and accurate stereo rectification process. Compared with classic feature-based rectification methods, the proposed method achieves smaller rectification errors and much more accurate rectification performance. Experiments conducted on a synthetic face dataset and the real-world KITTI dataset demonstrate the effectiveness and robustness of the proposed method. |
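
For intuition only, the sketch below is a minimal illustration of the two consistency ideas named in the abstract, not the paper's actual network or loss: after ideal rectification, matched keypoints lie on the same image row (so the line joining a matched pair has zero slope and the same y-intercept in both views), and the pixel intensities at matched locations should agree. The function names, the NumPy formulation, and the equal weighting of the two epipolar terms are assumptions made for illustration.

```python
import numpy as np

def epipolar_alignment_loss(pts_left, pts_right, eps=1e-6):
    """Toy penalty on matched keypoints violating rectified epipolar geometry.

    pts_left, pts_right: (N, 2) arrays of matched (x, y) pixel coordinates.
    After ideal rectification corresponding points share the same row, so the
    line joining a matched pair has zero slope and equal y-intercept.
    """
    dy = pts_left[:, 1] - pts_right[:, 1]        # vertical misalignment
    dx = pts_left[:, 0] - pts_right[:, 0]        # horizontal disparity
    slope = dy / (np.abs(dx) + eps)              # slope of the joining line
    slope_term = np.mean(np.abs(slope))          # slope consistency
    intercept_term = np.mean(np.abs(dy))         # y-intercept consistency
    return slope_term + intercept_term

def intensity_consistency_loss(img_left, img_right, pts_left, pts_right):
    """Toy photometric term: intensities at matched pixels should agree."""
    xl, yl = pts_left[:, 0].astype(int), pts_left[:, 1].astype(int)
    xr, yr = pts_right[:, 0].astype(int), pts_right[:, 1].astype(int)
    diff = img_left[yl, xl].astype(float) - img_right[yr, xr].astype(float)
    return np.mean(np.abs(diff))

# Toy usage: a perfectly rectified pair with pure horizontal disparity,
# so both terms evaluate to 0.0.
img_l = np.random.rand(480, 640)
img_r = np.roll(img_l, -10, axis=1)              # shift left by 10 pixels
pts_l = np.array([[100, 50], [200, 120], [320, 300]], dtype=float)
pts_r = pts_l - np.array([10.0, 0.0])            # same rows, shifted columns
print(epipolar_alignment_loss(pts_l, pts_r))                     # 0.0
print(intensity_consistency_loss(img_l, img_r, pts_l, pts_r))    # 0.0
```

In the paper these constraints drive a learned rectification network; the sketch only evaluates how well an already-rectified pair satisfies them, which is the role such terms would play as training losses.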