Paper ID | IVMSP-33.2 | ||
Paper Title | MULTI-DIRECTIONAL CONVOLUTION NETWORKS WITH SPATIAL-TEMPORAL FEATURE PYRAMID MODULE FOR ACTION RECOGNITION | ||
Authors | Bohong Yang, Zijian Wang, Wu Ran, Hong Lu, Fudan University, China; Yi-Ping Phoebe Chen, La Trobe University, Australia | ||
Session | IVMSP-33: Action Recognition | ||
Location | Gather.Town | ||
Session Time: | Friday, 11 June, 14:00 - 14:45 | ||
Presentation Time: | Friday, 11 June, 14:00 - 14:45 | ||
Presentation | Poster | ||
Topic | Image, Video, and Multidimensional Signal Processing: [IVSMR] Image & Video Sensing, Modeling, and Representation | ||
IEEE Xplore Open Preview | Click here to view in IEEE Xplore | ||
Abstract | Recent work shows that factorizing 3D convolutional filters into separate spatial and temporal components brings impressive improvements in action recognition. However, a traditional temporal convolution operating along the temporal dimension aggregates unrelated features, since the feature maps of fast-moving objects shift spatial position across frames. In this paper, we propose a novel and effective Multi-Directional Convolution (MDConv), which extracts features along different spatial-temporal orientations. Notably, MDConv has the same FLOPs and parameter count as a traditional 1D temporal convolution. We also propose the Spatial-Temporal Feature Pyramid Module (STFPM), which fuses spatial semantics at different scales in a lightweight way. Our extensive experiments show that models integrated with MDConv achieve better accuracy on several large-scale action recognition benchmarks, including the Kinetics, AVA, and Something-Something V1 & V2 datasets. |
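The MDConv operator is not specified in this listing, so the following is only a toy single-channel NumPy sketch of the idea described in the abstract: sampling a 1D temporal kernel along a tilted spatial-temporal line (direction `(dh, dw)` per time step) instead of a fixed spatial position. The function name and parameterization are hypothetical, not the authors' implementation; with `(dh, dw) == (0, 0)` it reduces to an ordinary temporal convolution, and the tilted variant uses the same number of taps, consistent with the claim of identical FLOPs and parameters.

```python
import numpy as np

def directional_temporal_conv(x, weights, dh, dw):
    """Hypothetical sketch: temporal convolution sampled along a
    spatial-temporal direction (dh, dw), single-channel clip.

    x: (T, H, W) array; weights: length-K temporal kernel, K odd.
    Each output element uses K multiply-adds, matching the cost of
    a plain 1D temporal convolution; borders are zero-padded.
    """
    T, H, W = x.shape
    K = len(weights)
    r = K // 2
    out = np.zeros_like(x)
    for t in range(T):
        for k in range(K):
            dt = k - r                      # temporal offset of this tap
            ts, hs, ws = t + dt, dh * dt, dw * dt
            if not 0 <= ts < T:
                continue                    # zero padding in time
            # sample frame ts shifted by (hs, ws); zero-pad spatial borders
            shifted = np.zeros((H, W))
            h0, h1 = max(0, -hs), min(H, H - hs)
            w0, w1 = max(0, -ws), min(W, W - ws)
            shifted[h0:h1, w0:w1] = x[ts, h0 + hs:h1 + hs, w0 + ws:w1 + ws]
            out[t] += weights[k] * shifted
    return out
```

A multi-directional layer in the spirit of the abstract would run several such directions (e.g. `(0, 0)`, `(1, 0)`, `(0, 1)`, `(1, 1)`) on different channel groups, so fast-moving objects can be aggregated along their motion trajectory rather than at a stale spatial location.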