Paper ID | MLR-APPL-BSIP.3
Paper Title | EEG BASED VISUAL CLASSIFICATION WITH MULTI-FEATURE JOINT LEARNING
Authors | Xin Ma, Yiping Duan, Shuzhan Hu, Xiaoming Tao, Ning Ge, Tsinghua University, China
Session | MLR-APPL-BSIP: Machine learning for biomedical signal and image processing
Location | Area C
Session Time | Wednesday, 22 September, 08:00 - 09:30
Presentation Time | Wednesday, 22 September, 08:00 - 09:30
Presentation | Poster
Topic | Applications of Machine Learning: Machine learning for biomedical signal and image processing
Abstract | With significant advances in neuroscience and artificial intelligence, decoding the process of human vision has become a popular research topic over the last few decades. Although many deep learning models have been employed to explore the workings of human brain activity, the accuracy and reliability of visual classification based on electroencephalography (EEG) still leave room for improvement. In our research, we designed experiments to collect subjects' EEG data while they viewed different types of images. In this way, an image-EEG dataset corresponding to 80 ImageNet object classes was constructed. We then proposed a dual-EEGNet that performs joint feature learning for multi-category visual classification. Specifically, one EEGNet branch extracts spatio-temporal embeddings of the EEG signals, and the other branch extracts their time-frequency embeddings. The experimental results demonstrate that EEG signals reflect human brain activity and can distinguish between different types of images. Moreover, the proposed model with joint features achieves better classification accuracy than competing methods.
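The abstract describes a two-branch architecture whose embeddings are concatenated into a joint feature for classification. Below is a minimal, hypothetical sketch of that idea in PyTorch; the input shapes, the 80-class output, and the simple convolutional encoders are assumptions for illustration, and the branches are simplified stand-ins for EEGNet rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class BranchEncoder(nn.Module):
    """Small convolutional encoder producing a fixed-size embedding (stand-in for EEGNet)."""
    def __init__(self, in_channels=1, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=(1, 7), padding=(0, 3)),
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AdaptiveAvgPool2d((4, 4)),   # collapse to a fixed spatial size
            nn.Flatten(),
            nn.Linear(16 * 4 * 4, embed_dim),
        )

    def forward(self, x):
        return self.net(x)

class DualBranchEEGClassifier(nn.Module):
    """Joint-feature model: one branch for the raw spatio-temporal EEG signal,
    one branch for its time-frequency representation (e.g., a spectrogram)."""
    def __init__(self, num_classes=80, embed_dim=128):
        super().__init__()
        self.temporal_branch = BranchEncoder(in_channels=1, embed_dim=embed_dim)
        self.spectral_branch = BranchEncoder(in_channels=1, embed_dim=embed_dim)
        self.classifier = nn.Linear(2 * embed_dim, num_classes)

    def forward(self, eeg_raw, eeg_tf):
        # eeg_raw: [batch, 1, electrodes, time]; eeg_tf: [batch, 1, freq_bins, frames]
        z_time = self.temporal_branch(eeg_raw)
        z_freq = self.spectral_branch(eeg_tf)
        joint = torch.cat([z_time, z_freq], dim=1)  # joint feature vector
        return self.classifier(joint)

# Usage with made-up dimensions (128 electrodes, 440 samples; 64 frequency bins, 55 frames):
# model = DualBranchEEGClassifier()
# logits = model(torch.randn(2, 1, 128, 440), torch.randn(2, 1, 64, 55))  # -> [2, 80]
```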