Paper ID | HLT-11.2 |
Paper Title | BOOSTING LOW-RESOURCE INTENT DETECTION WITH IN-SCOPE PROTOTYPICAL NETWORKS |
Authors | Hongzhan Lin, Yuanmeng Yan, Guang Chen, Beijing University of Posts and Telecommunications, China |
Session | HLT-11: Language Understanding 3: Speech Understanding - General Topics |
Location | Gather.Town |
Session Time | Thursday, 10 June, 13:00 - 13:45 |
Presentation Time | Thursday, 10 June, 13:00 - 13:45 |
Presentation | Poster |
Topic | Human Language Technology: [HLT-UNDE] Spoken Language Understanding and Computational Semantics |
Abstract | Identifying user intentions can help improve the response quality of task-oriented dialogue systems. Using only limited labeled in-domain (ID) examples for both zero-shot unknown intent detection and few-shot ID classification is a particularly challenging task in spoken language understanding. Existing methods rely heavily on multi-domain datasets containing large-scale independent source domains for meta-training. In this paper, we propose universal In-scope Prototypical Networks for low-resource intent detection that generalize to dialogue meta-training datasets lacking widely varying domains, constructing meta-tasks dynamically from the scope of episodic intent classes. We also introduce a loss with a margin principle to better separate in-scope samples from unknown ones. Experiments on two benchmark datasets show that our model consistently outperforms other baselines on zero-shot unknown intent detection without sacrificing competitive performance on few-shot ID classification. |
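The abstract's two core ingredients, class prototypes built from the in-scope intents of each episode and a margin threshold for flagging unknown intents, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, the use of squared Euclidean distance, and the fixed `margin` value are all assumptions.

```python
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    """Mean embedding per in-scope intent class (the class "prototype")."""
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def detect(query_emb, protos, margin):
    """Classify queries by nearest prototype; flag distant queries as unknown.

    Returns the predicted in-scope class index, or -1 for unknown intent.
    """
    # Negative squared Euclidean distance serves as a similarity score.
    dist = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    scores = -dist
    preds = scores.argmax(axis=1)
    # Margin principle (assumed form): a query whose best score falls
    # below -margin lies outside the scope of every prototype.
    unknown = scores.max(axis=1) < -margin
    return np.where(unknown, -1, preds)
```

In a few-shot episode, `prototypes` would be computed from the labeled support set, and `detect` applied to queries handles both tasks at once: few-shot ID classification via the nearest prototype and zero-shot unknown intent detection via the margin test.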