Paper ID | MLSP-34.2
Paper Title | MULTI-TASK LEARNING VIA SHARING INEXACT LOW-RANK SUBSPACE
Authors | Xiaoqian Wang, Purdue University, United States; Feiping Nie, University of Texas at Arlington, United States
Session | MLSP-34: Subspace Learning and Applications
Location | Gather.Town
Session Time | Thursday, 10 June, 15:30 - 16:15
Presentation Time | Thursday, 10 June, 15:30 - 16:15
Presentation | Poster
Topic | Machine Learning for Signal Processing: [MLR-SBML] Subspace and manifold learning
Abstract | Multi-task learning algorithms enhance learning performance by exploiting the relations among multiple tasks. By pooling data from different yet related tasks, each task can benefit from the others in this joint learning mechanism. In this paper, we study the relations among multiple tasks by learning their shared common subspace. Previous works usually constrain the shared subspace to be low-rank, since the tasks are assumed to be intrinsically related. However, this constraint is too strict for real applications where noise exists. Instead, we propose to learn an inexact low-rank subspace, i.e., an approximation of the low-rank subspace, which makes the learned multi-task parameter matrix more robust in the presence of noise. We optimize the new objective with an alternating optimization algorithm whose time complexity matches that of single-task learning. We provide extensive empirical results on both synthetic and benchmark datasets to illustrate the superiority of our method over related multi-task learning methods. Our method is notably robust under high proportions of noise, and it performs especially well when few training data are available. This matters in practical use, especially when acquiring more data is arduous.
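To make the idea concrete, the following is a minimal NumPy sketch of multi-task learning with a shared subspace and alternating optimization. The objective here is an assumption for illustration only, not the paper's exact formulation: each task solves a regularized least squares whose penalty discourages the task weights from leaving a shared k-dimensional subspace F, and F is then refit from the current stacked parameter matrix W via SVD. The function name, the specific penalty gamma*||(I - F F^T) w_t||^2, and all parameter values are hypothetical.

```python
import numpy as np

def mtl_inexact_lowrank(Xs, ys, k=2, gamma=1.0, n_iter=20, seed=0):
    """Illustrative alternating optimization for shared-subspace
    multi-task learning (hypothetical objective, not the paper's).

    Xs : list of (n_t, d) feature matrices, one per task
    ys : list of (n_t,) target vectors, one per task
    k  : dimension of the shared subspace
    """
    d = Xs[0].shape[1]
    T = len(Xs)
    rng = np.random.default_rng(seed)
    # Random orthonormal initialization of the shared subspace basis.
    F = np.linalg.qr(rng.standard_normal((d, k)))[0]
    W = np.zeros((d, T))
    for _ in range(n_iter):
        # Projector onto the orthogonal complement of the subspace:
        # the penalty gamma * ||(I - F F^T) w_t||^2 pulls each task's
        # weights toward span(F) without forcing W to be exactly low-rank.
        P = np.eye(d) - F @ F.T
        for t in range(T):
            # Each task reduces to a d x d linear solve, so the
            # per-iteration cost matches single-task ridge regression.
            A = Xs[t].T @ Xs[t] + gamma * P
            W[:, t] = np.linalg.solve(A, Xs[t].T @ ys[t])
        # Refit the shared subspace as the top-k left singular vectors of W.
        U, _, _ = np.linalg.svd(W, full_matrices=False)
        F = U[:, :k]
    return W, F
```

Because the subspace constraint enters only as a penalty, the learned W can deviate from exact low rank when the data demand it, which is the intuition behind robustness to noise described in the abstract.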