Paper ID | MLSP-22.3
Paper Title | A UNIFIED APPROACH TO TRANSLATE CLASSICAL BANDIT ALGORITHMS TO STRUCTURED BANDITS
Authors | Samarth Gupta, Shreyas Chaudhari, Carnegie Mellon University, United States; Subhojyoti Mukherjee, University of Wisconsin-Madison, United States; Gauri Joshi, Osman Yagan, Carnegie Mellon University, United States
Session | MLSP-22: Sequential Learning
Location | Gather.Town
Session Time | Wednesday, 09 June, 15:30 - 16:15
Presentation Time | Wednesday, 09 June, 15:30 - 16:15
Presentation | Poster
Topic | Machine Learning for Signal Processing: [MLR-SLER] Sequential learning; sequential decision methods
Abstract | We consider a finite-armed structured bandit problem in which the mean rewards of the arms are known functions of a common hidden parameter $\theta^*$. This setting subsumes several previously studied frameworks that assume linear or invertible reward functions. We propose a novel approach that gradually estimates the hidden $\theta^*$ and uses the estimate, together with the known mean reward functions, to substantially reduce exploration of sub-optimal arms. This approach enables us to generalize any classical bandit algorithm, including UCB and Thompson Sampling, to the structured bandit setting. We prove via regret analysis that our proposed UCB-C and TS-C algorithms (structured bandit versions of UCB and Thompson Sampling, respectively) pull only a subset of the sub-optimal arms, referred to as competitive arms, O(log T) times, while the remaining sub-optimal arms (the non-competitive arms) are pulled only O(1) times. As a result, when all sub-optimal arms are non-competitive, which can happen in many practical scenarios, the proposed algorithms achieve bounded regret.
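The idea in the abstract can be sketched in code: maintain a confidence set for the hidden parameter, keep only the "competitive" arms that are optimal for some plausible parameter value, and run a classical algorithm (here UCB) restricted to that set. The sketch below is illustrative only, not the paper's exact UCB-C algorithm: it assumes a scalar $\theta^* \in [0,1]$, Gaussian rewards, grid-based parameter estimation from the most-sampled arm, and made-up reward functions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the paper): 3 arms whose mean rewards are
# known functions of a hidden scalar theta* in [0, 1].
theta_star = 0.7
mu_fns = [lambda th: th, lambda th: 1.0 - th, lambda th: 0.5 * th + 0.1]
K = len(mu_fns)
GRID = np.linspace(0.0, 1.0, 201)  # search grid for theta (assumption)

def pull(a):
    # Gaussian reward noise around the true mean (assumption)
    return mu_fns[a](theta_star) + 0.1 * rng.standard_normal()

def ucb_c_sketch(T):
    counts = np.zeros(K)
    sums = np.zeros(K)
    for a in range(K):                     # initialise: pull each arm once
        sums[a] += pull(a); counts[a] += 1
    for t in range(K, T):
        # 1) Confidence set for theta: grid points consistent with the
        #    empirical mean of the most-sampled arm (a simplification).
        k = int(np.argmax(counts))
        emp = sums[k] / counts[k]
        vals = np.array([mu_fns[k](g) for g in GRID])
        conf = np.sqrt(2.0 * np.log(t + 1) / counts[k])
        plausible = GRID[np.abs(vals - emp) <= conf]
        if plausible.size == 0:
            plausible = GRID
        # 2) Competitive set: arms optimal for SOME plausible theta.
        competitive = {int(np.argmax([f(g) for f in mu_fns]))
                       for g in plausible}
        # 3) Classical UCB, restricted to the competitive set.
        ucb = sums / counts + np.sqrt(2.0 * np.log(t + 1) / counts)
        a = max(competitive, key=lambda i: ucb[i])
        sums[a] += pull(a); counts[a] += 1
    return counts

counts = ucb_c_sketch(2000)
print(counts)  # arm 0 is optimal at theta* = 0.7 and should dominate
```

In this toy instance, arm 2 is never optimal for any value of theta, so it is non-competitive and is pulled only during initialization, illustrating the O(1) pulls of non-competitive arms described in the abstract.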