
On Penalization in Stochastic Multi-armed Bandits

  • Additional Information
    • Publication Information:
      Preprint
      Institute of Electrical and Electronics Engineers (IEEE), 2025.
    • Publication Date:
      2025
    • Abstract:
      We study an important variant of the stochastic multi-armed bandit (MAB) problem that takes penalization into consideration. Instead of directly maximizing the cumulative expected reward, we must balance the total reward against a fairness level. In this paper, we present new insights into the MAB problem and formulate it in a penalization framework, in which a rigorous penalized regret can be defined and a more sophisticated regret analysis is possible. Under this framework, we propose a hard-threshold UCB-like algorithm that enjoys several merits, including asymptotic fairness, nearly optimal regret, and a better trade-off between reward and fairness. Both gap-dependent and gap-independent regret bounds are established. Multiple insightful remarks illustrate the soundness of our theoretical analysis, and extensive experimental results corroborate the theory and show the superiority of our method over existing methods.
    • ISSN:
      1557-9654
      0018-9448
    • DOI:
      10.1109/tit.2025.3525666
      10.48550/arxiv.2211.08311
    • Rights:
      IEEE Copyright
      CC BY
    • Accession Number:
      edsair.doi.dedup.....ee3362ea615a8df867d532364507bddf
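The abstract describes a hard-threshold UCB-like algorithm that trades cumulative reward against a fairness level. The record does not contain the algorithm itself, so the following is only a generic illustrative sketch of a standard UCB1 rule augmented with a hypothetical hard fairness threshold; the `min_fraction` quota, the `arms` interface, and all names here are assumptions for illustration, not the paper's method:

```python
import math
import random

def hard_threshold_ucb(arms, horizon, min_fraction=0.05, seed=0):
    """Illustrative UCB1 variant with a hard fairness threshold.

    arms: list of callables, each taking an RNG and returning a
    stochastic reward in [0, 1].
    min_fraction: hypothetical fairness constraint -- every arm must
    receive at least this fraction of the pulls made so far.
    """
    rng = random.Random(seed)
    k = len(arms)
    counts = [0] * k
    sums = [0.0] * k
    total_reward = 0.0

    for t in range(1, horizon + 1):
        # Fairness pass: if any arm has fallen behind its quota,
        # pull the most-starved such arm (hard threshold).
        behind = [i for i in range(k) if counts[i] < min_fraction * t]
        if behind:
            i = min(behind, key=lambda j: counts[j])
        elif 0 in counts:
            # Cold start: pull each arm once before computing indices.
            i = counts.index(0)
        else:
            # Standard UCB1 index: empirical mean + exploration bonus.
            i = max(range(k), key=lambda j: sums[j] / counts[j]
                    + math.sqrt(2 * math.log(t) / counts[j]))
        r = arms[i](rng)
        counts[i] += 1
        sums[i] += r
        total_reward += r
    return total_reward, counts
```

Raising `min_fraction` enforces a stricter fairness level at the cost of cumulative reward, which mirrors the reward-versus-fairness trade-off the abstract discusses (the penalized-regret formulation in the paper makes that trade-off precise; this sketch only hard-codes it as a pull quota).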