Pessimistic Off-Policy Optimization for Learning to Rank
Date
2024-10-21
Authors
Čief, Matej
Kompan, Michal
Publisher
IOS Press
Abstract
Off-policy learning is a framework for optimizing policies without deploying them, using data collected by another policy. In recommender systems, this is especially challenging due to the imbalance in logged data: some items are recommended, and thus logged, more frequently than others. The problem is further compounded when recommending a list of items, because the action space becomes combinatorial. To address this challenge, we study pessimistic off-policy optimization for learning to rank. The key idea is to compute lower confidence bounds on the parameters of click models and then return the list with the highest pessimistic estimate of its value. This approach is computationally efficient, and we analyze it. We study its Bayesian and frequentist variants and overcome the limitation of an unknown prior by incorporating empirical Bayes. To show the empirical effectiveness of our approach, we compare it to off-policy optimizers that use inverse propensity scores or neglect uncertainty. Our approach outperforms all baselines and is both robust and general.
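To make the key idea concrete, below is a minimal sketch of the frequentist variant under a position-based click model with known, position-decreasing examination probabilities. The function names, the Hoeffding-style bound, and the greedy placement of items are illustrative assumptions for this sketch, not the paper's exact estimator or analysis.

```python
import numpy as np

def lcb_attraction(clicks, impressions, delta=0.05):
    # Hoeffding-style lower confidence bound on each item's attraction
    # probability: empirical click rate minus sqrt(log(1/delta) / (2 n)).
    n = np.maximum(impressions, 1)
    mean = clicks / n
    radius = np.sqrt(np.log(1.0 / delta) / (2.0 * n))
    return np.clip(mean - radius, 0.0, 1.0)

def pessimistic_list(clicks, impressions, exam_probs, K):
    # Under a position-based click model with known, decreasing examination
    # probabilities, the list maximizing the pessimistic value
    # sum_k exam_probs[k] * LCB(attraction of the item at position k)
    # places the items with the highest LCBs at the most examined positions.
    exam_probs = np.asarray(exam_probs, dtype=float)
    lcb = lcb_attraction(clicks, impressions)
    order = np.argsort(-lcb)[:K]
    value = float(np.dot(exam_probs[:K], lcb[order]))
    return order, value

# Toy usage: an item shown only 3 times gets a wide bound and is demoted,
# even though its empirical click rate looks high.
clicks = np.array([40, 5, 90, 2])
impressions = np.array([400, 20, 1000, 3])
print(pessimistic_list(clicks, impressions, exam_probs=[1.0, 0.6, 0.3], K=3))
```

Rarely logged items receive wide confidence intervals, so pessimism demotes them rather than trusting their noisy empirical click rates; the Bayesian variant mentioned in the abstract would instead use a posterior quantile of the click-model parameters, with the prior set by empirical Bayes.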
Citation
27th European Conference on Artificial Intelligence. 2024, p. 1896-1903.
https://ebooks.iospress.nl/volumearticle/69798
Document type
Peer-reviewed
Document version
Published version
Language of document
en
Document licence
Creative Commons Attribution-NonCommercial 4.0 International
http://creativecommons.org/licenses/by-nc/4.0/