A Simple yet Effective Framework for Active Learning to Rank
Qingzhong Wang, Haifang Li, Haoyi Xiong, Wen Wang, Jiang Bian, Yu Lu, Shuaiqiang Wang, Zhicong Cheng, Dejing Dou, Dawei Yin
Journal | Machine Intelligence Research
Year | 2024
Volume | 21
Issue | 1
Pages | 169-183
Keywords | Search, information retrieval, learning to rank, active learning, query by committee
ISSN | 2731-538X
DOI | 10.1007/s11633-023-1422-z |
Abstract | While China has become the largest online market in the world with approximately 1 billion internet users, Baidu runs the world's largest Chinese search engine, serving hundreds of millions of daily active users and responding to billions of queries per day. To handle diverse query requests from users at web scale, Baidu has made tremendous efforts in understanding users' queries, retrieving relevant content from a pool of trillions of webpages, and ranking the most relevant webpages at the top of the results. Among the components used in Baidu search, learning to rank (LTR) plays a critical role, and an extremely large number of queries together with relevant webpages must be labelled in a timely manner to train and update the online LTR models. To reduce the cost and time of query/webpage labelling, in this work we study the problem of active learning to rank (active LTR), which selects unlabeled queries for annotation and training. Specifically, we first investigate the criterion of ranking entropy (RE), which characterizes the entropy of relevant webpages under a query produced by a sequence of online LTR models updated at different checkpoints, using a query-by-committee (QBC) method. Then, we explore a new criterion, prediction variance (PV), which measures the variance of prediction results for all relevant webpages under a query. Our empirical studies find that RE tends to favor low-frequency queries from the pool for labelling, while PV prioritizes high-frequency queries. Finally, we combine these two complementary criteria into a sample selection strategy for active learning. Extensive experiments with comparisons to baseline algorithms show that the proposed approach trains LTR models that achieve higher discounted cumulative gain (i.e., a relative improvement of ΔDCG4 = 1.38%) with the same budgeted labelling effort.
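The two selection criteria named in the abstract can be illustrated with a minimal sketch. Assuming a committee of model checkpoints that each assign relevance scores to the candidate webpages of one query, RE can be approximated as the entropy of the committee's averaged ranking distribution and PV as the mean across documents of the score variance across committee members. The function names, the softmax normalization, and the array shapes below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def ranking_entropy(scores: np.ndarray) -> float:
    """RE proxy for one query.

    scores: array of shape (n_members, n_docs) holding each committee
    member's relevance scores for the query's candidate webpages.
    """
    # Turn each member's scores into a probability distribution over docs.
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)
    # Consensus distribution across the committee, then its entropy.
    mean_probs = probs.mean(axis=0)
    return float(-np.sum(mean_probs * np.log(mean_probs + 1e-12)))

def prediction_variance(scores: np.ndarray) -> float:
    """PV proxy: mean per-document variance of scores across members."""
    return float(scores.var(axis=0).mean())

# Two toy queries scored by a 2-member committee over 3 webpages:
agree = np.array([[2.0, 1.0, 0.0],
                  [2.0, 1.0, 0.0]])   # checkpoints agree
disagree = np.array([[2.0, 0.0, 1.0],
                     [0.0, 2.0, 1.0]])  # checkpoints disagree
```

Under this sketch, the disagreeing query yields both a higher RE and a higher PV than the agreeing one, so an active-learning loop would prefer it for annotation; how the two criteria are weighted when combined is left to the caller.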
Content type | Journal article
Source URL | [http://ir.ia.ac.cn/handle/173211/54581]
Collection | Institute of Automation_Academic Journals_International Journal of Automation and Computing
Affiliation | Baidu Incorporated, Beijing 100085, China
Recommended citation (GB/T 7714) | Qingzhong Wang, Haifang Li, Haoyi Xiong, Wen Wang, Jiang Bian, Yu Lu, Shuaiqiang Wang, Zhicong Cheng, Dejing Dou, Dawei Yin. A Simple yet Effective Framework for Active Learning to Rank[J]. Machine Intelligence Research, 2024, 21(1): 169-183.
APA | Qingzhong Wang, Haifang Li, Haoyi Xiong, Wen Wang, Jiang Bian, Yu Lu, Shuaiqiang Wang, Zhicong Cheng, Dejing Dou, Dawei Yin. (2024). A Simple yet Effective Framework for Active Learning to Rank. Machine Intelligence Research, 21(1), 169-183.
MLA | Qingzhong Wang, Haifang Li, Haoyi Xiong, Wen Wang, Jiang Bian, Yu Lu, Shuaiqiang Wang, Zhicong Cheng, Dejing Dou, Dawei Yin. "A Simple yet Effective Framework for Active Learning to Rank". Machine Intelligence Research 21.1 (2024): 169-183.