ListOPT: Learning to Optimize for XML Ranking
Gao Ning; Deng Zhi-Hong; Yu Hang; Jiang Jia-Jian
2011
Conference | 15th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2011
Conference Date | 24 May 2011
Conference Location | Shenzhen, China
Keywords | Adaptive boosting; Data mining; Information retrieval; Neural networks; XML
Pages | 482-492
Abstract | Many machine learning classification techniques, such as boosting, support vector machines, and neural networks, have been applied to the ranking problem in information retrieval. However, since these learning-to-rank methods aim to produce the ranked results directly from document features, they cannot incorporate existing ranking methods that have proven effective, such as BM25 and PageRank. To address this limitation, we study learning-to-optimize, which constructs a learning model or method for optimizing the free parameters of ranking functions. This paper proposes a listwise learning-to-optimize process, ListOPT, and introduces three alternative differentiable query-level loss functions. Experimental results on the Wikipedia English XML dataset show that these approaches can be successfully applied to tuning the parameters of an existing, highly cited ranking function, BM25. Furthermore, we found that the formulas with optimized parameters indeed improve effectiveness compared with the original ones. © 2011 Springer-Verlag.
Indexed By | EI
Proceedings | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Place of Publication | Germany
Language | English
ISSN | 0302-9743
ISBN | 9783642208461
Document Type | Conference Paper
Source URL | [http://124.16.136.157/handle/311060/14269]
Collection | Institute of Software_Institute of Software Library_Conference Papers
Recommended Citation (GB/T 7714) | Gao Ning, Deng Zhi-Hong, Yu Hang, et al. ListOPT: Learning to Optimize for XML Ranking[C]. In: 15th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2011. Shenzhen, China. 24 May 2011.
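Note | The abstract describes tuning the free parameters of BM25 by minimizing a differentiable, query-level (listwise) loss. The Python sketch below only illustrates that general idea; it is not the authors' ListOPT procedure nor any of their three loss functions. The ListNet-style cross-entropy loss, the synthetic single-term queries, and the finite-difference gradients are illustrative assumptions.

# Hypothetical sketch of "learning to optimize": tune BM25's free parameters
# (k1, b) by gradient descent on a listwise, query-level loss. Not the paper's
# ListOPT algorithm; loss, data, and gradient scheme are assumptions for illustration.
import numpy as np

def bm25_scores(tf, doc_len, avg_len, idf, k1, b):
    # BM25 score per document for a single-term query (vectorized over documents).
    norm = k1 * (1.0 - b + b * doc_len / avg_len)
    return idf * tf * (k1 + 1.0) / (tf + norm)

def listwise_loss(scores, labels):
    # ListNet-style top-one cross entropy between score and label distributions.
    p_s = np.exp(scores - scores.max()); p_s /= p_s.sum()
    p_l = np.exp(labels - labels.max()); p_l /= p_l.sum()
    return -np.sum(p_l * np.log(p_s + 1e-12))

def query_loss(params, queries):
    # Average listwise loss over all queries for the current (k1, b).
    k1, b = params
    return sum(listwise_loss(bm25_scores(q["tf"], q["len"], q["avg"], q["idf"], k1, b),
                             q["labels"]) for q in queries) / len(queries)

def tune(queries, k1=1.2, b=0.75, lr=0.05, steps=200, eps=1e-4):
    # Finite-difference gradient descent on the two free parameters.
    params = np.array([k1, b])
    for _ in range(steps):
        grad = np.zeros_like(params)
        for i in range(len(params)):
            up, down = params.copy(), params.copy()
            up[i] += eps; down[i] -= eps
            grad[i] = (query_loss(up, queries) - query_loss(down, queries)) / (2 * eps)
        params -= lr * grad
        params[0] = np.clip(params[0], 0.1, 10.0)  # keep k1 positive
        params[1] = np.clip(params[1], 0.0, 1.0)   # keep b in [0, 1]
    return params

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic per-query data: term frequencies, doc lengths, idf, graded labels.
    queries = [{"tf": rng.integers(0, 10, 8).astype(float),
                "len": rng.integers(50, 500, 8).astype(float),
                "avg": 250.0, "idf": 2.0,
                "labels": rng.integers(0, 3, 8).astype(float)}
               for _ in range(20)]
    k1, b = tune(queries)
    print(f"tuned k1={k1:.3f}, b={b:.3f}")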