Online Learning as Stochastic Approximation of Regularization Paths: Optimality and Almost-Sure Convergence
Tarres, Pierre ; Yao, Yuan
2014
Keywords: Online learning; stochastic approximations; regularization path; reproducing kernel Hilbert space; algorithms; regression; complexity
Abstract: In this paper, an online learning algorithm is proposed as a sequential stochastic approximation of a regularization path converging to the regression function in reproducing kernel Hilbert spaces (RKHSs). We show that it is possible to produce the best known strong (RKHS norm) convergence rate of batch learning through a careful choice of the gain or step-size sequences, depending on regularity assumptions on the regression function. The corresponding weak (mean square distance) convergence rate is optimal in the sense that it reaches the minimax and individual lower rates given in this paper. In both cases, we deduce almost sure convergence using Bernstein-type inequalities for martingales in Hilbert spaces. To achieve this, we develop a bias-variance decomposition similar to the batch learning setting: the bias consists of the approximation and drift errors along the regularization path, which display the same rates of convergence, and the variance arises from the sample error, analyzed as a (reverse) martingale difference sequence. The rates above are obtained by an optimal tradeoff between the bias and the variance.
Subject categories: Computer Science, Information Systems; Engineering, Electrical & Electronic
Document type: Article
Contact: tarres@maths.ox.ac.uk; yuany@math.pku.edu.cn
Volume 60, Issue 9, Pages 5716-5735
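The record reproduces only the abstract, but the algorithm it describes is a regularized stochastic-gradient recursion in an RKHS of the form f_{t+1} = f_t - gamma_t * ((f_t(x_t) - y_t) K_{x_t} + lambda_t f_t), where gamma_t is the gain (step size) and lambda_t the regularization parameter tracing the regularization path. The sketch below is a minimal illustration of one pass of such an update, assuming a Gaussian kernel and polynomial schedules gamma_t = a * t^{-theta} and lambda_t = b * t^{-(1-theta)}; the kernel, constants, and exponents are hypothetical stand-ins, since the paper's actual choices depend on the regularity assumptions on the regression function.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel; the bandwidth sigma is an illustrative choice."""
    x, y = np.atleast_1d(x), np.atleast_1d(y)
    return float(np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2)))

def online_rkhs_regression(samples, theta=2.0 / 3.0, a=1.0, b=1.0, sigma=1.0):
    """One pass of the update
        f_{t+1} = f_t - gamma_t * ((f_t(x_t) - y_t) K_{x_t} + lambda_t * f_t),
    keeping the iterate as a kernel expansion f_t = sum_i c_i K(x_i, .).
    The decay exponents here are illustrative, not the paper's tuned values.
    """
    centers, coeffs = [], []
    for t, (x, y) in enumerate(samples, start=1):
        gamma_t = a * t ** (-theta)           # gain (step size) sequence
        lambda_t = b * t ** (-(1.0 - theta))  # regularization along the path
        # Evaluate the current iterate f_t at the new input x.
        fx = sum(c * gaussian_kernel(xc, x, sigma)
                 for xc, c in zip(centers, coeffs))
        # The -gamma_t * lambda_t * f_t term shrinks every coefficient.
        coeffs = [(1.0 - gamma_t * lambda_t) * c for c in coeffs]
        # The loss-gradient term adds a new kernel center at x_t.
        centers.append(x)
        coeffs.append(-gamma_t * (fx - y))
    return centers, coeffs

if __name__ == "__main__":
    # Toy regression: noisy sine data, single online pass.
    rng = np.random.default_rng(0)
    xs = rng.uniform(-1.0, 1.0, size=200)
    data = [(x, np.sin(3.0 * x) + 0.1 * rng.standard_normal()) for x in xs]
    centers, coeffs = online_rkhs_regression(data, sigma=0.5)
    f0 = sum(c * gaussian_kernel(xc, 0.0, 0.5) for xc, c in zip(centers, coeffs))
    print(f"estimate at 0: {f0:.3f} (target {np.sin(0.0):.3f})")
```

Note that each step appends one kernel center, so a full pass costs on the order of T^2 kernel evaluations; practical implementations would truncate or sparsify the expansion, which is an engineering concern outside the paper's convergence analysis.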
Language: English
Indexed by: SCI; EI
Published in: IEEE Transactions on Information Theory
Content type: Other
Source URL: http://hdl.handle.net/20.500.11897/208967
Collection: School of Mathematical Sciences
Recommended citation (GB/T 7714):
Tarres, Pierre, Yao, Yuan. Online Learning as Stochastic Approximation of Regularization Paths: Optimality and Almost-Sure Convergence. 2014-01-01.